00:00:00.000 Started by upstream project "spdk-dpdk-per-patch" build number 295 00:00:00.000 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.089 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.090 The recommended git tool is: git 00:00:00.090 using credential 00000000-0000-0000-0000-000000000002 00:00:00.091 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.139 Fetching changes from the remote Git repository 00:00:00.141 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.197 Using shallow fetch with depth 1 00:00:00.197 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.197 > git --version # timeout=10 00:00:00.243 > git --version # 'git version 2.39.2' 00:00:00.243 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.276 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.276 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.133 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.146 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.158 Checking out Revision bc56972291bf21b4d2a602b495a165146a8d67a1 (FETCH_HEAD) 00:00:07.159 > git config core.sparsecheckout # timeout=10 00:00:07.171 > git read-tree -mu HEAD # timeout=10 00:00:07.188 > git checkout -f bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=5 00:00:07.210 Commit message: "jenkins/jjb-config: Remove extendedChoice from ipxe-test-images" 00:00:07.210 > git rev-list --no-walk bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=10 00:00:07.327 [Pipeline] Start of Pipeline 00:00:07.341 [Pipeline] library 00:00:07.342 Loading library shm_lib@master 00:00:07.342 Library shm_lib@master is cached. Copying from home. 00:00:07.356 [Pipeline] node 00:00:07.365 Running on CYP12 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:07.367 [Pipeline] { 00:00:07.379 [Pipeline] catchError 00:00:07.381 [Pipeline] { 00:00:07.394 [Pipeline] wrap 00:00:07.402 [Pipeline] { 00:00:07.411 [Pipeline] stage 00:00:07.413 [Pipeline] { (Prologue) 00:00:07.649 [Pipeline] sh 00:00:07.942 + logger -p user.info -t JENKINS-CI 00:00:07.966 [Pipeline] echo 00:00:07.968 Node: CYP12 00:00:07.974 [Pipeline] sh 00:00:08.282 [Pipeline] setCustomBuildProperty 00:00:08.298 [Pipeline] echo 00:00:08.300 Cleanup processes 00:00:08.309 [Pipeline] sh 00:00:08.604 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.604 1596253 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.620 [Pipeline] sh 00:00:08.913 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.913 ++ grep -v 'sudo pgrep' 00:00:08.913 ++ awk '{print $1}' 00:00:08.913 + sudo kill -9 00:00:08.913 + true 00:00:08.931 [Pipeline] cleanWs 00:00:08.943 [WS-CLEANUP] Deleting project workspace... 00:00:08.944 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.952 [WS-CLEANUP] done 00:00:08.957 [Pipeline] setCustomBuildProperty 00:00:08.974 [Pipeline] sh 00:00:09.263 + sudo git config --global --replace-all safe.directory '*' 00:00:09.374 [Pipeline] httpRequest 00:00:09.935 [Pipeline] echo 00:00:09.937 Sorcerer 10.211.164.101 is alive 00:00:09.948 [Pipeline] retry 00:00:09.950 [Pipeline] { 00:00:09.965 [Pipeline] httpRequest 00:00:09.970 HttpMethod: GET 00:00:09.971 URL: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:09.972 Sending request to url: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:09.990 Response Code: HTTP/1.1 200 OK 00:00:09.990 Success: Status code 200 is in the accepted range: 200,404 00:00:09.990 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:15.099 [Pipeline] } 00:00:15.116 [Pipeline] // retry 00:00:15.124 [Pipeline] sh 00:00:15.415 + tar --no-same-owner -xf jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:15.434 [Pipeline] httpRequest 00:00:15.832 [Pipeline] echo 00:00:15.834 Sorcerer 10.211.164.101 is alive 00:00:15.844 [Pipeline] retry 00:00:15.846 [Pipeline] { 00:00:15.862 [Pipeline] httpRequest 00:00:15.867 HttpMethod: GET 00:00:15.867 URL: http://10.211.164.101/packages/spdk_5031f0f3b908d6f77b11d1b459e5f8c49753fe3c.tar.gz 00:00:15.869 Sending request to url: http://10.211.164.101/packages/spdk_5031f0f3b908d6f77b11d1b459e5f8c49753fe3c.tar.gz 00:00:15.879 Response Code: HTTP/1.1 200 OK 00:00:15.879 Success: Status code 200 is in the accepted range: 200,404 00:00:15.880 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_5031f0f3b908d6f77b11d1b459e5f8c49753fe3c.tar.gz 00:00:59.842 [Pipeline] } 00:00:59.860 [Pipeline] // retry 00:00:59.867 [Pipeline] sh 00:01:00.159 + tar --no-same-owner -xf spdk_5031f0f3b908d6f77b11d1b459e5f8c49753fe3c.tar.gz 00:01:03.471 [Pipeline] sh 00:01:03.760 + git -C spdk log --oneline -n5 00:01:03.760 5031f0f3b module/raid: Assign bdev_io buffers to raid_io 00:01:03.760 dc3ea9d27 bdevperf: Allocate an md buffer for verify op 00:01:03.760 0ce363beb spdk_log: introduce spdk_log_ext API 00:01:03.760 412fced1b bdev/compress: unmap support. 
00:01:03.760 3791dfc65 nvme: rename spdk_nvme_ctrlr_aer_completion_list 00:01:03.774 [Pipeline] sh 00:01:04.060 + git -C spdk/dpdk fetch https://review.spdk.io/gerrit/spdk/dpdk refs/changes/86/24686/4 00:01:05.444 From https://review.spdk.io/gerrit/spdk/dpdk 00:01:05.444 * branch refs/changes/86/24686/4 -> FETCH_HEAD 00:01:05.457 [Pipeline] sh 00:01:05.745 + git -C spdk/dpdk checkout FETCH_HEAD 00:01:06.316 Previous HEAD position was 8d8db71763 eal/alarm_cancel: Fix thread starvation 00:01:06.316 HEAD is now at 7d6cfaf8d7 bus/pci: don't open uio device in secondary process 00:01:06.327 [Pipeline] } 00:01:06.341 [Pipeline] // stage 00:01:06.350 [Pipeline] stage 00:01:06.353 [Pipeline] { (Prepare) 00:01:06.372 [Pipeline] writeFile 00:01:06.387 [Pipeline] sh 00:01:06.676 + logger -p user.info -t JENKINS-CI 00:01:06.689 [Pipeline] sh 00:01:06.975 + logger -p user.info -t JENKINS-CI 00:01:06.988 [Pipeline] sh 00:01:07.275 + cat autorun-spdk.conf 00:01:07.275 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:07.275 SPDK_TEST_NVMF=1 00:01:07.275 SPDK_TEST_NVME_CLI=1 00:01:07.275 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:07.275 SPDK_TEST_NVMF_NICS=e810 00:01:07.275 SPDK_TEST_VFIOUSER=1 00:01:07.275 SPDK_RUN_UBSAN=1 00:01:07.275 NET_TYPE=phy 00:01:07.284 RUN_NIGHTLY= 00:01:07.290 [Pipeline] readFile 00:01:07.315 [Pipeline] withEnv 00:01:07.318 [Pipeline] { 00:01:07.330 [Pipeline] sh 00:01:07.621 + set -ex 00:01:07.621 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:07.621 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:07.621 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:07.621 ++ SPDK_TEST_NVMF=1 00:01:07.621 ++ SPDK_TEST_NVME_CLI=1 00:01:07.621 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:07.621 ++ SPDK_TEST_NVMF_NICS=e810 00:01:07.621 ++ SPDK_TEST_VFIOUSER=1 00:01:07.621 ++ SPDK_RUN_UBSAN=1 00:01:07.621 ++ NET_TYPE=phy 00:01:07.621 ++ RUN_NIGHTLY= 00:01:07.621 + case $SPDK_TEST_NVMF_NICS in 00:01:07.621 + DRIVERS=ice 00:01:07.621 + [[ tcp == \r\d\m\a ]] 00:01:07.621 + [[ -n ice ]] 00:01:07.621 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:07.621 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:07.621 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:07.621 rmmod: ERROR: Module irdma is not currently loaded 00:01:07.621 rmmod: ERROR: Module i40iw is not currently loaded 00:01:07.621 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:07.621 + true 00:01:07.621 + for D in $DRIVERS 00:01:07.621 + sudo modprobe ice 00:01:07.621 + exit 0 00:01:07.631 [Pipeline] } 00:01:07.646 [Pipeline] // withEnv 00:01:07.652 [Pipeline] } 00:01:07.665 [Pipeline] // stage 00:01:07.674 [Pipeline] catchError 00:01:07.676 [Pipeline] { 00:01:07.690 [Pipeline] timeout 00:01:07.690 Timeout set to expire in 1 hr 0 min 00:01:07.692 [Pipeline] { 00:01:07.706 [Pipeline] stage 00:01:07.708 [Pipeline] { (Tests) 00:01:07.722 [Pipeline] sh 00:01:08.011 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:08.011 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:08.011 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:08.011 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:08.012 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:08.012 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:08.012 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:08.012 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:08.012 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:08.012 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:08.012 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:08.012 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:08.012 + source /etc/os-release 00:01:08.012 ++ NAME='Fedora Linux' 00:01:08.012 ++ VERSION='39 (Cloud Edition)' 00:01:08.012 ++ ID=fedora 00:01:08.012 ++ VERSION_ID=39 00:01:08.012 ++ VERSION_CODENAME= 00:01:08.012 ++ PLATFORM_ID=platform:f39 00:01:08.012 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:08.012 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:08.012 ++ LOGO=fedora-logo-icon 00:01:08.012 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:08.012 ++ HOME_URL=https://fedoraproject.org/ 00:01:08.012 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:08.012 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:08.012 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:08.012 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:08.012 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:08.012 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:08.012 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:08.012 ++ SUPPORT_END=2024-11-12 00:01:08.012 ++ VARIANT='Cloud Edition' 00:01:08.012 ++ VARIANT_ID=cloud 00:01:08.012 + uname -a 00:01:08.012 Linux spdk-cyp-12 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:08.012 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:11.310 Hugepages 00:01:11.310 node hugesize free / total 00:01:11.310 node0 1048576kB 0 / 0 00:01:11.310 node0 2048kB 0 / 0 00:01:11.310 node1 1048576kB 0 / 0 00:01:11.311 node1 2048kB 0 / 0 00:01:11.311 00:01:11.311 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:11.311 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:01:11.311 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:01:11.311 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:01:11.311 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:01:11.311 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:01:11.311 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:01:11.311 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:01:11.311 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:01:11.311 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:01:11.311 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:01:11.311 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:01:11.311 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:01:11.311 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:01:11.311 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:01:11.311 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:01:11.311 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:01:11.311 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:01:11.311 + rm -f /tmp/spdk-ld-path 00:01:11.311 + source autorun-spdk.conf 00:01:11.311 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:11.311 ++ SPDK_TEST_NVMF=1 00:01:11.311 ++ SPDK_TEST_NVME_CLI=1 00:01:11.311 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:11.311 ++ SPDK_TEST_NVMF_NICS=e810 00:01:11.311 ++ SPDK_TEST_VFIOUSER=1 00:01:11.311 ++ SPDK_RUN_UBSAN=1 00:01:11.311 ++ NET_TYPE=phy 00:01:11.311 ++ RUN_NIGHTLY= 00:01:11.311 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:11.311 + [[ -n '' ]] 00:01:11.311 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:11.311 + for M in /var/spdk/build-*-manifest.txt 00:01:11.311 + [[ -f 
/var/spdk/build-kernel-manifest.txt ]] 00:01:11.311 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:11.311 + for M in /var/spdk/build-*-manifest.txt 00:01:11.311 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:11.311 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:11.311 + for M in /var/spdk/build-*-manifest.txt 00:01:11.311 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:11.311 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:11.311 ++ uname 00:01:11.311 + [[ Linux == \L\i\n\u\x ]] 00:01:11.311 + sudo dmesg -T 00:01:11.311 + sudo dmesg --clear 00:01:11.571 + dmesg_pid=1597308 00:01:11.571 + [[ Fedora Linux == FreeBSD ]] 00:01:11.571 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:11.571 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:11.571 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:11.571 + [[ -x /usr/src/fio-static/fio ]] 00:01:11.571 + export FIO_BIN=/usr/src/fio-static/fio 00:01:11.571 + FIO_BIN=/usr/src/fio-static/fio 00:01:11.571 + sudo dmesg -Tw 00:01:11.571 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:11.571 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:11.571 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:11.571 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:11.571 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:11.571 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:11.571 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:11.571 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:11.571 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:11.571 Test configuration: 00:01:11.571 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:11.571 SPDK_TEST_NVMF=1 00:01:11.571 SPDK_TEST_NVME_CLI=1 00:01:11.571 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:11.571 SPDK_TEST_NVMF_NICS=e810 00:01:11.571 SPDK_TEST_VFIOUSER=1 00:01:11.571 SPDK_RUN_UBSAN=1 00:01:11.571 NET_TYPE=phy 00:01:11.571 RUN_NIGHTLY= 11:38:14 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:11.571 11:38:14 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:11.571 11:38:14 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:11.571 11:38:14 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:11.571 11:38:14 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:11.571 11:38:14 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:11.571 11:38:14 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:11.571 11:38:14 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:01:11.571 11:38:14 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:11.571 11:38:14 -- paths/export.sh@5 -- $ export PATH 00:01:11.571 11:38:14 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:11.571 11:38:14 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:11.571 11:38:14 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:11.571 11:38:14 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728639494.XXXXXX 00:01:11.571 11:38:14 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728639494.WnzDiW 00:01:11.571 11:38:14 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:11.571 11:38:14 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:11.571 11:38:14 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:11.571 11:38:14 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:11.571 11:38:14 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:11.571 11:38:14 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:11.571 11:38:14 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:11.571 11:38:14 -- common/autotest_common.sh@10 -- $ set +x 00:01:11.571 11:38:14 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:11.571 11:38:14 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:11.571 11:38:14 -- pm/common@17 -- $ local monitor 00:01:11.571 11:38:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:11.571 11:38:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:11.571 11:38:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:11.571 11:38:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:11.571 11:38:14 -- pm/common@21 -- $ date +%s 00:01:11.571 11:38:14 -- pm/common@25 -- $ sleep 1 00:01:11.571 11:38:14 -- pm/common@21 -- $ date +%s 00:01:11.571 11:38:14 -- pm/common@21 -- $ date +%s 00:01:11.571 11:38:14 -- pm/common@21 -- $ date +%s 00:01:11.571 11:38:14 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p 
monitor.autobuild.sh.1728639494 00:01:11.571 11:38:14 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728639494 00:01:11.571 11:38:14 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728639494 00:01:11.571 11:38:14 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728639494 00:01:11.571 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728639494_collect-vmstat.pm.log 00:01:11.571 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728639494_collect-cpu-load.pm.log 00:01:11.571 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728639494_collect-cpu-temp.pm.log 00:01:11.571 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728639494_collect-bmc-pm.bmc.pm.log 00:01:12.515 11:38:15 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:12.515 11:38:15 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:12.515 11:38:15 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:12.515 11:38:15 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:12.515 11:38:15 -- spdk/autobuild.sh@16 -- $ date -u 00:01:12.515 Fri Oct 11 09:38:15 AM UTC 2024 00:01:12.515 11:38:15 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:12.515 v25.01-pre-54-g5031f0f3b 00:01:12.515 11:38:15 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:12.515 11:38:15 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:12.515 11:38:15 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:12.515 11:38:15 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:12.515 11:38:15 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:12.515 11:38:15 -- common/autotest_common.sh@10 -- $ set +x 00:01:12.776 ************************************ 00:01:12.776 START TEST ubsan 00:01:12.776 ************************************ 00:01:12.776 11:38:15 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:01:12.776 using ubsan 00:01:12.776 00:01:12.776 real 0m0.001s 00:01:12.776 user 0m0.001s 00:01:12.776 sys 0m0.000s 00:01:12.776 11:38:15 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:12.776 11:38:15 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:12.776 ************************************ 00:01:12.776 END TEST ubsan 00:01:12.776 ************************************ 00:01:12.776 11:38:15 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:12.776 11:38:15 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:12.776 11:38:15 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:12.776 11:38:15 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:12.776 11:38:15 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:12.776 11:38:15 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:12.776 11:38:15 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:12.776 11:38:15 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:12.776 11:38:15 -- spdk/autobuild.sh@67 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:12.776 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:12.776 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:13.347 Using 'verbs' RDMA provider 00:01:29.259 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:41.505 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:41.767 Creating mk/config.mk...done. 00:01:41.767 Creating mk/cc.flags.mk...done. 00:01:41.767 Type 'make' to build. 00:01:41.767 11:38:44 -- spdk/autobuild.sh@70 -- $ run_test make make -j144 00:01:41.767 11:38:44 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:41.767 11:38:44 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:41.767 11:38:44 -- common/autotest_common.sh@10 -- $ set +x 00:01:41.767 ************************************ 00:01:41.767 START TEST make 00:01:41.767 ************************************ 00:01:41.767 11:38:44 make -- common/autotest_common.sh@1125 -- $ make -j144 00:01:42.338 make[1]: Nothing to be done for 'all'. 00:01:43.720 The Meson build system 00:01:43.720 Version: 1.5.0 00:01:43.720 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:43.720 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:43.720 Build type: native build 00:01:43.720 Project name: libvfio-user 00:01:43.720 Project version: 0.0.1 00:01:43.720 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:43.720 C linker for the host machine: cc ld.bfd 2.40-14 00:01:43.720 Host machine cpu family: x86_64 00:01:43.720 Host machine cpu: x86_64 00:01:43.720 Run-time dependency threads found: YES 00:01:43.720 Library dl found: YES 00:01:43.720 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:43.720 Run-time dependency json-c found: YES 0.17 00:01:43.720 Run-time dependency cmocka found: YES 1.1.7 00:01:43.720 Program pytest-3 found: NO 00:01:43.720 Program flake8 found: NO 00:01:43.720 Program misspell-fixer found: NO 00:01:43.720 Program restructuredtext-lint found: NO 00:01:43.720 Program valgrind found: YES (/usr/bin/valgrind) 00:01:43.720 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:43.720 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:43.720 Compiler for C supports arguments -Wwrite-strings: YES 00:01:43.720 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:43.720 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:43.720 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:43.720 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:43.720 Build targets in project: 8 00:01:43.720 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:43.720 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:43.720 00:01:43.720 libvfio-user 0.0.1 00:01:43.720 00:01:43.720 User defined options 00:01:43.720 buildtype : debug 00:01:43.720 default_library: shared 00:01:43.720 libdir : /usr/local/lib 00:01:43.720 00:01:43.720 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:43.980 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:44.240 [1/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:44.240 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:44.240 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:44.240 [4/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:44.240 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:44.240 [6/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:44.240 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:44.240 [8/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:44.240 [9/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:44.240 [10/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:44.240 [11/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:44.240 [12/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:44.240 [13/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:44.240 [14/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:44.240 [15/37] Compiling C object samples/null.p/null.c.o 00:01:44.240 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:44.240 [17/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:44.240 [18/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:44.240 [19/37] Compiling C object samples/server.p/server.c.o 00:01:44.240 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:44.240 [21/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:44.240 [22/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:44.240 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:44.240 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:44.240 [25/37] Compiling C object samples/client.p/client.c.o 00:01:44.240 [26/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:44.240 [27/37] Linking target samples/client 00:01:44.240 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:44.502 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:01:44.502 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:44.502 [31/37] Linking target test/unit_tests 00:01:44.502 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:44.502 [33/37] Linking target samples/server 00:01:44.502 [34/37] Linking target samples/shadow_ioeventfd_server 00:01:44.502 [35/37] Linking target samples/gpio-pci-idio-16 00:01:44.502 [36/37] Linking target samples/null 00:01:44.502 [37/37] Linking target samples/lspci 00:01:44.502 INFO: autodetecting backend as ninja 00:01:44.502 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:01:44.763 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:45.024 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:45.024 ninja: no work to do. 00:01:51.729 The Meson build system 00:01:51.729 Version: 1.5.0 00:01:51.729 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:51.729 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:51.729 Build type: native build 00:01:51.729 Program cat found: YES (/usr/bin/cat) 00:01:51.729 Project name: DPDK 00:01:51.729 Project version: 24.07.0 00:01:51.729 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:51.729 C linker for the host machine: cc ld.bfd 2.40-14 00:01:51.729 Host machine cpu family: x86_64 00:01:51.729 Host machine cpu: x86_64 00:01:51.729 Message: ## Building in Developer Mode ## 00:01:51.729 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:51.729 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:51.729 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:51.729 Program python3 (elftools) found: YES (/usr/bin/python3) modules: elftools 00:01:51.729 Program cat found: YES (/usr/bin/cat) 00:01:51.729 Compiler for C supports arguments -march=native: YES 00:01:51.729 Checking for size of "void *" : 8 00:01:51.729 Checking for size of "void *" : 8 (cached) 00:01:51.729 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:01:51.729 Library m found: YES 00:01:51.729 Library numa found: YES 00:01:51.729 Has header "numaif.h" : YES 00:01:51.729 Library fdt found: NO 00:01:51.729 Library execinfo found: NO 00:01:51.729 Has header "execinfo.h" : YES 00:01:51.729 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:51.729 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:51.729 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:51.729 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:51.729 Run-time dependency openssl found: YES 3.1.1 00:01:51.729 Run-time dependency libpcap found: YES 1.10.4 00:01:51.729 Has header "pcap.h" with dependency libpcap: YES 00:01:51.729 Compiler for C supports arguments -Wcast-qual: YES 00:01:51.729 Compiler for C supports arguments -Wdeprecated: YES 00:01:51.729 Compiler for C supports arguments -Wformat: YES 00:01:51.729 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:51.729 Compiler for C supports arguments -Wformat-security: NO 00:01:51.729 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:51.729 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:51.729 Compiler for C supports arguments -Wnested-externs: YES 00:01:51.729 Compiler for C supports arguments -Wold-style-definition: YES 00:01:51.729 Compiler for C supports arguments -Wpointer-arith: YES 00:01:51.729 Compiler for C supports arguments -Wsign-compare: YES 00:01:51.729 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:51.729 Compiler for C supports arguments -Wundef: YES 00:01:51.729 Compiler for C supports arguments -Wwrite-strings: YES 00:01:51.729 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:51.729 Compiler for C supports 
arguments -Wno-packed-not-aligned: YES 00:01:51.729 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:51.729 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:51.729 Program objdump found: YES (/usr/bin/objdump) 00:01:51.729 Compiler for C supports arguments -mavx512f: YES 00:01:51.729 Checking if "AVX512 checking" compiles: YES 00:01:51.729 Fetching value of define "__SSE4_2__" : 1 00:01:51.729 Fetching value of define "__AES__" : 1 00:01:51.729 Fetching value of define "__AVX__" : 1 00:01:51.729 Fetching value of define "__AVX2__" : 1 00:01:51.729 Fetching value of define "__AVX512BW__" : 1 00:01:51.729 Fetching value of define "__AVX512CD__" : 1 00:01:51.729 Fetching value of define "__AVX512DQ__" : 1 00:01:51.729 Fetching value of define "__AVX512F__" : 1 00:01:51.729 Fetching value of define "__AVX512VL__" : 1 00:01:51.729 Fetching value of define "__PCLMUL__" : 1 00:01:51.729 Fetching value of define "__RDRND__" : 1 00:01:51.729 Fetching value of define "__RDSEED__" : 1 00:01:51.729 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:51.729 Fetching value of define "__znver1__" : (undefined) 00:01:51.729 Fetching value of define "__znver2__" : (undefined) 00:01:51.729 Fetching value of define "__znver3__" : (undefined) 00:01:51.729 Fetching value of define "__znver4__" : (undefined) 00:01:51.729 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:51.729 Message: lib/log: Defining dependency "log" 00:01:51.729 Message: lib/kvargs: Defining dependency "kvargs" 00:01:51.729 Message: lib/telemetry: Defining dependency "telemetry" 00:01:51.729 Checking for function "getentropy" : NO 00:01:51.729 Message: lib/eal: Defining dependency "eal" 00:01:51.729 Message: lib/ring: Defining dependency "ring" 00:01:51.729 Message: lib/rcu: Defining dependency "rcu" 00:01:51.729 Message: lib/mempool: Defining dependency "mempool" 00:01:51.729 Message: lib/mbuf: Defining dependency "mbuf" 00:01:51.729 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:51.729 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:51.729 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:51.729 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:51.729 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:51.729 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:51.729 Compiler for C supports arguments -mpclmul: YES 00:01:51.729 Compiler for C supports arguments -maes: YES 00:01:51.729 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:51.729 Compiler for C supports arguments -mavx512bw: YES 00:01:51.729 Compiler for C supports arguments -mavx512dq: YES 00:01:51.729 Compiler for C supports arguments -mavx512vl: YES 00:01:51.729 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:51.729 Compiler for C supports arguments -mavx2: YES 00:01:51.729 Compiler for C supports arguments -mavx: YES 00:01:51.729 Message: lib/net: Defining dependency "net" 00:01:51.729 Message: lib/meter: Defining dependency "meter" 00:01:51.729 Message: lib/ethdev: Defining dependency "ethdev" 00:01:51.729 Message: lib/pci: Defining dependency "pci" 00:01:51.729 Message: lib/cmdline: Defining dependency "cmdline" 00:01:51.729 Message: lib/hash: Defining dependency "hash" 00:01:51.729 Message: lib/timer: Defining dependency "timer" 00:01:51.729 Message: lib/compressdev: Defining dependency "compressdev" 00:01:51.729 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:51.729 Message: lib/dmadev: Defining dependency "dmadev" 
00:01:51.729 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:51.729 Message: lib/power: Defining dependency "power" 00:01:51.729 Message: lib/reorder: Defining dependency "reorder" 00:01:51.729 Message: lib/security: Defining dependency "security" 00:01:51.729 Has header "linux/userfaultfd.h" : YES 00:01:51.729 Has header "linux/vduse.h" : YES 00:01:51.729 Message: lib/vhost: Defining dependency "vhost" 00:01:51.729 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:51.729 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:51.729 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:51.729 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:51.729 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:51.729 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:51.729 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:51.729 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:51.729 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:51.729 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:51.729 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:51.729 Configuring doxy-api-html.conf using configuration 00:01:51.729 Configuring doxy-api-man.conf using configuration 00:01:51.729 Program mandb found: YES (/usr/bin/mandb) 00:01:51.729 Program sphinx-build found: NO 00:01:51.729 Configuring rte_build_config.h using configuration 00:01:51.729 Message: 00:01:51.729 ================= 00:01:51.729 Applications Enabled 00:01:51.729 ================= 00:01:51.729 00:01:51.729 apps: 00:01:51.729 00:01:51.729 00:01:51.729 Message: 00:01:51.729 ================= 00:01:51.729 Libraries Enabled 00:01:51.729 ================= 00:01:51.729 00:01:51.729 libs: 00:01:51.729 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:51.729 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:51.729 cryptodev, dmadev, power, reorder, security, vhost, 00:01:51.729 00:01:51.729 Message: 00:01:51.729 =============== 00:01:51.729 Drivers Enabled 00:01:51.729 =============== 00:01:51.729 00:01:51.729 common: 00:01:51.729 00:01:51.729 bus: 00:01:51.729 pci, vdev, 00:01:51.729 mempool: 00:01:51.729 ring, 00:01:51.729 dma: 00:01:51.729 00:01:51.729 net: 00:01:51.729 00:01:51.729 crypto: 00:01:51.729 00:01:51.729 compress: 00:01:51.729 00:01:51.729 vdpa: 00:01:51.729 00:01:51.729 00:01:51.730 Message: 00:01:51.730 ================= 00:01:51.730 Content Skipped 00:01:51.730 ================= 00:01:51.730 00:01:51.730 apps: 00:01:51.730 dumpcap: explicitly disabled via build config 00:01:51.730 graph: explicitly disabled via build config 00:01:51.730 pdump: explicitly disabled via build config 00:01:51.730 proc-info: explicitly disabled via build config 00:01:51.730 test-acl: explicitly disabled via build config 00:01:51.730 test-bbdev: explicitly disabled via build config 00:01:51.730 test-cmdline: explicitly disabled via build config 00:01:51.730 test-compress-perf: explicitly disabled via build config 00:01:51.730 test-crypto-perf: explicitly disabled via build config 00:01:51.730 test-dma-perf: explicitly disabled via build config 00:01:51.730 test-eventdev: explicitly disabled via build config 00:01:51.730 test-fib: explicitly disabled via build config 00:01:51.730 test-flow-perf: explicitly disabled via build config 00:01:51.730 test-gpudev: explicitly disabled 
via build config 00:01:51.730 test-mldev: explicitly disabled via build config 00:01:51.730 test-pipeline: explicitly disabled via build config 00:01:51.730 test-pmd: explicitly disabled via build config 00:01:51.730 test-regex: explicitly disabled via build config 00:01:51.730 test-sad: explicitly disabled via build config 00:01:51.730 test-security-perf: explicitly disabled via build config 00:01:51.730 00:01:51.730 libs: 00:01:51.730 argparse: explicitly disabled via build config 00:01:51.730 ptr_compress: explicitly disabled via build config 00:01:51.730 metrics: explicitly disabled via build config 00:01:51.730 acl: explicitly disabled via build config 00:01:51.730 bbdev: explicitly disabled via build config 00:01:51.730 bitratestats: explicitly disabled via build config 00:01:51.730 bpf: explicitly disabled via build config 00:01:51.730 cfgfile: explicitly disabled via build config 00:01:51.730 distributor: explicitly disabled via build config 00:01:51.730 efd: explicitly disabled via build config 00:01:51.730 eventdev: explicitly disabled via build config 00:01:51.730 dispatcher: explicitly disabled via build config 00:01:51.730 gpudev: explicitly disabled via build config 00:01:51.730 gro: explicitly disabled via build config 00:01:51.730 gso: explicitly disabled via build config 00:01:51.730 ip_frag: explicitly disabled via build config 00:01:51.730 jobstats: explicitly disabled via build config 00:01:51.730 latencystats: explicitly disabled via build config 00:01:51.730 lpm: explicitly disabled via build config 00:01:51.730 member: explicitly disabled via build config 00:01:51.730 pcapng: explicitly disabled via build config 00:01:51.730 rawdev: explicitly disabled via build config 00:01:51.730 regexdev: explicitly disabled via build config 00:01:51.730 mldev: explicitly disabled via build config 00:01:51.730 rib: explicitly disabled via build config 00:01:51.730 sched: explicitly disabled via build config 00:01:51.730 stack: explicitly disabled via build config 00:01:51.730 ipsec: explicitly disabled via build config 00:01:51.730 pdcp: explicitly disabled via build config 00:01:51.730 fib: explicitly disabled via build config 00:01:51.730 port: explicitly disabled via build config 00:01:51.730 pdump: explicitly disabled via build config 00:01:51.730 table: explicitly disabled via build config 00:01:51.730 pipeline: explicitly disabled via build config 00:01:51.730 graph: explicitly disabled via build config 00:01:51.730 node: explicitly disabled via build config 00:01:51.730 00:01:51.730 drivers: 00:01:51.730 common/cpt: not in enabled drivers build config 00:01:51.730 common/dpaax: not in enabled drivers build config 00:01:51.730 common/iavf: not in enabled drivers build config 00:01:51.730 common/idpf: not in enabled drivers build config 00:01:51.730 common/ionic: not in enabled drivers build config 00:01:51.730 common/mvep: not in enabled drivers build config 00:01:51.730 common/octeontx: not in enabled drivers build config 00:01:51.730 bus/auxiliary: not in enabled drivers build config 00:01:51.730 bus/cdx: not in enabled drivers build config 00:01:51.730 bus/dpaa: not in enabled drivers build config 00:01:51.730 bus/fslmc: not in enabled drivers build config 00:01:51.730 bus/ifpga: not in enabled drivers build config 00:01:51.730 bus/platform: not in enabled drivers build config 00:01:51.730 bus/uacce: not in enabled drivers build config 00:01:51.730 bus/vmbus: not in enabled drivers build config 00:01:51.730 common/cnxk: not in enabled drivers build config 00:01:51.730 
common/mlx5: not in enabled drivers build config 00:01:51.730 common/nfp: not in enabled drivers build config 00:01:51.730 common/nitrox: not in enabled drivers build config 00:01:51.730 common/qat: not in enabled drivers build config 00:01:51.730 common/sfc_efx: not in enabled drivers build config 00:01:51.730 mempool/bucket: not in enabled drivers build config 00:01:51.730 mempool/cnxk: not in enabled drivers build config 00:01:51.730 mempool/dpaa: not in enabled drivers build config 00:01:51.730 mempool/dpaa2: not in enabled drivers build config 00:01:51.730 mempool/octeontx: not in enabled drivers build config 00:01:51.730 mempool/stack: not in enabled drivers build config 00:01:51.730 dma/cnxk: not in enabled drivers build config 00:01:51.730 dma/dpaa: not in enabled drivers build config 00:01:51.730 dma/dpaa2: not in enabled drivers build config 00:01:51.730 dma/hisilicon: not in enabled drivers build config 00:01:51.730 dma/idxd: not in enabled drivers build config 00:01:51.730 dma/ioat: not in enabled drivers build config 00:01:51.730 dma/odm: not in enabled drivers build config 00:01:51.730 dma/skeleton: not in enabled drivers build config 00:01:51.730 net/af_packet: not in enabled drivers build config 00:01:51.730 net/af_xdp: not in enabled drivers build config 00:01:51.730 net/ark: not in enabled drivers build config 00:01:51.730 net/atlantic: not in enabled drivers build config 00:01:51.730 net/avp: not in enabled drivers build config 00:01:51.730 net/axgbe: not in enabled drivers build config 00:01:51.730 net/bnx2x: not in enabled drivers build config 00:01:51.730 net/bnxt: not in enabled drivers build config 00:01:51.730 net/bonding: not in enabled drivers build config 00:01:51.730 net/cnxk: not in enabled drivers build config 00:01:51.730 net/cpfl: not in enabled drivers build config 00:01:51.730 net/cxgbe: not in enabled drivers build config 00:01:51.730 net/dpaa: not in enabled drivers build config 00:01:51.730 net/dpaa2: not in enabled drivers build config 00:01:51.730 net/e1000: not in enabled drivers build config 00:01:51.730 net/ena: not in enabled drivers build config 00:01:51.730 net/enetc: not in enabled drivers build config 00:01:51.730 net/enetfec: not in enabled drivers build config 00:01:51.730 net/enic: not in enabled drivers build config 00:01:51.730 net/failsafe: not in enabled drivers build config 00:01:51.730 net/fm10k: not in enabled drivers build config 00:01:51.730 net/gve: not in enabled drivers build config 00:01:51.730 net/hinic: not in enabled drivers build config 00:01:51.730 net/hns3: not in enabled drivers build config 00:01:51.730 net/i40e: not in enabled drivers build config 00:01:51.730 net/iavf: not in enabled drivers build config 00:01:51.730 net/ice: not in enabled drivers build config 00:01:51.730 net/idpf: not in enabled drivers build config 00:01:51.730 net/igc: not in enabled drivers build config 00:01:51.730 net/ionic: not in enabled drivers build config 00:01:51.730 net/ipn3ke: not in enabled drivers build config 00:01:51.730 net/ixgbe: not in enabled drivers build config 00:01:51.730 net/mana: not in enabled drivers build config 00:01:51.730 net/memif: not in enabled drivers build config 00:01:51.730 net/mlx4: not in enabled drivers build config 00:01:51.730 net/mlx5: not in enabled drivers build config 00:01:51.730 net/mvneta: not in enabled drivers build config 00:01:51.730 net/mvpp2: not in enabled drivers build config 00:01:51.730 net/netvsc: not in enabled drivers build config 00:01:51.730 net/nfb: not in enabled drivers build 
config 00:01:51.730 net/nfp: not in enabled drivers build config 00:01:51.730 net/ngbe: not in enabled drivers build config 00:01:51.730 net/ntnic: not in enabled drivers build config 00:01:51.730 net/null: not in enabled drivers build config 00:01:51.730 net/octeontx: not in enabled drivers build config 00:01:51.730 net/octeon_ep: not in enabled drivers build config 00:01:51.730 net/pcap: not in enabled drivers build config 00:01:51.730 net/pfe: not in enabled drivers build config 00:01:51.730 net/qede: not in enabled drivers build config 00:01:51.730 net/ring: not in enabled drivers build config 00:01:51.730 net/sfc: not in enabled drivers build config 00:01:51.730 net/softnic: not in enabled drivers build config 00:01:51.730 net/tap: not in enabled drivers build config 00:01:51.730 net/thunderx: not in enabled drivers build config 00:01:51.730 net/txgbe: not in enabled drivers build config 00:01:51.730 net/vdev_netvsc: not in enabled drivers build config 00:01:51.730 net/vhost: not in enabled drivers build config 00:01:51.730 net/virtio: not in enabled drivers build config 00:01:51.730 net/vmxnet3: not in enabled drivers build config 00:01:51.730 raw/*: missing internal dependency, "rawdev" 00:01:51.730 crypto/armv8: not in enabled drivers build config 00:01:51.730 crypto/bcmfs: not in enabled drivers build config 00:01:51.730 crypto/caam_jr: not in enabled drivers build config 00:01:51.730 crypto/ccp: not in enabled drivers build config 00:01:51.730 crypto/cnxk: not in enabled drivers build config 00:01:51.730 crypto/dpaa_sec: not in enabled drivers build config 00:01:51.730 crypto/dpaa2_sec: not in enabled drivers build config 00:01:51.730 crypto/ionic: not in enabled drivers build config 00:01:51.730 crypto/ipsec_mb: not in enabled drivers build config 00:01:51.730 crypto/mlx5: not in enabled drivers build config 00:01:51.730 crypto/mvsam: not in enabled drivers build config 00:01:51.730 crypto/nitrox: not in enabled drivers build config 00:01:51.730 crypto/null: not in enabled drivers build config 00:01:51.730 crypto/octeontx: not in enabled drivers build config 00:01:51.730 crypto/openssl: not in enabled drivers build config 00:01:51.730 crypto/scheduler: not in enabled drivers build config 00:01:51.730 crypto/uadk: not in enabled drivers build config 00:01:51.730 crypto/virtio: not in enabled drivers build config 00:01:51.730 compress/isal: not in enabled drivers build config 00:01:51.730 compress/mlx5: not in enabled drivers build config 00:01:51.730 compress/nitrox: not in enabled drivers build config 00:01:51.730 compress/octeontx: not in enabled drivers build config 00:01:51.730 compress/uadk: not in enabled drivers build config 00:01:51.730 compress/zlib: not in enabled drivers build config 00:01:51.730 regex/*: missing internal dependency, "regexdev" 00:01:51.730 ml/*: missing internal dependency, "mldev" 00:01:51.730 vdpa/ifc: not in enabled drivers build config 00:01:51.730 vdpa/mlx5: not in enabled drivers build config 00:01:51.730 vdpa/nfp: not in enabled drivers build config 00:01:51.730 vdpa/sfc: not in enabled drivers build config 00:01:51.730 event/*: missing internal dependency, "eventdev" 00:01:51.730 baseband/*: missing internal dependency, "bbdev" 00:01:51.730 gpu/*: missing internal dependency, "gpudev" 00:01:51.730 00:01:51.730 00:01:51.730 Build targets in project: 84 00:01:51.730 00:01:51.730 DPDK 24.07.0 00:01:51.730 00:01:51.730 User defined options 00:01:51.730 buildtype : debug 00:01:51.731 default_library : shared 00:01:51.731 libdir : lib 00:01:51.731 
prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:51.731 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:51.731 c_link_args : 00:01:51.731 cpu_instruction_set: native 00:01:51.731 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:01:51.731 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump,ptr_compress 00:01:51.731 enable_docs : false 00:01:51.731 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:51.731 enable_kmods : false 00:01:51.731 max_lcores : 128 00:01:51.731 tests : false 00:01:51.731 00:01:51.731 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:51.731 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:51.731 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:51.731 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:51.731 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:51.731 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:51.731 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:51.731 [6/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:51.731 [7/268] Linking static target lib/librte_kvargs.a 00:01:51.731 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:51.731 [9/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:51.731 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:51.731 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:51.731 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:51.731 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:51.731 [14/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:51.731 [15/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:51.731 [16/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:51.731 [17/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:51.731 [18/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:51.731 [19/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:51.731 [20/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:51.731 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:51.731 [22/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:51.731 [23/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:51.731 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:51.731 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:51.731 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:51.731 [27/268] Linking static target lib/librte_log.a 00:01:51.731 [28/268] 
Linking static target lib/librte_pci.a 00:01:51.731 [29/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:51.731 [30/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:51.731 [31/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:51.731 [32/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:51.731 [33/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:51.731 [34/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:51.989 [35/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:51.989 [36/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:51.989 [37/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:51.989 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:51.989 [39/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.989 [40/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.989 [41/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:51.989 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:51.989 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:51.989 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:51.989 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:51.989 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:51.989 [47/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:51.989 [48/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:51.989 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:51.989 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:51.989 [51/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_mmu.c.o 00:01:51.989 [52/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:51.989 [53/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:51.989 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:51.989 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:51.989 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:51.989 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:51.989 [58/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:51.989 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:51.989 [60/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:51.989 [61/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:51.990 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:51.990 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:52.252 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:52.252 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:52.252 [66/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:52.252 [67/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:52.252 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:52.252 [69/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:52.252 [70/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:52.252 [71/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:52.252 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:52.252 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:52.252 [74/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:52.252 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:52.252 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:52.252 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:52.252 [78/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:52.252 [79/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:52.252 [80/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:52.252 [81/268] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:52.252 [82/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:52.252 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:52.252 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:52.252 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:52.252 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:52.252 [87/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:52.252 [88/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:52.252 [89/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:52.252 [90/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:52.252 [91/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:52.252 [92/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:52.252 [93/268] Linking static target lib/librte_meter.a 00:01:52.252 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:52.252 [95/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:52.252 [96/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:52.252 [97/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:52.252 [98/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:52.252 [99/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:52.252 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:52.252 [101/268] Linking static target lib/librte_telemetry.a 00:01:52.252 [102/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:52.252 [103/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:52.252 [104/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:52.252 [105/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:52.252 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:52.252 [107/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:52.252 [108/268] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:52.252 [109/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:52.252 [110/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:52.252 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:52.252 [112/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:52.252 [113/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:52.252 [114/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:52.252 [115/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:52.252 [116/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:52.252 [117/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:52.252 [118/268] Linking static target lib/librte_ring.a 00:01:52.252 [119/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:52.252 [120/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:52.252 [121/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:52.252 [122/268] Linking static target lib/librte_cmdline.a 00:01:52.252 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:52.252 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:52.252 [125/268] Linking static target lib/librte_timer.a 00:01:52.252 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:52.252 [127/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:52.252 [128/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:52.252 [129/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:52.252 [130/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:52.252 [131/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:52.252 [132/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:52.252 [133/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:52.252 [134/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:52.252 [135/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:52.252 [136/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:52.252 [137/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:52.252 [138/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:52.252 [139/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:52.252 [140/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:52.252 [141/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:52.252 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:52.252 [143/268] Linking static target lib/librte_compressdev.a 00:01:52.252 [144/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:52.252 [145/268] Linking static target lib/librte_dmadev.a 00:01:52.252 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:52.252 [147/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:52.252 [148/268] Linking static target lib/librte_net.a 00:01:52.252 [149/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:52.252 [150/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:52.252 [151/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:52.252 [152/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:52.252 [153/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.252 [154/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:52.252 [155/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:52.252 [156/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:52.252 [157/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:52.252 [158/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:52.252 [159/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:52.252 [160/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:52.252 [161/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:52.252 [162/268] Linking static target lib/librte_mempool.a 00:01:52.252 [163/268] Linking static target lib/librte_rcu.a 00:01:52.252 [164/268] Linking static target lib/librte_eal.a 00:01:52.252 [165/268] Linking static target lib/librte_power.a 00:01:52.252 [166/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:52.252 [167/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:52.252 [168/268] Linking target lib/librte_log.so.24.2 00:01:52.252 [169/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:52.252 [170/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:52.252 [171/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:52.252 [172/268] Linking static target lib/librte_reorder.a 00:01:52.252 [173/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:52.252 [174/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:52.252 [175/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:52.514 [176/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:52.514 [177/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:52.514 [178/268] Linking static target lib/librte_security.a 00:01:52.514 [179/268] Linking static target lib/librte_mbuf.a 00:01:52.514 [180/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:52.514 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:52.514 [182/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.514 [183/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:52.514 [184/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:52.514 [185/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:52.514 [186/268] Compiling C object drivers/librte_bus_vdev.so.24.2.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:52.514 [187/268] Linking static target drivers/librte_bus_vdev.a 00:01:52.514 [188/268] Generating symbol file lib/librte_log.so.24.2.p/librte_log.so.24.2.symbols 00:01:52.514 [189/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:52.514 [190/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 
00:01:52.514 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:52.514 [192/268] Linking static target lib/librte_hash.a 00:01:52.514 [193/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:52.514 [194/268] Linking target lib/librte_kvargs.so.24.2 00:01:52.514 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:52.514 [196/268] Compiling C object drivers/librte_mempool_ring.so.24.2.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:52.514 [197/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:52.514 [198/268] Linking static target drivers/librte_mempool_ring.a 00:01:52.514 [199/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.514 [200/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:52.776 [201/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.776 [202/268] Compiling C object drivers/librte_bus_pci.so.24.2.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:52.776 [203/268] Generating symbol file lib/librte_kvargs.so.24.2.p/librte_kvargs.so.24.2.symbols 00:01:52.776 [204/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:52.776 [205/268] Linking static target drivers/librte_bus_pci.a 00:01:52.776 [206/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:52.776 [207/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:52.776 [208/268] Linking static target lib/librte_cryptodev.a 00:01:52.776 [209/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.776 [210/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.776 [211/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.776 [212/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.776 [213/268] Linking target lib/librte_telemetry.so.24.2 00:01:52.776 [214/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.037 [215/268] Generating symbol file lib/librte_telemetry.so.24.2.p/librte_telemetry.so.24.2.symbols 00:01:53.037 [216/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.037 [217/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.037 [218/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.037 [219/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:53.298 [220/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:53.298 [221/268] Linking static target lib/librte_ethdev.a 00:01:53.298 [222/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.298 [223/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.561 [224/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.561 [225/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.561 [226/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.561 
[227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.822 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:53.822 [229/268] Linking static target lib/librte_vhost.a 00:01:55.208 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.791 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.934 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.934 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.934 [234/268] Linking target lib/librte_eal.so.24.2 00:02:04.195 [235/268] Generating symbol file lib/librte_eal.so.24.2.p/librte_eal.so.24.2.symbols 00:02:04.195 [236/268] Linking target lib/librte_ring.so.24.2 00:02:04.195 [237/268] Linking target lib/librte_meter.so.24.2 00:02:04.195 [238/268] Linking target lib/librte_timer.so.24.2 00:02:04.195 [239/268] Linking target lib/librte_pci.so.24.2 00:02:04.195 [240/268] Linking target lib/librte_dmadev.so.24.2 00:02:04.195 [241/268] Linking target drivers/librte_bus_vdev.so.24.2 00:02:04.195 [242/268] Generating symbol file lib/librte_pci.so.24.2.p/librte_pci.so.24.2.symbols 00:02:04.455 [243/268] Generating symbol file lib/librte_dmadev.so.24.2.p/librte_dmadev.so.24.2.symbols 00:02:04.455 [244/268] Generating symbol file lib/librte_ring.so.24.2.p/librte_ring.so.24.2.symbols 00:02:04.455 [245/268] Generating symbol file lib/librte_timer.so.24.2.p/librte_timer.so.24.2.symbols 00:02:04.455 [246/268] Generating symbol file lib/librte_meter.so.24.2.p/librte_meter.so.24.2.symbols 00:02:04.455 [247/268] Linking target drivers/librte_bus_pci.so.24.2 00:02:04.455 [248/268] Linking target lib/librte_rcu.so.24.2 00:02:04.455 [249/268] Linking target lib/librte_mempool.so.24.2 00:02:04.455 [250/268] Generating symbol file lib/librte_rcu.so.24.2.p/librte_rcu.so.24.2.symbols 00:02:04.455 [251/268] Generating symbol file lib/librte_mempool.so.24.2.p/librte_mempool.so.24.2.symbols 00:02:04.716 [252/268] Linking target drivers/librte_mempool_ring.so.24.2 00:02:04.716 [253/268] Linking target lib/librte_mbuf.so.24.2 00:02:04.716 [254/268] Generating symbol file lib/librte_mbuf.so.24.2.p/librte_mbuf.so.24.2.symbols 00:02:04.716 [255/268] Linking target lib/librte_compressdev.so.24.2 00:02:04.716 [256/268] Linking target lib/librte_reorder.so.24.2 00:02:04.716 [257/268] Linking target lib/librte_net.so.24.2 00:02:04.716 [258/268] Linking target lib/librte_cryptodev.so.24.2 00:02:04.976 [259/268] Generating symbol file lib/librte_net.so.24.2.p/librte_net.so.24.2.symbols 00:02:04.976 [260/268] Generating symbol file lib/librte_cryptodev.so.24.2.p/librte_cryptodev.so.24.2.symbols 00:02:04.977 [261/268] Linking target lib/librte_hash.so.24.2 00:02:04.977 [262/268] Linking target lib/librte_cmdline.so.24.2 00:02:04.977 [263/268] Linking target lib/librte_security.so.24.2 00:02:04.977 [264/268] Linking target lib/librte_ethdev.so.24.2 00:02:04.977 [265/268] Generating symbol file lib/librte_hash.so.24.2.p/librte_hash.so.24.2.symbols 00:02:05.237 [266/268] Generating symbol file lib/librte_ethdev.so.24.2.p/librte_ethdev.so.24.2.symbols 00:02:05.237 [267/268] Linking target lib/librte_power.so.24.2 00:02:05.237 [268/268] Linking target lib/librte_vhost.so.24.2 00:02:05.237 INFO: autodetecting backend as ninja 00:02:05.237 INFO: calculating backend command to run: 
/usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:08.540 CC lib/log/log.o 00:02:08.540 CC lib/log/log_flags.o 00:02:08.540 CC lib/log/log_deprecated.o 00:02:08.540 CC lib/ut/ut.o 00:02:08.540 CC lib/ut_mock/mock.o 00:02:08.802 LIB libspdk_log.a 00:02:08.802 LIB libspdk_ut_mock.a 00:02:08.802 LIB libspdk_ut.a 00:02:08.802 SO libspdk_ut_mock.so.6.0 00:02:08.802 SO libspdk_log.so.7.1 00:02:08.802 SO libspdk_ut.so.2.0 00:02:08.802 SYMLINK libspdk_ut_mock.so 00:02:08.802 SYMLINK libspdk_ut.so 00:02:08.802 SYMLINK libspdk_log.so 00:02:09.064 CC lib/ioat/ioat.o 00:02:09.325 CC lib/util/base64.o 00:02:09.325 CC lib/util/bit_array.o 00:02:09.325 CC lib/util/cpuset.o 00:02:09.325 CC lib/util/crc16.o 00:02:09.325 CC lib/util/crc32.o 00:02:09.325 CC lib/util/crc32c.o 00:02:09.325 CC lib/dma/dma.o 00:02:09.325 CXX lib/trace_parser/trace.o 00:02:09.325 CC lib/util/crc32_ieee.o 00:02:09.325 CC lib/util/crc64.o 00:02:09.325 CC lib/util/dif.o 00:02:09.325 CC lib/util/fd.o 00:02:09.325 CC lib/util/fd_group.o 00:02:09.325 CC lib/util/file.o 00:02:09.325 CC lib/util/hexlify.o 00:02:09.325 CC lib/util/math.o 00:02:09.325 CC lib/util/iov.o 00:02:09.325 CC lib/util/net.o 00:02:09.325 CC lib/util/pipe.o 00:02:09.325 CC lib/util/strerror_tls.o 00:02:09.325 CC lib/util/string.o 00:02:09.325 CC lib/util/uuid.o 00:02:09.325 CC lib/util/xor.o 00:02:09.325 CC lib/util/zipf.o 00:02:09.325 CC lib/util/md5.o 00:02:09.325 CC lib/vfio_user/host/vfio_user_pci.o 00:02:09.325 CC lib/vfio_user/host/vfio_user.o 00:02:09.325 LIB libspdk_dma.a 00:02:09.325 SO libspdk_dma.so.5.0 00:02:09.586 LIB libspdk_ioat.a 00:02:09.586 SYMLINK libspdk_dma.so 00:02:09.586 SO libspdk_ioat.so.7.0 00:02:09.586 SYMLINK libspdk_ioat.so 00:02:09.586 LIB libspdk_vfio_user.a 00:02:09.586 SO libspdk_vfio_user.so.5.0 00:02:09.586 LIB libspdk_util.a 00:02:09.848 SYMLINK libspdk_vfio_user.so 00:02:09.848 SO libspdk_util.so.10.0 00:02:09.848 SYMLINK libspdk_util.so 00:02:10.108 LIB libspdk_trace_parser.a 00:02:10.108 SO libspdk_trace_parser.so.6.0 00:02:10.108 SYMLINK libspdk_trace_parser.so 00:02:10.368 CC lib/json/json_parse.o 00:02:10.368 CC lib/json/json_util.o 00:02:10.368 CC lib/conf/conf.o 00:02:10.368 CC lib/json/json_write.o 00:02:10.368 CC lib/rdma_provider/common.o 00:02:10.368 CC lib/rdma_utils/rdma_utils.o 00:02:10.368 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:10.368 CC lib/idxd/idxd.o 00:02:10.368 CC lib/vmd/vmd.o 00:02:10.368 CC lib/idxd/idxd_user.o 00:02:10.368 CC lib/env_dpdk/env.o 00:02:10.368 CC lib/vmd/led.o 00:02:10.368 CC lib/idxd/idxd_kernel.o 00:02:10.368 CC lib/env_dpdk/memory.o 00:02:10.368 CC lib/env_dpdk/pci.o 00:02:10.368 CC lib/env_dpdk/init.o 00:02:10.368 CC lib/env_dpdk/threads.o 00:02:10.368 CC lib/env_dpdk/pci_ioat.o 00:02:10.368 CC lib/env_dpdk/pci_virtio.o 00:02:10.368 CC lib/env_dpdk/pci_vmd.o 00:02:10.368 CC lib/env_dpdk/pci_idxd.o 00:02:10.368 CC lib/env_dpdk/pci_event.o 00:02:10.369 CC lib/env_dpdk/sigbus_handler.o 00:02:10.369 CC lib/env_dpdk/pci_dpdk.o 00:02:10.369 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:10.369 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:10.629 LIB libspdk_rdma_provider.a 00:02:10.629 LIB libspdk_conf.a 00:02:10.629 SO libspdk_rdma_provider.so.6.0 00:02:10.629 SO libspdk_conf.so.6.0 00:02:10.629 LIB libspdk_rdma_utils.a 00:02:10.629 LIB libspdk_json.a 00:02:10.629 SYMLINK libspdk_rdma_provider.so 00:02:10.629 SO libspdk_rdma_utils.so.1.0 00:02:10.629 SYMLINK libspdk_conf.so 00:02:10.629 SO libspdk_json.so.6.0 00:02:10.629 SYMLINK 
libspdk_rdma_utils.so 00:02:10.629 SYMLINK libspdk_json.so 00:02:10.890 LIB libspdk_idxd.a 00:02:10.890 SO libspdk_idxd.so.12.1 00:02:10.890 LIB libspdk_vmd.a 00:02:10.890 SO libspdk_vmd.so.6.0 00:02:10.890 SYMLINK libspdk_idxd.so 00:02:10.890 SYMLINK libspdk_vmd.so 00:02:11.152 CC lib/jsonrpc/jsonrpc_server.o 00:02:11.152 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:11.152 CC lib/jsonrpc/jsonrpc_client.o 00:02:11.152 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:11.412 LIB libspdk_jsonrpc.a 00:02:11.413 SO libspdk_jsonrpc.so.6.0 00:02:11.413 SYMLINK libspdk_jsonrpc.so 00:02:11.413 LIB libspdk_env_dpdk.a 00:02:11.673 SO libspdk_env_dpdk.so.15.0 00:02:11.673 SYMLINK libspdk_env_dpdk.so 00:02:11.673 CC lib/rpc/rpc.o 00:02:11.934 LIB libspdk_rpc.a 00:02:11.934 SO libspdk_rpc.so.6.0 00:02:12.195 SYMLINK libspdk_rpc.so 00:02:12.457 CC lib/notify/notify.o 00:02:12.457 CC lib/notify/notify_rpc.o 00:02:12.457 CC lib/keyring/keyring.o 00:02:12.457 CC lib/keyring/keyring_rpc.o 00:02:12.457 CC lib/trace/trace.o 00:02:12.457 CC lib/trace/trace_flags.o 00:02:12.457 CC lib/trace/trace_rpc.o 00:02:12.719 LIB libspdk_notify.a 00:02:12.719 SO libspdk_notify.so.6.0 00:02:12.719 LIB libspdk_keyring.a 00:02:12.719 LIB libspdk_trace.a 00:02:12.719 SO libspdk_keyring.so.2.0 00:02:12.719 SYMLINK libspdk_notify.so 00:02:12.719 SO libspdk_trace.so.11.0 00:02:12.719 SYMLINK libspdk_keyring.so 00:02:12.981 SYMLINK libspdk_trace.so 00:02:13.242 CC lib/thread/thread.o 00:02:13.242 CC lib/thread/iobuf.o 00:02:13.242 CC lib/sock/sock.o 00:02:13.242 CC lib/sock/sock_rpc.o 00:02:13.503 LIB libspdk_sock.a 00:02:13.764 SO libspdk_sock.so.10.0 00:02:13.764 SYMLINK libspdk_sock.so 00:02:14.025 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:14.025 CC lib/nvme/nvme_ctrlr.o 00:02:14.025 CC lib/nvme/nvme_fabric.o 00:02:14.025 CC lib/nvme/nvme_ns_cmd.o 00:02:14.025 CC lib/nvme/nvme_ns.o 00:02:14.025 CC lib/nvme/nvme_pcie_common.o 00:02:14.025 CC lib/nvme/nvme_pcie.o 00:02:14.025 CC lib/nvme/nvme_qpair.o 00:02:14.025 CC lib/nvme/nvme.o 00:02:14.025 CC lib/nvme/nvme_quirks.o 00:02:14.025 CC lib/nvme/nvme_transport.o 00:02:14.025 CC lib/nvme/nvme_discovery.o 00:02:14.025 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:14.025 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:14.025 CC lib/nvme/nvme_tcp.o 00:02:14.025 CC lib/nvme/nvme_opal.o 00:02:14.025 CC lib/nvme/nvme_io_msg.o 00:02:14.025 CC lib/nvme/nvme_poll_group.o 00:02:14.025 CC lib/nvme/nvme_zns.o 00:02:14.025 CC lib/nvme/nvme_stubs.o 00:02:14.025 CC lib/nvme/nvme_auth.o 00:02:14.025 CC lib/nvme/nvme_cuse.o 00:02:14.025 CC lib/nvme/nvme_vfio_user.o 00:02:14.025 CC lib/nvme/nvme_rdma.o 00:02:14.598 LIB libspdk_thread.a 00:02:14.598 SO libspdk_thread.so.10.2 00:02:14.598 SYMLINK libspdk_thread.so 00:02:14.859 CC lib/vfu_tgt/tgt_endpoint.o 00:02:14.859 CC lib/vfu_tgt/tgt_rpc.o 00:02:14.859 CC lib/accel/accel.o 00:02:14.859 CC lib/accel/accel_rpc.o 00:02:14.859 CC lib/accel/accel_sw.o 00:02:14.859 CC lib/virtio/virtio.o 00:02:14.859 CC lib/init/json_config.o 00:02:14.859 CC lib/virtio/virtio_vhost_user.o 00:02:14.859 CC lib/virtio/virtio_vfio_user.o 00:02:14.859 CC lib/init/subsystem.o 00:02:14.859 CC lib/virtio/virtio_pci.o 00:02:14.859 CC lib/init/subsystem_rpc.o 00:02:14.859 CC lib/init/rpc.o 00:02:15.119 CC lib/blob/blobstore.o 00:02:15.119 CC lib/blob/request.o 00:02:15.119 CC lib/blob/zeroes.o 00:02:15.119 CC lib/blob/blob_bs_dev.o 00:02:15.119 CC lib/fsdev/fsdev.o 00:02:15.119 CC lib/fsdev/fsdev_io.o 00:02:15.119 CC lib/fsdev/fsdev_rpc.o 00:02:15.119 LIB libspdk_init.a 00:02:15.380 SO libspdk_init.so.6.0 
00:02:15.380 LIB libspdk_vfu_tgt.a 00:02:15.380 LIB libspdk_virtio.a 00:02:15.380 SO libspdk_vfu_tgt.so.3.0 00:02:15.380 SYMLINK libspdk_init.so 00:02:15.380 SO libspdk_virtio.so.7.0 00:02:15.380 SYMLINK libspdk_vfu_tgt.so 00:02:15.380 SYMLINK libspdk_virtio.so 00:02:15.641 LIB libspdk_fsdev.a 00:02:15.641 SO libspdk_fsdev.so.1.0 00:02:15.641 CC lib/event/app.o 00:02:15.641 CC lib/event/reactor.o 00:02:15.641 CC lib/event/log_rpc.o 00:02:15.641 CC lib/event/app_rpc.o 00:02:15.641 CC lib/event/scheduler_static.o 00:02:15.641 SYMLINK libspdk_fsdev.so 00:02:15.903 LIB libspdk_accel.a 00:02:15.903 SO libspdk_accel.so.16.0 00:02:16.164 LIB libspdk_nvme.a 00:02:16.164 SYMLINK libspdk_accel.so 00:02:16.164 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:16.164 LIB libspdk_event.a 00:02:16.164 SO libspdk_nvme.so.14.0 00:02:16.164 SO libspdk_event.so.15.0 00:02:16.164 SYMLINK libspdk_event.so 00:02:16.425 CC lib/bdev/bdev.o 00:02:16.425 CC lib/bdev/bdev_rpc.o 00:02:16.425 CC lib/bdev/bdev_zone.o 00:02:16.425 CC lib/bdev/part.o 00:02:16.425 CC lib/bdev/scsi_nvme.o 00:02:16.425 SYMLINK libspdk_nvme.so 00:02:16.686 LIB libspdk_fuse_dispatcher.a 00:02:16.686 SO libspdk_fuse_dispatcher.so.1.0 00:02:16.947 SYMLINK libspdk_fuse_dispatcher.so 00:02:17.519 LIB libspdk_blob.a 00:02:17.780 SO libspdk_blob.so.11.0 00:02:17.780 SYMLINK libspdk_blob.so 00:02:18.042 CC lib/blobfs/blobfs.o 00:02:18.042 CC lib/lvol/lvol.o 00:02:18.042 CC lib/blobfs/tree.o 00:02:18.987 LIB libspdk_bdev.a 00:02:18.987 SO libspdk_bdev.so.17.0 00:02:18.987 LIB libspdk_blobfs.a 00:02:18.987 SO libspdk_blobfs.so.10.0 00:02:18.987 SYMLINK libspdk_bdev.so 00:02:18.987 LIB libspdk_lvol.a 00:02:18.987 SYMLINK libspdk_blobfs.so 00:02:18.987 SO libspdk_lvol.so.10.0 00:02:18.987 SYMLINK libspdk_lvol.so 00:02:19.247 CC lib/nvmf/ctrlr.o 00:02:19.247 CC lib/nvmf/ctrlr_discovery.o 00:02:19.247 CC lib/scsi/dev.o 00:02:19.247 CC lib/nvmf/ctrlr_bdev.o 00:02:19.247 CC lib/nvmf/subsystem.o 00:02:19.247 CC lib/scsi/lun.o 00:02:19.247 CC lib/scsi/port.o 00:02:19.247 CC lib/nvmf/nvmf.o 00:02:19.247 CC lib/nvmf/transport.o 00:02:19.247 CC lib/scsi/scsi.o 00:02:19.247 CC lib/nvmf/nvmf_rpc.o 00:02:19.247 CC lib/scsi/scsi_bdev.o 00:02:19.247 CC lib/nvmf/tcp.o 00:02:19.247 CC lib/scsi/scsi_pr.o 00:02:19.247 CC lib/nvmf/stubs.o 00:02:19.247 CC lib/scsi/task.o 00:02:19.247 CC lib/scsi/scsi_rpc.o 00:02:19.247 CC lib/nvmf/mdns_server.o 00:02:19.247 CC lib/nbd/nbd.o 00:02:19.247 CC lib/nvmf/vfio_user.o 00:02:19.247 CC lib/ublk/ublk.o 00:02:19.247 CC lib/nvmf/auth.o 00:02:19.247 CC lib/nbd/nbd_rpc.o 00:02:19.247 CC lib/ublk/ublk_rpc.o 00:02:19.247 CC lib/nvmf/rdma.o 00:02:19.247 CC lib/ftl/ftl_core.o 00:02:19.247 CC lib/ftl/ftl_init.o 00:02:19.247 CC lib/ftl/ftl_layout.o 00:02:19.247 CC lib/ftl/ftl_debug.o 00:02:19.247 CC lib/ftl/ftl_io.o 00:02:19.247 CC lib/ftl/ftl_sb.o 00:02:19.247 CC lib/ftl/ftl_l2p.o 00:02:19.247 CC lib/ftl/ftl_l2p_flat.o 00:02:19.247 CC lib/ftl/ftl_nv_cache.o 00:02:19.247 CC lib/ftl/ftl_band.o 00:02:19.247 CC lib/ftl/ftl_band_ops.o 00:02:19.247 CC lib/ftl/ftl_writer.o 00:02:19.247 CC lib/ftl/ftl_rq.o 00:02:19.247 CC lib/ftl/ftl_reloc.o 00:02:19.247 CC lib/ftl/ftl_l2p_cache.o 00:02:19.247 CC lib/ftl/ftl_p2l.o 00:02:19.247 CC lib/ftl/ftl_p2l_log.o 00:02:19.247 CC lib/ftl/mngt/ftl_mngt.o 00:02:19.247 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:19.247 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:19.247 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:19.247 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:19.247 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:19.247 CC 
lib/ftl/mngt/ftl_mngt_ioch.o 00:02:19.247 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:19.247 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:19.247 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:19.247 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:19.247 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:19.247 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:19.247 CC lib/ftl/utils/ftl_conf.o 00:02:19.247 CC lib/ftl/utils/ftl_md.o 00:02:19.247 CC lib/ftl/utils/ftl_mempool.o 00:02:19.247 CC lib/ftl/utils/ftl_bitmap.o 00:02:19.247 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:19.247 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:19.247 CC lib/ftl/utils/ftl_property.o 00:02:19.247 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:19.247 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:19.247 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:19.247 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:19.248 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:19.248 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:19.248 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:19.248 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:19.248 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:19.248 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:19.248 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:19.248 CC lib/ftl/base/ftl_base_dev.o 00:02:19.506 CC lib/ftl/ftl_trace.o 00:02:19.506 CC lib/ftl/base/ftl_base_bdev.o 00:02:19.765 LIB libspdk_nbd.a 00:02:19.765 SO libspdk_nbd.so.7.0 00:02:20.028 LIB libspdk_scsi.a 00:02:20.028 SO libspdk_scsi.so.9.0 00:02:20.028 SYMLINK libspdk_nbd.so 00:02:20.028 LIB libspdk_ublk.a 00:02:20.028 SYMLINK libspdk_scsi.so 00:02:20.028 SO libspdk_ublk.so.3.0 00:02:20.028 SYMLINK libspdk_ublk.so 00:02:20.291 LIB libspdk_ftl.a 00:02:20.292 CC lib/iscsi/conn.o 00:02:20.292 CC lib/iscsi/init_grp.o 00:02:20.292 CC lib/iscsi/iscsi.o 00:02:20.292 CC lib/iscsi/param.o 00:02:20.292 CC lib/iscsi/portal_grp.o 00:02:20.292 CC lib/iscsi/tgt_node.o 00:02:20.292 CC lib/iscsi/iscsi_subsystem.o 00:02:20.292 CC lib/iscsi/task.o 00:02:20.292 CC lib/iscsi/iscsi_rpc.o 00:02:20.292 CC lib/vhost/vhost.o 00:02:20.292 CC lib/vhost/vhost_rpc.o 00:02:20.292 CC lib/vhost/vhost_scsi.o 00:02:20.292 CC lib/vhost/vhost_blk.o 00:02:20.292 CC lib/vhost/rte_vhost_user.o 00:02:20.552 SO libspdk_ftl.so.9.0 00:02:20.813 SYMLINK libspdk_ftl.so 00:02:21.385 LIB libspdk_nvmf.a 00:02:21.385 SO libspdk_nvmf.so.19.0 00:02:21.385 LIB libspdk_vhost.a 00:02:21.385 SO libspdk_vhost.so.8.0 00:02:21.645 SYMLINK libspdk_nvmf.so 00:02:21.645 SYMLINK libspdk_vhost.so 00:02:21.645 LIB libspdk_iscsi.a 00:02:21.645 SO libspdk_iscsi.so.8.0 00:02:21.905 SYMLINK libspdk_iscsi.so 00:02:22.531 CC module/env_dpdk/env_dpdk_rpc.o 00:02:22.531 CC module/vfu_device/vfu_virtio.o 00:02:22.531 CC module/vfu_device/vfu_virtio_blk.o 00:02:22.531 CC module/vfu_device/vfu_virtio_scsi.o 00:02:22.531 CC module/vfu_device/vfu_virtio_rpc.o 00:02:22.531 CC module/vfu_device/vfu_virtio_fs.o 00:02:22.531 CC module/accel/dsa/accel_dsa.o 00:02:22.531 CC module/blob/bdev/blob_bdev.o 00:02:22.531 CC module/accel/dsa/accel_dsa_rpc.o 00:02:22.531 CC module/accel/ioat/accel_ioat.o 00:02:22.531 CC module/scheduler/gscheduler/gscheduler.o 00:02:22.531 CC module/accel/ioat/accel_ioat_rpc.o 00:02:22.531 CC module/accel/error/accel_error.o 00:02:22.531 CC module/accel/error/accel_error_rpc.o 00:02:22.531 CC module/sock/posix/posix.o 00:02:22.531 LIB libspdk_env_dpdk_rpc.a 00:02:22.531 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:22.531 CC module/fsdev/aio/fsdev_aio.o 00:02:22.531 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:22.531 CC module/keyring/linux/keyring.o 00:02:22.531 CC 
module/fsdev/aio/linux_aio_mgr.o 00:02:22.531 CC module/keyring/linux/keyring_rpc.o 00:02:22.531 CC module/keyring/file/keyring.o 00:02:22.531 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:22.531 CC module/accel/iaa/accel_iaa.o 00:02:22.531 CC module/keyring/file/keyring_rpc.o 00:02:22.531 CC module/accel/iaa/accel_iaa_rpc.o 00:02:22.531 SO libspdk_env_dpdk_rpc.so.6.0 00:02:22.833 SYMLINK libspdk_env_dpdk_rpc.so 00:02:22.833 LIB libspdk_keyring_linux.a 00:02:22.833 LIB libspdk_scheduler_gscheduler.a 00:02:22.833 LIB libspdk_scheduler_dpdk_governor.a 00:02:22.833 LIB libspdk_keyring_file.a 00:02:22.833 LIB libspdk_accel_ioat.a 00:02:22.833 LIB libspdk_accel_error.a 00:02:22.833 SO libspdk_scheduler_gscheduler.so.4.0 00:02:22.833 SO libspdk_keyring_linux.so.1.0 00:02:22.833 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:22.833 SO libspdk_keyring_file.so.2.0 00:02:22.833 LIB libspdk_scheduler_dynamic.a 00:02:22.833 SO libspdk_accel_ioat.so.6.0 00:02:22.833 SO libspdk_accel_error.so.2.0 00:02:22.833 LIB libspdk_accel_iaa.a 00:02:22.833 LIB libspdk_blob_bdev.a 00:02:22.833 SYMLINK libspdk_scheduler_gscheduler.so 00:02:22.833 LIB libspdk_accel_dsa.a 00:02:22.833 SO libspdk_scheduler_dynamic.so.4.0 00:02:22.833 SYMLINK libspdk_keyring_file.so 00:02:22.833 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:22.833 SYMLINK libspdk_accel_ioat.so 00:02:22.833 SYMLINK libspdk_keyring_linux.so 00:02:22.833 SO libspdk_blob_bdev.so.11.0 00:02:22.833 SO libspdk_accel_iaa.so.3.0 00:02:22.833 SO libspdk_accel_dsa.so.5.0 00:02:22.833 SYMLINK libspdk_accel_error.so 00:02:23.095 SYMLINK libspdk_scheduler_dynamic.so 00:02:23.095 SYMLINK libspdk_blob_bdev.so 00:02:23.095 LIB libspdk_vfu_device.a 00:02:23.095 SYMLINK libspdk_accel_iaa.so 00:02:23.095 SYMLINK libspdk_accel_dsa.so 00:02:23.095 SO libspdk_vfu_device.so.3.0 00:02:23.095 SYMLINK libspdk_vfu_device.so 00:02:23.356 LIB libspdk_fsdev_aio.a 00:02:23.356 SO libspdk_fsdev_aio.so.1.0 00:02:23.356 LIB libspdk_sock_posix.a 00:02:23.356 SO libspdk_sock_posix.so.6.0 00:02:23.356 SYMLINK libspdk_fsdev_aio.so 00:02:23.356 SYMLINK libspdk_sock_posix.so 00:02:23.616 CC module/bdev/gpt/gpt.o 00:02:23.616 CC module/bdev/delay/vbdev_delay.o 00:02:23.616 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:23.616 CC module/bdev/gpt/vbdev_gpt.o 00:02:23.616 CC module/bdev/error/vbdev_error.o 00:02:23.616 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:23.616 CC module/blobfs/bdev/blobfs_bdev.o 00:02:23.616 CC module/bdev/error/vbdev_error_rpc.o 00:02:23.616 CC module/bdev/lvol/vbdev_lvol.o 00:02:23.616 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:23.616 CC module/bdev/malloc/bdev_malloc.o 00:02:23.616 CC module/bdev/ftl/bdev_ftl.o 00:02:23.616 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:23.616 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:23.616 CC module/bdev/raid/bdev_raid.o 00:02:23.616 CC module/bdev/split/vbdev_split.o 00:02:23.616 CC module/bdev/raid/bdev_raid_rpc.o 00:02:23.616 CC module/bdev/aio/bdev_aio.o 00:02:23.616 CC module/bdev/raid/bdev_raid_sb.o 00:02:23.616 CC module/bdev/split/vbdev_split_rpc.o 00:02:23.616 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:23.616 CC module/bdev/aio/bdev_aio_rpc.o 00:02:23.616 CC module/bdev/nvme/bdev_nvme.o 00:02:23.616 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:23.616 CC module/bdev/raid/raid0.o 00:02:23.616 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:23.616 CC module/bdev/passthru/vbdev_passthru.o 00:02:23.616 CC module/bdev/raid/raid1.o 00:02:23.616 CC module/bdev/null/bdev_null.o 00:02:23.616 CC 
module/bdev/nvme/nvme_rpc.o 00:02:23.616 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:23.616 CC module/bdev/raid/concat.o 00:02:23.616 CC module/bdev/iscsi/bdev_iscsi.o 00:02:23.616 CC module/bdev/null/bdev_null_rpc.o 00:02:23.616 CC module/bdev/nvme/bdev_mdns_client.o 00:02:23.616 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:23.616 CC module/bdev/nvme/vbdev_opal.o 00:02:23.616 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:23.616 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:23.616 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:23.616 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:23.616 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:23.877 LIB libspdk_blobfs_bdev.a 00:02:23.877 SO libspdk_blobfs_bdev.so.6.0 00:02:23.877 LIB libspdk_bdev_split.a 00:02:23.877 LIB libspdk_bdev_gpt.a 00:02:23.877 LIB libspdk_bdev_error.a 00:02:23.877 SO libspdk_bdev_split.so.6.0 00:02:23.877 SO libspdk_bdev_gpt.so.6.0 00:02:23.877 SYMLINK libspdk_blobfs_bdev.so 00:02:23.877 SO libspdk_bdev_error.so.6.0 00:02:23.877 LIB libspdk_bdev_ftl.a 00:02:23.877 LIB libspdk_bdev_null.a 00:02:23.877 SYMLINK libspdk_bdev_split.so 00:02:23.877 SO libspdk_bdev_null.so.6.0 00:02:23.877 LIB libspdk_bdev_passthru.a 00:02:23.877 LIB libspdk_bdev_delay.a 00:02:24.138 SO libspdk_bdev_ftl.so.6.0 00:02:24.138 SYMLINK libspdk_bdev_gpt.so 00:02:24.138 LIB libspdk_bdev_zone_block.a 00:02:24.138 SYMLINK libspdk_bdev_error.so 00:02:24.138 LIB libspdk_bdev_aio.a 00:02:24.138 LIB libspdk_bdev_malloc.a 00:02:24.138 SO libspdk_bdev_passthru.so.6.0 00:02:24.138 SO libspdk_bdev_delay.so.6.0 00:02:24.138 SYMLINK libspdk_bdev_null.so 00:02:24.138 LIB libspdk_bdev_iscsi.a 00:02:24.138 SO libspdk_bdev_zone_block.so.6.0 00:02:24.138 SO libspdk_bdev_aio.so.6.0 00:02:24.138 SO libspdk_bdev_malloc.so.6.0 00:02:24.138 SYMLINK libspdk_bdev_ftl.so 00:02:24.138 SO libspdk_bdev_iscsi.so.6.0 00:02:24.138 SYMLINK libspdk_bdev_delay.so 00:02:24.138 SYMLINK libspdk_bdev_passthru.so 00:02:24.138 LIB libspdk_bdev_lvol.a 00:02:24.138 SYMLINK libspdk_bdev_zone_block.so 00:02:24.138 SYMLINK libspdk_bdev_aio.so 00:02:24.138 SYMLINK libspdk_bdev_malloc.so 00:02:24.138 SO libspdk_bdev_lvol.so.6.0 00:02:24.138 SYMLINK libspdk_bdev_iscsi.so 00:02:24.138 LIB libspdk_bdev_virtio.a 00:02:24.138 SO libspdk_bdev_virtio.so.6.0 00:02:24.138 SYMLINK libspdk_bdev_lvol.so 00:02:24.399 SYMLINK libspdk_bdev_virtio.so 00:02:24.660 LIB libspdk_bdev_raid.a 00:02:24.660 SO libspdk_bdev_raid.so.6.0 00:02:24.660 SYMLINK libspdk_bdev_raid.so 00:02:25.602 LIB libspdk_bdev_nvme.a 00:02:25.864 SO libspdk_bdev_nvme.so.7.0 00:02:25.864 SYMLINK libspdk_bdev_nvme.so 00:02:26.809 CC module/event/subsystems/iobuf/iobuf.o 00:02:26.809 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:26.809 CC module/event/subsystems/vmd/vmd.o 00:02:26.809 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:26.809 CC module/event/subsystems/keyring/keyring.o 00:02:26.809 CC module/event/subsystems/sock/sock.o 00:02:26.809 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:26.809 CC module/event/subsystems/scheduler/scheduler.o 00:02:26.809 CC module/event/subsystems/fsdev/fsdev.o 00:02:26.809 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:26.809 LIB libspdk_event_fsdev.a 00:02:26.809 LIB libspdk_event_keyring.a 00:02:26.809 LIB libspdk_event_vhost_blk.a 00:02:26.809 LIB libspdk_event_vmd.a 00:02:26.809 LIB libspdk_event_sock.a 00:02:26.809 LIB libspdk_event_iobuf.a 00:02:26.809 LIB libspdk_event_vfu_tgt.a 00:02:26.809 LIB libspdk_event_scheduler.a 00:02:26.809 SO libspdk_event_fsdev.so.1.0 00:02:26.809 SO 
libspdk_event_keyring.so.1.0 00:02:26.809 SO libspdk_event_vhost_blk.so.3.0 00:02:26.809 SO libspdk_event_sock.so.5.0 00:02:26.809 SO libspdk_event_vmd.so.6.0 00:02:26.809 SO libspdk_event_vfu_tgt.so.3.0 00:02:26.809 SO libspdk_event_scheduler.so.4.0 00:02:26.809 SO libspdk_event_iobuf.so.3.0 00:02:26.809 SYMLINK libspdk_event_fsdev.so 00:02:26.809 SYMLINK libspdk_event_keyring.so 00:02:26.809 SYMLINK libspdk_event_vhost_blk.so 00:02:26.809 SYMLINK libspdk_event_sock.so 00:02:26.809 SYMLINK libspdk_event_vmd.so 00:02:26.809 SYMLINK libspdk_event_vfu_tgt.so 00:02:26.809 SYMLINK libspdk_event_scheduler.so 00:02:26.809 SYMLINK libspdk_event_iobuf.so 00:02:27.382 CC module/event/subsystems/accel/accel.o 00:02:27.382 LIB libspdk_event_accel.a 00:02:27.382 SO libspdk_event_accel.so.6.0 00:02:27.382 SYMLINK libspdk_event_accel.so 00:02:27.955 CC module/event/subsystems/bdev/bdev.o 00:02:27.955 LIB libspdk_event_bdev.a 00:02:27.955 SO libspdk_event_bdev.so.6.0 00:02:28.216 SYMLINK libspdk_event_bdev.so 00:02:28.477 CC module/event/subsystems/scsi/scsi.o 00:02:28.477 CC module/event/subsystems/nbd/nbd.o 00:02:28.477 CC module/event/subsystems/ublk/ublk.o 00:02:28.477 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:28.477 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:28.738 LIB libspdk_event_nbd.a 00:02:28.738 LIB libspdk_event_ublk.a 00:02:28.738 LIB libspdk_event_scsi.a 00:02:28.738 SO libspdk_event_nbd.so.6.0 00:02:28.738 SO libspdk_event_ublk.so.3.0 00:02:28.738 SO libspdk_event_scsi.so.6.0 00:02:28.738 LIB libspdk_event_nvmf.a 00:02:28.738 SYMLINK libspdk_event_nbd.so 00:02:28.738 SYMLINK libspdk_event_ublk.so 00:02:28.738 SO libspdk_event_nvmf.so.6.0 00:02:28.738 SYMLINK libspdk_event_scsi.so 00:02:28.738 SYMLINK libspdk_event_nvmf.so 00:02:28.999 CC module/event/subsystems/iscsi/iscsi.o 00:02:29.261 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:29.261 LIB libspdk_event_vhost_scsi.a 00:02:29.261 LIB libspdk_event_iscsi.a 00:02:29.261 SO libspdk_event_vhost_scsi.so.3.0 00:02:29.261 SO libspdk_event_iscsi.so.6.0 00:02:29.522 SYMLINK libspdk_event_vhost_scsi.so 00:02:29.522 SYMLINK libspdk_event_iscsi.so 00:02:29.522 SO libspdk.so.6.0 00:02:29.522 SYMLINK libspdk.so 00:02:30.095 CC app/trace_record/trace_record.o 00:02:30.095 CXX app/trace/trace.o 00:02:30.095 TEST_HEADER include/spdk/accel.h 00:02:30.095 TEST_HEADER include/spdk/accel_module.h 00:02:30.095 CC app/spdk_nvme_identify/identify.o 00:02:30.095 CC app/spdk_top/spdk_top.o 00:02:30.095 TEST_HEADER include/spdk/assert.h 00:02:30.095 TEST_HEADER include/spdk/barrier.h 00:02:30.095 TEST_HEADER include/spdk/base64.h 00:02:30.095 TEST_HEADER include/spdk/bdev.h 00:02:30.095 TEST_HEADER include/spdk/bdev_module.h 00:02:30.095 CC test/rpc_client/rpc_client_test.o 00:02:30.095 CC app/spdk_nvme_discover/discovery_aer.o 00:02:30.095 TEST_HEADER include/spdk/bdev_zone.h 00:02:30.095 TEST_HEADER include/spdk/bit_array.h 00:02:30.095 TEST_HEADER include/spdk/bit_pool.h 00:02:30.095 CC app/spdk_lspci/spdk_lspci.o 00:02:30.095 TEST_HEADER include/spdk/blob_bdev.h 00:02:30.095 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:30.095 TEST_HEADER include/spdk/blobfs.h 00:02:30.095 CC app/spdk_nvme_perf/perf.o 00:02:30.095 TEST_HEADER include/spdk/blob.h 00:02:30.095 TEST_HEADER include/spdk/conf.h 00:02:30.095 TEST_HEADER include/spdk/config.h 00:02:30.095 TEST_HEADER include/spdk/cpuset.h 00:02:30.095 TEST_HEADER include/spdk/crc16.h 00:02:30.095 TEST_HEADER include/spdk/crc64.h 00:02:30.095 TEST_HEADER include/spdk/crc32.h 00:02:30.095 
TEST_HEADER include/spdk/dif.h 00:02:30.095 TEST_HEADER include/spdk/dma.h 00:02:30.095 TEST_HEADER include/spdk/endian.h 00:02:30.095 TEST_HEADER include/spdk/env_dpdk.h 00:02:30.095 TEST_HEADER include/spdk/event.h 00:02:30.095 TEST_HEADER include/spdk/env.h 00:02:30.095 TEST_HEADER include/spdk/fd_group.h 00:02:30.095 TEST_HEADER include/spdk/fd.h 00:02:30.095 TEST_HEADER include/spdk/file.h 00:02:30.095 TEST_HEADER include/spdk/fsdev.h 00:02:30.095 TEST_HEADER include/spdk/fsdev_module.h 00:02:30.095 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:30.095 TEST_HEADER include/spdk/ftl.h 00:02:30.095 TEST_HEADER include/spdk/gpt_spec.h 00:02:30.095 TEST_HEADER include/spdk/hexlify.h 00:02:30.095 CC app/iscsi_tgt/iscsi_tgt.o 00:02:30.095 TEST_HEADER include/spdk/histogram_data.h 00:02:30.095 TEST_HEADER include/spdk/idxd.h 00:02:30.095 TEST_HEADER include/spdk/idxd_spec.h 00:02:30.095 TEST_HEADER include/spdk/ioat.h 00:02:30.095 TEST_HEADER include/spdk/init.h 00:02:30.095 TEST_HEADER include/spdk/iscsi_spec.h 00:02:30.095 TEST_HEADER include/spdk/ioat_spec.h 00:02:30.095 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:30.095 TEST_HEADER include/spdk/json.h 00:02:30.095 TEST_HEADER include/spdk/jsonrpc.h 00:02:30.095 TEST_HEADER include/spdk/keyring.h 00:02:30.095 TEST_HEADER include/spdk/keyring_module.h 00:02:30.095 CC app/spdk_dd/spdk_dd.o 00:02:30.095 TEST_HEADER include/spdk/likely.h 00:02:30.095 TEST_HEADER include/spdk/log.h 00:02:30.095 TEST_HEADER include/spdk/md5.h 00:02:30.095 TEST_HEADER include/spdk/lvol.h 00:02:30.095 TEST_HEADER include/spdk/memory.h 00:02:30.095 CC app/nvmf_tgt/nvmf_main.o 00:02:30.095 TEST_HEADER include/spdk/mmio.h 00:02:30.095 TEST_HEADER include/spdk/nbd.h 00:02:30.095 TEST_HEADER include/spdk/net.h 00:02:30.095 TEST_HEADER include/spdk/notify.h 00:02:30.095 TEST_HEADER include/spdk/nvme.h 00:02:30.095 TEST_HEADER include/spdk/nvme_intel.h 00:02:30.095 CC app/spdk_tgt/spdk_tgt.o 00:02:30.095 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:30.095 TEST_HEADER include/spdk/nvme_spec.h 00:02:30.095 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:30.095 TEST_HEADER include/spdk/nvme_zns.h 00:02:30.095 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:30.095 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:30.095 TEST_HEADER include/spdk/nvmf.h 00:02:30.095 TEST_HEADER include/spdk/nvmf_spec.h 00:02:30.095 TEST_HEADER include/spdk/nvmf_transport.h 00:02:30.095 TEST_HEADER include/spdk/opal_spec.h 00:02:30.095 TEST_HEADER include/spdk/opal.h 00:02:30.095 TEST_HEADER include/spdk/pci_ids.h 00:02:30.095 TEST_HEADER include/spdk/pipe.h 00:02:30.095 TEST_HEADER include/spdk/queue.h 00:02:30.095 TEST_HEADER include/spdk/reduce.h 00:02:30.095 TEST_HEADER include/spdk/rpc.h 00:02:30.095 TEST_HEADER include/spdk/scheduler.h 00:02:30.095 TEST_HEADER include/spdk/scsi.h 00:02:30.095 TEST_HEADER include/spdk/scsi_spec.h 00:02:30.095 TEST_HEADER include/spdk/sock.h 00:02:30.095 TEST_HEADER include/spdk/stdinc.h 00:02:30.095 TEST_HEADER include/spdk/string.h 00:02:30.095 TEST_HEADER include/spdk/thread.h 00:02:30.095 TEST_HEADER include/spdk/trace.h 00:02:30.095 TEST_HEADER include/spdk/trace_parser.h 00:02:30.095 TEST_HEADER include/spdk/ublk.h 00:02:30.095 TEST_HEADER include/spdk/tree.h 00:02:30.095 TEST_HEADER include/spdk/util.h 00:02:30.095 TEST_HEADER include/spdk/uuid.h 00:02:30.095 TEST_HEADER include/spdk/version.h 00:02:30.095 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:30.095 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:30.095 TEST_HEADER include/spdk/xor.h 
00:02:30.095 TEST_HEADER include/spdk/vhost.h 00:02:30.095 TEST_HEADER include/spdk/vmd.h 00:02:30.095 TEST_HEADER include/spdk/zipf.h 00:02:30.095 CXX test/cpp_headers/accel.o 00:02:30.095 CXX test/cpp_headers/accel_module.o 00:02:30.095 CXX test/cpp_headers/assert.o 00:02:30.095 CXX test/cpp_headers/barrier.o 00:02:30.095 CXX test/cpp_headers/base64.o 00:02:30.095 CXX test/cpp_headers/bdev.o 00:02:30.095 CXX test/cpp_headers/bdev_module.o 00:02:30.095 CXX test/cpp_headers/bdev_zone.o 00:02:30.095 CXX test/cpp_headers/bit_array.o 00:02:30.095 CXX test/cpp_headers/bit_pool.o 00:02:30.095 CXX test/cpp_headers/blob_bdev.o 00:02:30.095 CXX test/cpp_headers/blobfs.o 00:02:30.095 CXX test/cpp_headers/blobfs_bdev.o 00:02:30.095 CXX test/cpp_headers/blob.o 00:02:30.095 CXX test/cpp_headers/config.o 00:02:30.095 CXX test/cpp_headers/conf.o 00:02:30.095 CXX test/cpp_headers/cpuset.o 00:02:30.095 CXX test/cpp_headers/crc16.o 00:02:30.095 CXX test/cpp_headers/crc32.o 00:02:30.095 CXX test/cpp_headers/crc64.o 00:02:30.095 CXX test/cpp_headers/dma.o 00:02:30.095 CXX test/cpp_headers/dif.o 00:02:30.095 CXX test/cpp_headers/endian.o 00:02:30.095 CXX test/cpp_headers/env_dpdk.o 00:02:30.095 CXX test/cpp_headers/env.o 00:02:30.095 CXX test/cpp_headers/event.o 00:02:30.095 CXX test/cpp_headers/fd_group.o 00:02:30.095 CXX test/cpp_headers/fd.o 00:02:30.095 CXX test/cpp_headers/file.o 00:02:30.095 CXX test/cpp_headers/fsdev.o 00:02:30.095 CXX test/cpp_headers/fsdev_module.o 00:02:30.095 CXX test/cpp_headers/ftl.o 00:02:30.095 CXX test/cpp_headers/gpt_spec.o 00:02:30.095 CXX test/cpp_headers/hexlify.o 00:02:30.095 CXX test/cpp_headers/fuse_dispatcher.o 00:02:30.095 CXX test/cpp_headers/idxd.o 00:02:30.095 CXX test/cpp_headers/idxd_spec.o 00:02:30.363 CXX test/cpp_headers/histogram_data.o 00:02:30.363 CXX test/cpp_headers/init.o 00:02:30.363 CXX test/cpp_headers/ioat_spec.o 00:02:30.363 CXX test/cpp_headers/iscsi_spec.o 00:02:30.363 CXX test/cpp_headers/ioat.o 00:02:30.363 CXX test/cpp_headers/keyring.o 00:02:30.363 CXX test/cpp_headers/jsonrpc.o 00:02:30.363 CXX test/cpp_headers/keyring_module.o 00:02:30.363 CXX test/cpp_headers/json.o 00:02:30.363 CXX test/cpp_headers/lvol.o 00:02:30.363 CXX test/cpp_headers/likely.o 00:02:30.363 CXX test/cpp_headers/log.o 00:02:30.363 CXX test/cpp_headers/memory.o 00:02:30.363 CXX test/cpp_headers/md5.o 00:02:30.363 CXX test/cpp_headers/net.o 00:02:30.363 CXX test/cpp_headers/mmio.o 00:02:30.363 CXX test/cpp_headers/nbd.o 00:02:30.363 CC examples/util/zipf/zipf.o 00:02:30.363 CXX test/cpp_headers/nvme.o 00:02:30.363 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:30.363 CXX test/cpp_headers/notify.o 00:02:30.363 CXX test/cpp_headers/nvme_intel.o 00:02:30.363 CXX test/cpp_headers/nvme_spec.o 00:02:30.363 CXX test/cpp_headers/nvme_ocssd.o 00:02:30.363 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:30.363 CXX test/cpp_headers/nvme_zns.o 00:02:30.363 CXX test/cpp_headers/nvmf_cmd.o 00:02:30.363 CXX test/cpp_headers/nvmf_spec.o 00:02:30.363 CXX test/cpp_headers/nvmf.o 00:02:30.363 CXX test/cpp_headers/nvmf_transport.o 00:02:30.363 CC test/app/jsoncat/jsoncat.o 00:02:30.363 CXX test/cpp_headers/opal.o 00:02:30.363 CXX test/cpp_headers/pci_ids.o 00:02:30.363 CC test/app/stub/stub.o 00:02:30.363 CXX test/cpp_headers/opal_spec.o 00:02:30.363 CXX test/cpp_headers/pipe.o 00:02:30.363 CXX test/cpp_headers/queue.o 00:02:30.363 CC examples/ioat/verify/verify.o 00:02:30.363 CXX test/cpp_headers/reduce.o 00:02:30.363 CXX test/cpp_headers/rpc.o 00:02:30.363 CXX test/cpp_headers/scsi_spec.o 
00:02:30.363 CXX test/cpp_headers/scsi.o 00:02:30.363 CXX test/cpp_headers/scheduler.o 00:02:30.363 CXX test/cpp_headers/sock.o 00:02:30.363 CXX test/cpp_headers/thread.o 00:02:30.363 CXX test/cpp_headers/string.o 00:02:30.363 CXX test/cpp_headers/stdinc.o 00:02:30.363 CXX test/cpp_headers/trace.o 00:02:30.363 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:30.363 CC examples/ioat/perf/perf.o 00:02:30.363 CC test/thread/poller_perf/poller_perf.o 00:02:30.363 CXX test/cpp_headers/trace_parser.o 00:02:30.363 CXX test/cpp_headers/util.o 00:02:30.363 CXX test/cpp_headers/uuid.o 00:02:30.363 CC test/env/vtophys/vtophys.o 00:02:30.363 CXX test/cpp_headers/tree.o 00:02:30.363 CC test/app/histogram_perf/histogram_perf.o 00:02:30.363 CXX test/cpp_headers/version.o 00:02:30.363 CXX test/cpp_headers/ublk.o 00:02:30.363 CXX test/cpp_headers/vfio_user_spec.o 00:02:30.363 CXX test/cpp_headers/vfio_user_pci.o 00:02:30.363 CXX test/cpp_headers/vmd.o 00:02:30.363 CXX test/cpp_headers/vhost.o 00:02:30.363 CXX test/cpp_headers/xor.o 00:02:30.363 CXX test/cpp_headers/zipf.o 00:02:30.363 CC app/fio/nvme/fio_plugin.o 00:02:30.363 CC test/env/memory/memory_ut.o 00:02:30.363 CC test/dma/test_dma/test_dma.o 00:02:30.363 LINK spdk_lspci 00:02:30.363 CC test/env/pci/pci_ut.o 00:02:30.363 CC app/fio/bdev/fio_plugin.o 00:02:30.363 CC test/app/bdev_svc/bdev_svc.o 00:02:30.630 LINK spdk_nvme_discover 00:02:30.630 LINK interrupt_tgt 00:02:30.630 LINK rpc_client_test 00:02:30.630 LINK nvmf_tgt 00:02:30.892 LINK iscsi_tgt 00:02:30.892 CC test/env/mem_callbacks/mem_callbacks.o 00:02:30.892 LINK spdk_trace_record 00:02:30.892 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:30.892 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:31.154 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:31.154 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:31.154 LINK spdk_tgt 00:02:31.154 LINK zipf 00:02:31.154 LINK stub 00:02:31.154 LINK jsoncat 00:02:31.154 LINK verify 00:02:31.154 LINK poller_perf 00:02:31.154 LINK ioat_perf 00:02:31.414 LINK histogram_perf 00:02:31.414 LINK env_dpdk_post_init 00:02:31.414 LINK bdev_svc 00:02:31.414 LINK vtophys 00:02:31.414 LINK spdk_trace 00:02:31.414 LINK spdk_dd 00:02:31.676 LINK spdk_nvme 00:02:31.676 LINK vhost_fuzz 00:02:31.676 LINK nvme_fuzz 00:02:31.676 LINK pci_ut 00:02:31.676 CC examples/sock/hello_world/hello_sock.o 00:02:31.676 CC examples/vmd/led/led.o 00:02:31.676 CC examples/vmd/lsvmd/lsvmd.o 00:02:31.676 CC test/event/event_perf/event_perf.o 00:02:31.938 LINK spdk_top 00:02:31.938 CC test/event/reactor_perf/reactor_perf.o 00:02:31.938 CC examples/idxd/perf/perf.o 00:02:31.938 LINK mem_callbacks 00:02:31.938 CC test/event/reactor/reactor.o 00:02:31.938 LINK spdk_bdev 00:02:31.938 CC test/event/app_repeat/app_repeat.o 00:02:31.938 LINK test_dma 00:02:31.938 CC examples/thread/thread/thread_ex.o 00:02:31.938 CC test/event/scheduler/scheduler.o 00:02:31.938 LINK spdk_nvme_identify 00:02:31.938 CC app/vhost/vhost.o 00:02:31.938 LINK lsvmd 00:02:31.938 LINK event_perf 00:02:31.938 LINK spdk_nvme_perf 00:02:31.938 LINK led 00:02:31.938 LINK reactor_perf 00:02:31.938 LINK reactor 00:02:31.938 LINK hello_sock 00:02:32.199 LINK app_repeat 00:02:32.199 LINK scheduler 00:02:32.199 LINK vhost 00:02:32.199 LINK idxd_perf 00:02:32.199 LINK thread 00:02:32.460 LINK memory_ut 00:02:32.460 CC test/nvme/aer/aer.o 00:02:32.460 CC test/nvme/reset/reset.o 00:02:32.460 CC test/nvme/err_injection/err_injection.o 00:02:32.460 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:32.460 CC 
test/nvme/connect_stress/connect_stress.o 00:02:32.460 CC test/nvme/boot_partition/boot_partition.o 00:02:32.460 CC test/nvme/sgl/sgl.o 00:02:32.460 CC test/nvme/compliance/nvme_compliance.o 00:02:32.460 CC test/nvme/overhead/overhead.o 00:02:32.460 CC test/nvme/reserve/reserve.o 00:02:32.460 CC test/nvme/startup/startup.o 00:02:32.460 CC test/nvme/cuse/cuse.o 00:02:32.460 CC test/nvme/e2edp/nvme_dp.o 00:02:32.460 CC test/nvme/simple_copy/simple_copy.o 00:02:32.460 CC test/nvme/fdp/fdp.o 00:02:32.460 CC test/nvme/fused_ordering/fused_ordering.o 00:02:32.460 CC test/blobfs/mkfs/mkfs.o 00:02:32.460 CC test/accel/dif/dif.o 00:02:32.722 CC examples/nvme/abort/abort.o 00:02:32.722 CC examples/nvme/hotplug/hotplug.o 00:02:32.722 CC examples/nvme/hello_world/hello_world.o 00:02:32.722 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:32.722 CC examples/nvme/reconnect/reconnect.o 00:02:32.722 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:32.722 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:32.722 CC examples/nvme/arbitration/arbitration.o 00:02:32.722 CC test/lvol/esnap/esnap.o 00:02:32.722 LINK boot_partition 00:02:32.722 LINK err_injection 00:02:32.722 LINK connect_stress 00:02:32.722 LINK doorbell_aers 00:02:32.722 LINK startup 00:02:32.722 LINK reserve 00:02:32.722 LINK fused_ordering 00:02:32.722 LINK iscsi_fuzz 00:02:32.722 CC examples/accel/perf/accel_perf.o 00:02:32.722 LINK aer 00:02:32.722 LINK simple_copy 00:02:32.722 LINK mkfs 00:02:32.722 LINK reset 00:02:32.722 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:32.722 LINK sgl 00:02:32.722 LINK cmb_copy 00:02:32.722 LINK nvme_dp 00:02:32.722 CC examples/blob/cli/blobcli.o 00:02:32.722 LINK overhead 00:02:32.983 CC examples/blob/hello_world/hello_blob.o 00:02:32.983 LINK pmr_persistence 00:02:32.983 LINK nvme_compliance 00:02:32.983 LINK hotplug 00:02:32.983 LINK fdp 00:02:32.983 LINK hello_world 00:02:32.983 LINK reconnect 00:02:32.983 LINK arbitration 00:02:32.983 LINK abort 00:02:33.245 LINK nvme_manage 00:02:33.245 LINK hello_fsdev 00:02:33.245 LINK hello_blob 00:02:33.245 LINK dif 00:02:33.245 LINK accel_perf 00:02:33.245 LINK blobcli 00:02:33.817 LINK cuse 00:02:33.817 CC test/bdev/bdevio/bdevio.o 00:02:33.817 CC examples/bdev/hello_world/hello_bdev.o 00:02:33.817 CC examples/bdev/bdevperf/bdevperf.o 00:02:34.079 LINK hello_bdev 00:02:34.341 LINK bdevio 00:02:34.603 LINK bdevperf 00:02:35.176 CC examples/nvmf/nvmf/nvmf.o 00:02:35.437 LINK nvmf 00:02:37.355 LINK esnap 00:02:37.355 00:02:37.355 real 0m55.652s 00:02:37.355 user 8m8.489s 00:02:37.355 sys 5m28.789s 00:02:37.355 11:39:40 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:37.355 11:39:40 make -- common/autotest_common.sh@10 -- $ set +x 00:02:37.355 ************************************ 00:02:37.355 END TEST make 00:02:37.355 ************************************ 00:02:37.618 11:39:40 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:37.618 11:39:40 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:37.618 11:39:40 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:37.618 11:39:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:37.618 11:39:40 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:37.618 11:39:40 -- pm/common@44 -- $ pid=1597339 00:02:37.618 11:39:40 -- pm/common@50 -- $ kill -TERM 1597339 00:02:37.618 11:39:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:37.618 11:39:40 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:37.618 11:39:40 -- pm/common@44 -- $ pid=1597340 00:02:37.618 11:39:40 -- pm/common@50 -- $ kill -TERM 1597340 00:02:37.618 11:39:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:37.618 11:39:40 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:37.618 11:39:40 -- pm/common@44 -- $ pid=1597342 00:02:37.618 11:39:40 -- pm/common@50 -- $ kill -TERM 1597342 00:02:37.618 11:39:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:37.618 11:39:40 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:37.618 11:39:40 -- pm/common@44 -- $ pid=1597367 00:02:37.618 11:39:40 -- pm/common@50 -- $ sudo -E kill -TERM 1597367 00:02:37.618 11:39:40 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:02:37.618 11:39:40 -- common/autotest_common.sh@1691 -- # lcov --version 00:02:37.618 11:39:40 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:02:37.618 11:39:40 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:02:37.618 11:39:40 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:37.618 11:39:40 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:37.618 11:39:40 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:37.618 11:39:40 -- scripts/common.sh@336 -- # IFS=.-: 00:02:37.618 11:39:40 -- scripts/common.sh@336 -- # read -ra ver1 00:02:37.618 11:39:40 -- scripts/common.sh@337 -- # IFS=.-: 00:02:37.618 11:39:40 -- scripts/common.sh@337 -- # read -ra ver2 00:02:37.618 11:39:40 -- scripts/common.sh@338 -- # local 'op=<' 00:02:37.618 11:39:40 -- scripts/common.sh@340 -- # ver1_l=2 00:02:37.618 11:39:40 -- scripts/common.sh@341 -- # ver2_l=1 00:02:37.618 11:39:40 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:37.618 11:39:40 -- scripts/common.sh@344 -- # case "$op" in 00:02:37.618 11:39:40 -- scripts/common.sh@345 -- # : 1 00:02:37.618 11:39:40 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:37.618 11:39:40 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:37.618 11:39:40 -- scripts/common.sh@365 -- # decimal 1 00:02:37.618 11:39:40 -- scripts/common.sh@353 -- # local d=1 00:02:37.618 11:39:40 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:37.618 11:39:40 -- scripts/common.sh@355 -- # echo 1 00:02:37.618 11:39:40 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:37.618 11:39:40 -- scripts/common.sh@366 -- # decimal 2 00:02:37.881 11:39:40 -- scripts/common.sh@353 -- # local d=2 00:02:37.881 11:39:40 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:37.881 11:39:40 -- scripts/common.sh@355 -- # echo 2 00:02:37.881 11:39:40 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:37.881 11:39:40 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:37.881 11:39:40 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:37.881 11:39:40 -- scripts/common.sh@368 -- # return 0 00:02:37.881 11:39:40 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:37.881 11:39:40 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:02:37.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:37.881 --rc genhtml_branch_coverage=1 00:02:37.881 --rc genhtml_function_coverage=1 00:02:37.881 --rc genhtml_legend=1 00:02:37.881 --rc geninfo_all_blocks=1 00:02:37.881 --rc geninfo_unexecuted_blocks=1 00:02:37.881 00:02:37.881 ' 00:02:37.881 11:39:40 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:02:37.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:37.881 --rc genhtml_branch_coverage=1 00:02:37.881 --rc genhtml_function_coverage=1 00:02:37.881 --rc genhtml_legend=1 00:02:37.881 --rc geninfo_all_blocks=1 00:02:37.881 --rc geninfo_unexecuted_blocks=1 00:02:37.881 00:02:37.881 ' 00:02:37.881 11:39:40 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:02:37.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:37.881 --rc genhtml_branch_coverage=1 00:02:37.881 --rc genhtml_function_coverage=1 00:02:37.881 --rc genhtml_legend=1 00:02:37.881 --rc geninfo_all_blocks=1 00:02:37.881 --rc geninfo_unexecuted_blocks=1 00:02:37.881 00:02:37.881 ' 00:02:37.881 11:39:40 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:02:37.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:37.881 --rc genhtml_branch_coverage=1 00:02:37.881 --rc genhtml_function_coverage=1 00:02:37.881 --rc genhtml_legend=1 00:02:37.881 --rc geninfo_all_blocks=1 00:02:37.881 --rc geninfo_unexecuted_blocks=1 00:02:37.881 00:02:37.881 ' 00:02:37.881 11:39:40 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:37.881 11:39:40 -- nvmf/common.sh@7 -- # uname -s 00:02:37.881 11:39:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:37.881 11:39:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:37.881 11:39:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:37.881 11:39:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:37.881 11:39:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:37.881 11:39:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:37.881 11:39:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:37.881 11:39:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:37.881 11:39:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:37.881 11:39:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:37.881 11:39:40 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:37.881 11:39:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:37.881 11:39:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:37.881 11:39:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:37.881 11:39:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:37.881 11:39:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:37.881 11:39:40 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:37.881 11:39:40 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:37.881 11:39:40 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:37.881 11:39:40 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:37.881 11:39:40 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:37.881 11:39:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:37.881 11:39:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:37.881 11:39:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:37.881 11:39:40 -- paths/export.sh@5 -- # export PATH 00:02:37.881 11:39:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:37.881 11:39:40 -- nvmf/common.sh@51 -- # : 0 00:02:37.881 11:39:40 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:37.881 11:39:40 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:37.881 11:39:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:37.881 11:39:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:37.881 11:39:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:37.881 11:39:40 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:37.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:37.881 11:39:40 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:37.881 11:39:40 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:37.881 11:39:40 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:37.881 11:39:40 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:37.881 11:39:40 -- spdk/autotest.sh@32 -- # uname -s 00:02:37.881 11:39:40 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:37.881 11:39:40 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:37.881 11:39:40 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
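The autotest.sh steps just above and just below this point swap the distro's systemd-coredump pipe out of the kernel core_pattern and hand crash dumps to SPDK's core-collector.sh instead, after creating the coredumps output directory. A minimal standalone sketch of that handoff, assuming root and using illustrative paths rather than the project's actual script:

#!/usr/bin/env bash
# Sketch only: point kernel core dumps at a collector script, then restore.
# The collector path and output directory here are illustrative assumptions.
set -euo pipefail

collector=/usr/local/bin/core-collector.sh
outdir=/tmp/coredumps
mkdir -p "$outdir"

# Remember whatever pattern the host was using (e.g. the systemd-coredump pipe).
old_core_pattern=$(cat /proc/sys/kernel/core_pattern)

# A leading '|' makes the kernel pipe each core to the named program;
# %P = PID, %s = signal number, %t = dump time (see core(5)).
echo "|$collector %P %s %t" > /proc/sys/kernel/core_pattern

# ... run the crash-prone workload here ...

# Restore the original pattern so the host behaves normally afterwards.
echo "$old_core_pattern" > /proc/sys/kernel/core_pattern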
00:02:37.881 11:39:40 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:37.881 11:39:40 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:37.881 11:39:40 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:37.881 11:39:40 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:37.881 11:39:40 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:37.881 11:39:40 -- spdk/autotest.sh@48 -- # udevadm_pid=1663393 00:02:37.881 11:39:40 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:37.881 11:39:40 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:37.881 11:39:40 -- pm/common@17 -- # local monitor 00:02:37.881 11:39:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:37.881 11:39:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:37.881 11:39:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:37.881 11:39:40 -- pm/common@21 -- # date +%s 00:02:37.881 11:39:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:37.881 11:39:40 -- pm/common@21 -- # date +%s 00:02:37.881 11:39:40 -- pm/common@25 -- # sleep 1 00:02:37.881 11:39:40 -- pm/common@21 -- # date +%s 00:02:37.881 11:39:40 -- pm/common@21 -- # date +%s 00:02:37.881 11:39:40 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728639580 00:02:37.881 11:39:40 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728639580 00:02:37.881 11:39:40 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728639580 00:02:37.881 11:39:40 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728639580 00:02:37.881 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728639580_collect-cpu-load.pm.log 00:02:37.881 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728639580_collect-vmstat.pm.log 00:02:37.881 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728639580_collect-cpu-temp.pm.log 00:02:37.881 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728639580_collect-bmc-pm.bmc.pm.log 00:02:38.826 11:39:41 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:38.826 11:39:41 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:38.826 11:39:41 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:38.826 11:39:41 -- common/autotest_common.sh@10 -- # set +x 00:02:38.826 11:39:41 -- spdk/autotest.sh@59 -- # create_test_list 00:02:38.826 11:39:41 -- common/autotest_common.sh@748 -- # xtrace_disable 00:02:38.826 11:39:41 -- common/autotest_common.sh@10 -- # set +x 00:02:38.826 11:39:41 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:38.826 11:39:41 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:38.826 11:39:41 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:38.826 11:39:41 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:38.826 11:39:41 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:38.826 11:39:41 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:38.826 11:39:41 -- common/autotest_common.sh@1455 -- # uname 00:02:38.826 11:39:41 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:38.826 11:39:41 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:38.826 11:39:41 -- common/autotest_common.sh@1475 -- # uname 00:02:38.826 11:39:41 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:38.826 11:39:41 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:38.826 11:39:41 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:39.087 lcov: LCOV version 1.15 00:02:39.087 11:39:41 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:54.004 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:54.004 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:08.919 11:40:11 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:08.919 11:40:11 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:08.919 11:40:11 -- common/autotest_common.sh@10 -- # set +x 00:03:08.919 11:40:11 -- spdk/autotest.sh@78 -- # rm -f 00:03:08.919 11:40:11 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:13.126 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:13.126 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:13.126 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:13.126 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:13.126 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:13.126 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:13.126 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:13.126 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:13.126 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:13.126 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:13.126 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:13.126 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:13.126 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:13.126 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:13.126 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:13.126 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:13.126 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:13.126 11:40:15 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:03:13.126 11:40:15 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:13.126 11:40:15 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:13.126 11:40:15 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:13.126 11:40:15 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:13.126 11:40:15 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:13.126 11:40:15 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:13.126 11:40:15 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:13.126 11:40:15 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:13.126 11:40:15 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:13.126 11:40:15 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:13.126 11:40:15 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:13.126 11:40:15 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:13.126 11:40:15 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:13.126 11:40:15 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:13.391 No valid GPT data, bailing 00:03:13.391 11:40:15 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:13.391 11:40:15 -- scripts/common.sh@394 -- # pt= 00:03:13.391 11:40:15 -- scripts/common.sh@395 -- # return 1 00:03:13.391 11:40:15 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:13.391 1+0 records in 00:03:13.391 1+0 records out 00:03:13.391 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00460947 s, 227 MB/s 00:03:13.391 11:40:15 -- spdk/autotest.sh@105 -- # sync 00:03:13.391 11:40:15 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:13.391 11:40:15 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:13.391 11:40:15 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:23.394 11:40:24 -- spdk/autotest.sh@111 -- # uname -s 00:03:23.394 11:40:24 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:23.394 11:40:24 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:23.394 11:40:24 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:25.407 Hugepages 00:03:25.407 node hugesize free / total 00:03:25.407 node0 1048576kB 0 / 0 00:03:25.407 node0 2048kB 0 / 0 00:03:25.407 node1 1048576kB 0 / 0 00:03:25.407 node1 2048kB 0 / 0 00:03:25.407 00:03:25.407 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:25.407 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:03:25.407 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:03:25.407 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:03:25.407 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:03:25.407 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:03:25.407 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:03:25.407 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:03:25.407 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:03:25.668 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:03:25.668 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:03:25.668 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:03:25.668 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:03:25.668 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:03:25.668 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:03:25.668 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:03:25.668 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:03:25.668 I/OAT 0000:80:01.7 8086 
0b00 1 ioatdma - - 00:03:25.668 11:40:28 -- spdk/autotest.sh@117 -- # uname -s 00:03:25.668 11:40:28 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:25.668 11:40:28 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:25.668 11:40:28 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:29.878 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:29.878 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:29.878 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:29.878 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:29.878 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:29.878 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:29.878 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:29.878 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:29.878 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:29.878 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:29.878 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:29.878 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:29.878 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:29.878 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:29.878 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:29.878 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:31.260 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:31.521 11:40:34 -- common/autotest_common.sh@1515 -- # sleep 1 00:03:32.464 11:40:35 -- common/autotest_common.sh@1516 -- # bdfs=() 00:03:32.464 11:40:35 -- common/autotest_common.sh@1516 -- # local bdfs 00:03:32.464 11:40:35 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:03:32.464 11:40:35 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:03:32.464 11:40:35 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:32.464 11:40:35 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:32.464 11:40:35 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:32.464 11:40:35 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:32.464 11:40:35 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:32.464 11:40:35 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:03:32.464 11:40:35 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:03:32.464 11:40:35 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:36.674 Waiting for block devices as requested 00:03:36.674 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:36.674 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:36.674 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:36.674 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:36.674 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:36.674 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:36.674 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:36.674 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:03:36.674 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:03:36.936 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:36.936 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:37.197 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:37.197 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:37.197 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:37.458 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:37.458 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:37.458 0000:00:01.1 (8086 0b00): 
vfio-pci -> ioatdma 00:03:38.031 11:40:40 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:03:38.031 11:40:40 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:03:38.031 11:40:40 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:03:38.031 11:40:40 -- common/autotest_common.sh@1485 -- # grep 0000:65:00.0/nvme/nvme 00:03:38.031 11:40:40 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:38.031 11:40:40 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:03:38.031 11:40:40 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:38.031 11:40:40 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:03:38.032 11:40:40 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:03:38.032 11:40:40 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:03:38.032 11:40:40 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:03:38.032 11:40:40 -- common/autotest_common.sh@1529 -- # grep oacs 00:03:38.032 11:40:40 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:03:38.032 11:40:40 -- common/autotest_common.sh@1529 -- # oacs=' 0x5f' 00:03:38.032 11:40:40 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:03:38.032 11:40:40 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:03:38.032 11:40:40 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:03:38.032 11:40:40 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:03:38.032 11:40:40 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:03:38.032 11:40:40 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:03:38.032 11:40:40 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:03:38.032 11:40:40 -- common/autotest_common.sh@1541 -- # continue 00:03:38.032 11:40:40 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:38.032 11:40:40 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:38.032 11:40:40 -- common/autotest_common.sh@10 -- # set +x 00:03:38.032 11:40:40 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:38.032 11:40:40 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:38.032 11:40:40 -- common/autotest_common.sh@10 -- # set +x 00:03:38.032 11:40:40 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:42.244 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:42.244 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:42.244 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:42.244 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:42.244 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:42.244 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:42.244 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:42.244 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:42.244 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:42.244 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:42.244 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:42.244 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:42.244 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:42.244 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:42.244 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:42.244 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:42.244 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:42.244 11:40:44 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:03:42.244 11:40:44 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:42.244 11:40:44 -- common/autotest_common.sh@10 -- # set +x 00:03:42.244 11:40:44 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:42.244 11:40:44 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:03:42.244 11:40:44 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:03:42.244 11:40:44 -- common/autotest_common.sh@1561 -- # bdfs=() 00:03:42.244 11:40:44 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:03:42.244 11:40:44 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:03:42.244 11:40:44 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:03:42.244 11:40:44 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:03:42.244 11:40:44 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:42.244 11:40:44 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:42.244 11:40:44 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:42.244 11:40:44 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:42.244 11:40:44 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:42.244 11:40:44 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:03:42.244 11:40:44 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:03:42.244 11:40:44 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:03:42.244 11:40:44 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:03:42.244 11:40:44 -- common/autotest_common.sh@1564 -- # device=0xa80a 00:03:42.244 11:40:44 -- common/autotest_common.sh@1565 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:03:42.244 11:40:44 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:03:42.244 11:40:44 -- common/autotest_common.sh@1570 -- # return 0 00:03:42.244 11:40:44 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:03:42.244 11:40:44 -- common/autotest_common.sh@1578 -- # return 0 00:03:42.244 11:40:44 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:42.244 11:40:44 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:42.244 11:40:44 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:42.244 11:40:44 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:42.244 11:40:44 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:42.244 11:40:44 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:42.244 11:40:44 -- common/autotest_common.sh@10 -- # set +x 00:03:42.244 11:40:44 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:42.244 11:40:44 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:42.244 11:40:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:42.244 11:40:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:42.244 11:40:44 -- common/autotest_common.sh@10 -- # set +x 00:03:42.244 ************************************ 00:03:42.244 START TEST env 00:03:42.244 ************************************ 00:03:42.244 11:40:44 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:42.507 * Looking for test storage... 
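The opal_revert_cleanup trace above builds the NVMe BDF list with gen_nvme.sh | jq and then filters it by the PCI device id read from sysfs, looking for 0x0a54 parts. The same filter can be sketched without SPDK's helpers by walking sysfs directly; the 0x0a54 target comes from the trace, 0x010802 is the standard PCI class code for an NVMe controller, and everything else is illustrative:

#!/usr/bin/env bash
# Sketch: list NVMe controller BDFs whose PCI device id matches a target.
# Not the repo's gen_nvme.sh; it reads sysfs directly instead.
target=${1:-0x0a54}

for dev in /sys/bus/pci/devices/*; do
    # 0x010802 = mass storage class / NVM subclass / NVMe programming interface
    [[ $(cat "$dev/class") == 0x010802 ]] || continue
    device=$(cat "$dev/device")
    if [[ $device == "$target" ]]; then
        echo "${dev##*/}"                       # print the matching BDF
    else
        echo "skipping ${dev##*/} ($device)" >&2
    fi
done

In this log the only controller is 144d:a80a, so its device id 0xa80a fails the 0x0a54 comparison and the cleanup helper returns without touching anything, which is exactly what the trace shows.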
00:03:42.507 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:42.507 11:40:44 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:42.507 11:40:44 env -- common/autotest_common.sh@1691 -- # lcov --version 00:03:42.507 11:40:44 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:42.507 11:40:45 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:42.507 11:40:45 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:42.507 11:40:45 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:42.507 11:40:45 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:42.507 11:40:45 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:42.507 11:40:45 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:42.507 11:40:45 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:42.507 11:40:45 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:42.507 11:40:45 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:42.507 11:40:45 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:42.507 11:40:45 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:42.507 11:40:45 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:42.507 11:40:45 env -- scripts/common.sh@344 -- # case "$op" in 00:03:42.507 11:40:45 env -- scripts/common.sh@345 -- # : 1 00:03:42.507 11:40:45 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:42.507 11:40:45 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:42.507 11:40:45 env -- scripts/common.sh@365 -- # decimal 1 00:03:42.507 11:40:45 env -- scripts/common.sh@353 -- # local d=1 00:03:42.507 11:40:45 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:42.507 11:40:45 env -- scripts/common.sh@355 -- # echo 1 00:03:42.507 11:40:45 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:42.507 11:40:45 env -- scripts/common.sh@366 -- # decimal 2 00:03:42.507 11:40:45 env -- scripts/common.sh@353 -- # local d=2 00:03:42.507 11:40:45 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:42.507 11:40:45 env -- scripts/common.sh@355 -- # echo 2 00:03:42.507 11:40:45 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:42.507 11:40:45 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:42.507 11:40:45 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:42.507 11:40:45 env -- scripts/common.sh@368 -- # return 0 00:03:42.507 11:40:45 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:42.507 11:40:45 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:42.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.507 --rc genhtml_branch_coverage=1 00:03:42.507 --rc genhtml_function_coverage=1 00:03:42.507 --rc genhtml_legend=1 00:03:42.507 --rc geninfo_all_blocks=1 00:03:42.507 --rc geninfo_unexecuted_blocks=1 00:03:42.507 00:03:42.507 ' 00:03:42.507 11:40:45 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:42.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.507 --rc genhtml_branch_coverage=1 00:03:42.507 --rc genhtml_function_coverage=1 00:03:42.507 --rc genhtml_legend=1 00:03:42.507 --rc geninfo_all_blocks=1 00:03:42.507 --rc geninfo_unexecuted_blocks=1 00:03:42.507 00:03:42.507 ' 00:03:42.507 11:40:45 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:42.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.507 --rc genhtml_branch_coverage=1 00:03:42.507 --rc genhtml_function_coverage=1 
00:03:42.507 --rc genhtml_legend=1 00:03:42.507 --rc geninfo_all_blocks=1 00:03:42.507 --rc geninfo_unexecuted_blocks=1 00:03:42.507 00:03:42.507 ' 00:03:42.507 11:40:45 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:42.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.507 --rc genhtml_branch_coverage=1 00:03:42.507 --rc genhtml_function_coverage=1 00:03:42.507 --rc genhtml_legend=1 00:03:42.507 --rc geninfo_all_blocks=1 00:03:42.507 --rc geninfo_unexecuted_blocks=1 00:03:42.507 00:03:42.507 ' 00:03:42.507 11:40:45 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:42.507 11:40:45 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:42.507 11:40:45 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:42.507 11:40:45 env -- common/autotest_common.sh@10 -- # set +x 00:03:42.507 ************************************ 00:03:42.507 START TEST env_memory 00:03:42.507 ************************************ 00:03:42.507 11:40:45 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:42.507 00:03:42.507 00:03:42.507 CUnit - A unit testing framework for C - Version 2.1-3 00:03:42.507 http://cunit.sourceforge.net/ 00:03:42.507 00:03:42.507 00:03:42.507 Suite: memory 00:03:42.507 Test: alloc and free memory map ...[2024-10-11 11:40:45.164095] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:42.507 passed 00:03:42.507 Test: mem map translation ...[2024-10-11 11:40:45.189571] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:42.507 [2024-10-11 11:40:45.189602] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:42.507 [2024-10-11 11:40:45.189648] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:42.507 [2024-10-11 11:40:45.189656] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:42.769 passed 00:03:42.769 Test: mem map registration ...[2024-10-11 11:40:45.244933] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:42.769 [2024-10-11 11:40:45.244968] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:42.769 passed 00:03:42.769 Test: mem map adjacent registrations ...passed 00:03:42.769 00:03:42.769 Run Summary: Type Total Ran Passed Failed Inactive 00:03:42.769 suites 1 1 n/a 0 0 00:03:42.769 tests 4 4 4 0 0 00:03:42.769 asserts 152 152 152 0 n/a 00:03:42.769 00:03:42.769 Elapsed time = 0.191 seconds 00:03:42.769 00:03:42.769 real 0m0.206s 00:03:42.769 user 0m0.195s 00:03:42.769 sys 0m0.010s 00:03:42.769 11:40:45 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:42.769 11:40:45 env.env_memory -- common/autotest_common.sh@10 -- # set +x 
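Each suite in this log runs through the run_test wrapper from autotest_common.sh: a START banner, the test command itself, the real/user/sys timing seen just above, and the END banner that follows below. The fragment here only re-creates that observable pattern under simplified assumptions; it is not the project's implementation:

#!/usr/bin/env bash
# Sketch of the banner-plus-timing wrapper pattern visible throughout this log.
# Error handling and xtrace control are deliberately omitted.
run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"            # produces the real/user/sys lines
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

# Example (hypothetical path, following the layout used elsewhere in this log):
# run_test_sketch env_memory .../spdk/test/env/memory/memory_ut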
00:03:42.769 ************************************ 00:03:42.769 END TEST env_memory 00:03:42.769 ************************************ 00:03:42.769 11:40:45 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:42.769 11:40:45 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:42.769 11:40:45 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:42.769 11:40:45 env -- common/autotest_common.sh@10 -- # set +x 00:03:42.769 ************************************ 00:03:42.769 START TEST env_vtophys 00:03:42.769 ************************************ 00:03:42.769 11:40:45 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:42.769 EAL: lib.eal log level changed from notice to debug 00:03:42.769 EAL: Detected lcore 0 as core 0 on socket 0 00:03:42.769 EAL: Detected lcore 1 as core 1 on socket 0 00:03:42.769 EAL: Detected lcore 2 as core 2 on socket 0 00:03:42.769 EAL: Detected lcore 3 as core 3 on socket 0 00:03:42.769 EAL: Detected lcore 4 as core 4 on socket 0 00:03:42.769 EAL: Detected lcore 5 as core 5 on socket 0 00:03:42.769 EAL: Detected lcore 6 as core 6 on socket 0 00:03:42.770 EAL: Detected lcore 7 as core 7 on socket 0 00:03:42.770 EAL: Detected lcore 8 as core 8 on socket 0 00:03:42.770 EAL: Detected lcore 9 as core 9 on socket 0 00:03:42.770 EAL: Detected lcore 10 as core 10 on socket 0 00:03:42.770 EAL: Detected lcore 11 as core 11 on socket 0 00:03:42.770 EAL: Detected lcore 12 as core 12 on socket 0 00:03:42.770 EAL: Detected lcore 13 as core 13 on socket 0 00:03:42.770 EAL: Detected lcore 14 as core 14 on socket 0 00:03:42.770 EAL: Detected lcore 15 as core 15 on socket 0 00:03:42.770 EAL: Detected lcore 16 as core 16 on socket 0 00:03:42.770 EAL: Detected lcore 17 as core 17 on socket 0 00:03:42.770 EAL: Detected lcore 18 as core 18 on socket 0 00:03:42.770 EAL: Detected lcore 19 as core 19 on socket 0 00:03:42.770 EAL: Detected lcore 20 as core 20 on socket 0 00:03:42.770 EAL: Detected lcore 21 as core 21 on socket 0 00:03:42.770 EAL: Detected lcore 22 as core 22 on socket 0 00:03:42.770 EAL: Detected lcore 23 as core 23 on socket 0 00:03:42.770 EAL: Detected lcore 24 as core 24 on socket 0 00:03:42.770 EAL: Detected lcore 25 as core 25 on socket 0 00:03:42.770 EAL: Detected lcore 26 as core 26 on socket 0 00:03:42.770 EAL: Detected lcore 27 as core 27 on socket 0 00:03:42.770 EAL: Detected lcore 28 as core 28 on socket 0 00:03:42.770 EAL: Detected lcore 29 as core 29 on socket 0 00:03:42.770 EAL: Detected lcore 30 as core 30 on socket 0 00:03:42.770 EAL: Detected lcore 31 as core 31 on socket 0 00:03:42.770 EAL: Detected lcore 32 as core 32 on socket 0 00:03:42.770 EAL: Detected lcore 33 as core 33 on socket 0 00:03:42.770 EAL: Detected lcore 34 as core 34 on socket 0 00:03:42.770 EAL: Detected lcore 35 as core 35 on socket 0 00:03:42.770 EAL: Detected lcore 36 as core 0 on socket 1 00:03:42.770 EAL: Detected lcore 37 as core 1 on socket 1 00:03:42.770 EAL: Detected lcore 38 as core 2 on socket 1 00:03:42.770 EAL: Detected lcore 39 as core 3 on socket 1 00:03:42.770 EAL: Detected lcore 40 as core 4 on socket 1 00:03:42.770 EAL: Detected lcore 41 as core 5 on socket 1 00:03:42.770 EAL: Detected lcore 42 as core 6 on socket 1 00:03:42.770 EAL: Detected lcore 43 as core 7 on socket 1 00:03:42.770 EAL: Detected lcore 44 as core 8 on socket 1 00:03:42.770 EAL: Detected lcore 45 as core 9 on socket 1 
00:03:42.770 EAL: Detected lcore 46 as core 10 on socket 1 00:03:42.770 EAL: Detected lcore 47 as core 11 on socket 1 00:03:42.770 EAL: Detected lcore 48 as core 12 on socket 1 00:03:42.770 EAL: Detected lcore 49 as core 13 on socket 1 00:03:42.770 EAL: Detected lcore 50 as core 14 on socket 1 00:03:42.770 EAL: Detected lcore 51 as core 15 on socket 1 00:03:42.770 EAL: Detected lcore 52 as core 16 on socket 1 00:03:42.770 EAL: Detected lcore 53 as core 17 on socket 1 00:03:42.770 EAL: Detected lcore 54 as core 18 on socket 1 00:03:42.770 EAL: Detected lcore 55 as core 19 on socket 1 00:03:42.770 EAL: Detected lcore 56 as core 20 on socket 1 00:03:42.770 EAL: Detected lcore 57 as core 21 on socket 1 00:03:42.770 EAL: Detected lcore 58 as core 22 on socket 1 00:03:42.770 EAL: Detected lcore 59 as core 23 on socket 1 00:03:42.770 EAL: Detected lcore 60 as core 24 on socket 1 00:03:42.770 EAL: Detected lcore 61 as core 25 on socket 1 00:03:42.770 EAL: Detected lcore 62 as core 26 on socket 1 00:03:42.770 EAL: Detected lcore 63 as core 27 on socket 1 00:03:42.770 EAL: Detected lcore 64 as core 28 on socket 1 00:03:42.770 EAL: Detected lcore 65 as core 29 on socket 1 00:03:42.770 EAL: Detected lcore 66 as core 30 on socket 1 00:03:42.770 EAL: Detected lcore 67 as core 31 on socket 1 00:03:42.770 EAL: Detected lcore 68 as core 32 on socket 1 00:03:42.770 EAL: Detected lcore 69 as core 33 on socket 1 00:03:42.770 EAL: Detected lcore 70 as core 34 on socket 1 00:03:42.770 EAL: Detected lcore 71 as core 35 on socket 1 00:03:42.770 EAL: Detected lcore 72 as core 0 on socket 0 00:03:42.770 EAL: Detected lcore 73 as core 1 on socket 0 00:03:42.770 EAL: Detected lcore 74 as core 2 on socket 0 00:03:42.770 EAL: Detected lcore 75 as core 3 on socket 0 00:03:42.770 EAL: Detected lcore 76 as core 4 on socket 0 00:03:42.770 EAL: Detected lcore 77 as core 5 on socket 0 00:03:42.770 EAL: Detected lcore 78 as core 6 on socket 0 00:03:42.770 EAL: Detected lcore 79 as core 7 on socket 0 00:03:42.770 EAL: Detected lcore 80 as core 8 on socket 0 00:03:42.770 EAL: Detected lcore 81 as core 9 on socket 0 00:03:42.770 EAL: Detected lcore 82 as core 10 on socket 0 00:03:42.770 EAL: Detected lcore 83 as core 11 on socket 0 00:03:42.770 EAL: Detected lcore 84 as core 12 on socket 0 00:03:42.770 EAL: Detected lcore 85 as core 13 on socket 0 00:03:42.770 EAL: Detected lcore 86 as core 14 on socket 0 00:03:42.770 EAL: Detected lcore 87 as core 15 on socket 0 00:03:42.770 EAL: Detected lcore 88 as core 16 on socket 0 00:03:42.770 EAL: Detected lcore 89 as core 17 on socket 0 00:03:42.770 EAL: Detected lcore 90 as core 18 on socket 0 00:03:42.770 EAL: Detected lcore 91 as core 19 on socket 0 00:03:42.770 EAL: Detected lcore 92 as core 20 on socket 0 00:03:42.770 EAL: Detected lcore 93 as core 21 on socket 0 00:03:42.770 EAL: Detected lcore 94 as core 22 on socket 0 00:03:42.770 EAL: Detected lcore 95 as core 23 on socket 0 00:03:42.770 EAL: Detected lcore 96 as core 24 on socket 0 00:03:42.770 EAL: Detected lcore 97 as core 25 on socket 0 00:03:42.770 EAL: Detected lcore 98 as core 26 on socket 0 00:03:42.770 EAL: Detected lcore 99 as core 27 on socket 0 00:03:42.770 EAL: Detected lcore 100 as core 28 on socket 0 00:03:42.770 EAL: Detected lcore 101 as core 29 on socket 0 00:03:42.770 EAL: Detected lcore 102 as core 30 on socket 0 00:03:42.770 EAL: Detected lcore 103 as core 31 on socket 0 00:03:42.770 EAL: Detected lcore 104 as core 32 on socket 0 00:03:42.770 EAL: Detected lcore 105 as core 33 on socket 0 00:03:42.770 EAL: 
Detected lcore 106 as core 34 on socket 0 00:03:42.770 EAL: Detected lcore 107 as core 35 on socket 0 00:03:42.770 EAL: Detected lcore 108 as core 0 on socket 1 00:03:42.770 EAL: Detected lcore 109 as core 1 on socket 1 00:03:42.770 EAL: Detected lcore 110 as core 2 on socket 1 00:03:42.770 EAL: Detected lcore 111 as core 3 on socket 1 00:03:42.770 EAL: Detected lcore 112 as core 4 on socket 1 00:03:42.770 EAL: Detected lcore 113 as core 5 on socket 1 00:03:42.770 EAL: Detected lcore 114 as core 6 on socket 1 00:03:42.770 EAL: Detected lcore 115 as core 7 on socket 1 00:03:42.770 EAL: Detected lcore 116 as core 8 on socket 1 00:03:42.770 EAL: Detected lcore 117 as core 9 on socket 1 00:03:42.770 EAL: Detected lcore 118 as core 10 on socket 1 00:03:42.770 EAL: Detected lcore 119 as core 11 on socket 1 00:03:42.770 EAL: Detected lcore 120 as core 12 on socket 1 00:03:42.770 EAL: Detected lcore 121 as core 13 on socket 1 00:03:42.770 EAL: Detected lcore 122 as core 14 on socket 1 00:03:42.770 EAL: Detected lcore 123 as core 15 on socket 1 00:03:42.770 EAL: Detected lcore 124 as core 16 on socket 1 00:03:42.770 EAL: Detected lcore 125 as core 17 on socket 1 00:03:42.770 EAL: Detected lcore 126 as core 18 on socket 1 00:03:42.770 EAL: Detected lcore 127 as core 19 on socket 1 00:03:42.770 EAL: Skipped lcore 128 as core 20 on socket 1 00:03:42.770 EAL: Skipped lcore 129 as core 21 on socket 1 00:03:42.770 EAL: Skipped lcore 130 as core 22 on socket 1 00:03:42.770 EAL: Skipped lcore 131 as core 23 on socket 1 00:03:42.770 EAL: Skipped lcore 132 as core 24 on socket 1 00:03:42.770 EAL: Skipped lcore 133 as core 25 on socket 1 00:03:42.770 EAL: Skipped lcore 134 as core 26 on socket 1 00:03:42.770 EAL: Skipped lcore 135 as core 27 on socket 1 00:03:42.770 EAL: Skipped lcore 136 as core 28 on socket 1 00:03:42.770 EAL: Skipped lcore 137 as core 29 on socket 1 00:03:42.770 EAL: Skipped lcore 138 as core 30 on socket 1 00:03:42.770 EAL: Skipped lcore 139 as core 31 on socket 1 00:03:42.770 EAL: Skipped lcore 140 as core 32 on socket 1 00:03:42.770 EAL: Skipped lcore 141 as core 33 on socket 1 00:03:42.770 EAL: Skipped lcore 142 as core 34 on socket 1 00:03:42.770 EAL: Skipped lcore 143 as core 35 on socket 1 00:03:42.770 EAL: Maximum logical cores by configuration: 128 00:03:42.770 EAL: Detected CPU lcores: 128 00:03:42.770 EAL: Detected NUMA nodes: 2 00:03:42.770 EAL: Checking presence of .so 'librte_eal.so.24.2' 00:03:42.770 EAL: Detected shared linkage of DPDK 00:03:42.770 EAL: No shared files mode enabled, IPC will be disabled 00:03:42.770 EAL: Bus pci wants IOVA as 'DC' 00:03:42.770 EAL: Buses did not request a specific IOVA mode. 00:03:42.770 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:42.770 EAL: Selected IOVA mode 'VA' 00:03:42.770 EAL: Probing VFIO support... 00:03:42.770 EAL: IOMMU type 1 (Type 1) is supported 00:03:42.770 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:42.770 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:42.770 EAL: VFIO support initialized 00:03:42.770 EAL: Ask a virtual area of 0x2e000 bytes 00:03:42.770 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:42.770 EAL: Setting up physically contiguous memory... 
00:03:42.770 EAL: Setting maximum number of open files to 524288 00:03:42.770 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:42.770 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:42.770 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:42.770 EAL: Ask a virtual area of 0x61000 bytes 00:03:42.770 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:42.770 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:42.770 EAL: Ask a virtual area of 0x400000000 bytes 00:03:42.770 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:42.770 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:42.770 EAL: Ask a virtual area of 0x61000 bytes 00:03:42.770 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:42.770 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:42.770 EAL: Ask a virtual area of 0x400000000 bytes 00:03:42.770 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:42.770 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:42.770 EAL: Ask a virtual area of 0x61000 bytes 00:03:42.770 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:42.770 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:42.770 EAL: Ask a virtual area of 0x400000000 bytes 00:03:42.770 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:42.770 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:42.770 EAL: Ask a virtual area of 0x61000 bytes 00:03:42.770 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:42.770 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:42.770 EAL: Ask a virtual area of 0x400000000 bytes 00:03:42.770 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:42.770 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:42.770 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:42.770 EAL: Ask a virtual area of 0x61000 bytes 00:03:42.770 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:42.770 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:42.770 EAL: Ask a virtual area of 0x400000000 bytes 00:03:42.770 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:42.770 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:42.770 EAL: Ask a virtual area of 0x61000 bytes 00:03:42.770 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:42.770 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:42.770 EAL: Ask a virtual area of 0x400000000 bytes 00:03:42.770 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:42.770 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:42.770 EAL: Ask a virtual area of 0x61000 bytes 00:03:42.770 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:42.771 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:42.771 EAL: Ask a virtual area of 0x400000000 bytes 00:03:42.771 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:42.771 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:42.771 EAL: Ask a virtual area of 0x61000 bytes 00:03:42.771 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:42.771 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:42.771 EAL: Ask a virtual area of 0x400000000 bytes 00:03:42.771 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:03:42.771 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:42.771 EAL: Hugepages will be freed exactly as allocated. 00:03:42.771 EAL: No shared files mode enabled, IPC is disabled 00:03:42.771 EAL: No shared files mode enabled, IPC is disabled 00:03:42.771 EAL: TSC frequency is ~2400000 KHz 00:03:42.771 EAL: Main lcore 0 is ready (tid=7f8332ff0a00;cpuset=[0]) 00:03:42.771 EAL: Trying to obtain current memory policy. 00:03:42.771 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:42.771 EAL: Restoring previous memory policy: 0 00:03:42.771 EAL: request: mp_malloc_sync 00:03:42.771 EAL: No shared files mode enabled, IPC is disabled 00:03:42.771 EAL: Heap on socket 0 was expanded by 2MB 00:03:42.771 EAL: No shared files mode enabled, IPC is disabled 00:03:43.033 EAL: Mem event callback 'spdk:(nil)' registered 00:03:43.033 00:03:43.033 00:03:43.033 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.033 http://cunit.sourceforge.net/ 00:03:43.033 00:03:43.033 00:03:43.033 Suite: components_suite 00:03:43.033 Test: vtophys_malloc_test ...passed 00:03:43.033 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:43.033 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.033 EAL: Restoring previous memory policy: 4 00:03:43.033 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.033 EAL: request: mp_malloc_sync 00:03:43.033 EAL: No shared files mode enabled, IPC is disabled 00:03:43.033 EAL: Heap on socket 0 was expanded by 4MB 00:03:43.033 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.033 EAL: request: mp_malloc_sync 00:03:43.033 EAL: No shared files mode enabled, IPC is disabled 00:03:43.033 EAL: Heap on socket 0 was shrunk by 4MB 00:03:43.033 EAL: Trying to obtain current memory policy. 00:03:43.033 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.033 EAL: Restoring previous memory policy: 4 00:03:43.033 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.033 EAL: request: mp_malloc_sync 00:03:43.033 EAL: No shared files mode enabled, IPC is disabled 00:03:43.033 EAL: Heap on socket 0 was expanded by 6MB 00:03:43.033 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.033 EAL: request: mp_malloc_sync 00:03:43.033 EAL: No shared files mode enabled, IPC is disabled 00:03:43.033 EAL: Heap on socket 0 was shrunk by 6MB 00:03:43.033 EAL: Trying to obtain current memory policy. 00:03:43.033 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.033 EAL: Restoring previous memory policy: 4 00:03:43.033 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.033 EAL: request: mp_malloc_sync 00:03:43.033 EAL: No shared files mode enabled, IPC is disabled 00:03:43.033 EAL: Heap on socket 0 was expanded by 10MB 00:03:43.033 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.033 EAL: request: mp_malloc_sync 00:03:43.033 EAL: No shared files mode enabled, IPC is disabled 00:03:43.033 EAL: Heap on socket 0 was shrunk by 10MB 00:03:43.033 EAL: Trying to obtain current memory policy. 
00:03:43.033 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.033 EAL: Restoring previous memory policy: 4 00:03:43.033 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.033 EAL: request: mp_malloc_sync 00:03:43.033 EAL: No shared files mode enabled, IPC is disabled 00:03:43.033 EAL: Heap on socket 0 was expanded by 18MB 00:03:43.033 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.033 EAL: request: mp_malloc_sync 00:03:43.033 EAL: No shared files mode enabled, IPC is disabled 00:03:43.033 EAL: Heap on socket 0 was shrunk by 18MB 00:03:43.033 EAL: Trying to obtain current memory policy. 00:03:43.033 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.033 EAL: Restoring previous memory policy: 4 00:03:43.033 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.033 EAL: request: mp_malloc_sync 00:03:43.033 EAL: No shared files mode enabled, IPC is disabled 00:03:43.033 EAL: Heap on socket 0 was expanded by 34MB 00:03:43.033 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.033 EAL: request: mp_malloc_sync 00:03:43.033 EAL: No shared files mode enabled, IPC is disabled 00:03:43.033 EAL: Heap on socket 0 was shrunk by 34MB 00:03:43.033 EAL: Trying to obtain current memory policy. 00:03:43.033 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.033 EAL: Restoring previous memory policy: 4 00:03:43.033 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.033 EAL: request: mp_malloc_sync 00:03:43.033 EAL: No shared files mode enabled, IPC is disabled 00:03:43.033 EAL: Heap on socket 0 was expanded by 66MB 00:03:43.033 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.033 EAL: request: mp_malloc_sync 00:03:43.033 EAL: No shared files mode enabled, IPC is disabled 00:03:43.033 EAL: Heap on socket 0 was shrunk by 66MB 00:03:43.033 EAL: Trying to obtain current memory policy. 00:03:43.033 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.033 EAL: Restoring previous memory policy: 4 00:03:43.033 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.033 EAL: request: mp_malloc_sync 00:03:43.033 EAL: No shared files mode enabled, IPC is disabled 00:03:43.033 EAL: Heap on socket 0 was expanded by 130MB 00:03:43.033 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.033 EAL: request: mp_malloc_sync 00:03:43.033 EAL: No shared files mode enabled, IPC is disabled 00:03:43.033 EAL: Heap on socket 0 was shrunk by 130MB 00:03:43.033 EAL: Trying to obtain current memory policy. 00:03:43.033 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.033 EAL: Restoring previous memory policy: 4 00:03:43.033 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.033 EAL: request: mp_malloc_sync 00:03:43.033 EAL: No shared files mode enabled, IPC is disabled 00:03:43.033 EAL: Heap on socket 0 was expanded by 258MB 00:03:43.033 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.033 EAL: request: mp_malloc_sync 00:03:43.033 EAL: No shared files mode enabled, IPC is disabled 00:03:43.033 EAL: Heap on socket 0 was shrunk by 258MB 00:03:43.033 EAL: Trying to obtain current memory policy. 
00:03:43.033 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.294 EAL: Restoring previous memory policy: 4 00:03:43.294 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.294 EAL: request: mp_malloc_sync 00:03:43.294 EAL: No shared files mode enabled, IPC is disabled 00:03:43.294 EAL: Heap on socket 0 was expanded by 514MB 00:03:43.294 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.294 EAL: request: mp_malloc_sync 00:03:43.294 EAL: No shared files mode enabled, IPC is disabled 00:03:43.294 EAL: Heap on socket 0 was shrunk by 514MB 00:03:43.294 EAL: Trying to obtain current memory policy. 00:03:43.294 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.555 EAL: Restoring previous memory policy: 4 00:03:43.555 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.555 EAL: request: mp_malloc_sync 00:03:43.555 EAL: No shared files mode enabled, IPC is disabled 00:03:43.555 EAL: Heap on socket 0 was expanded by 1026MB 00:03:43.555 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.555 EAL: request: mp_malloc_sync 00:03:43.555 EAL: No shared files mode enabled, IPC is disabled 00:03:43.555 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:43.555 passed 00:03:43.555 00:03:43.555 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.555 suites 1 1 n/a 0 0 00:03:43.555 tests 2 2 2 0 0 00:03:43.555 asserts 497 497 497 0 n/a 00:03:43.555 00:03:43.555 Elapsed time = 0.687 seconds 00:03:43.555 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.555 EAL: request: mp_malloc_sync 00:03:43.555 EAL: No shared files mode enabled, IPC is disabled 00:03:43.555 EAL: Heap on socket 0 was shrunk by 2MB 00:03:43.555 EAL: No shared files mode enabled, IPC is disabled 00:03:43.555 EAL: No shared files mode enabled, IPC is disabled 00:03:43.555 EAL: No shared files mode enabled, IPC is disabled 00:03:43.555 00:03:43.555 real 0m0.828s 00:03:43.555 user 0m0.428s 00:03:43.555 sys 0m0.372s 00:03:43.555 11:40:46 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:43.555 11:40:46 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:43.555 ************************************ 00:03:43.555 END TEST env_vtophys 00:03:43.555 ************************************ 00:03:43.816 11:40:46 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:43.816 11:40:46 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:43.817 11:40:46 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:43.817 11:40:46 env -- common/autotest_common.sh@10 -- # set +x 00:03:43.817 ************************************ 00:03:43.817 START TEST env_pci 00:03:43.817 ************************************ 00:03:43.817 11:40:46 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:43.817 00:03:43.817 00:03:43.817 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.817 http://cunit.sourceforge.net/ 00:03:43.817 00:03:43.817 00:03:43.817 Suite: pci 00:03:43.817 Test: pci_hook ...[2024-10-11 11:40:46.319263] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1683098 has claimed it 00:03:43.817 EAL: Cannot find device (10000:00:01.0) 00:03:43.817 EAL: Failed to attach device on primary process 00:03:43.817 passed 00:03:43.817 00:03:43.817 Run Summary: Type Total Ran Passed Failed Inactive 
00:03:43.817 suites 1 1 n/a 0 0 00:03:43.817 tests 1 1 1 0 0 00:03:43.817 asserts 25 25 25 0 n/a 00:03:43.817 00:03:43.817 Elapsed time = 0.030 seconds 00:03:43.817 00:03:43.817 real 0m0.052s 00:03:43.817 user 0m0.019s 00:03:43.817 sys 0m0.032s 00:03:43.817 11:40:46 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:43.817 11:40:46 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:43.817 ************************************ 00:03:43.817 END TEST env_pci 00:03:43.817 ************************************ 00:03:43.817 11:40:46 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:43.817 11:40:46 env -- env/env.sh@15 -- # uname 00:03:43.817 11:40:46 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:43.817 11:40:46 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:43.817 11:40:46 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:43.817 11:40:46 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:03:43.817 11:40:46 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:43.817 11:40:46 env -- common/autotest_common.sh@10 -- # set +x 00:03:43.817 ************************************ 00:03:43.817 START TEST env_dpdk_post_init 00:03:43.817 ************************************ 00:03:43.817 11:40:46 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:43.817 EAL: Detected CPU lcores: 128 00:03:43.817 EAL: Detected NUMA nodes: 2 00:03:43.817 EAL: Detected shared linkage of DPDK 00:03:43.817 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:43.817 EAL: Selected IOVA mode 'VA' 00:03:43.817 EAL: VFIO support initialized 00:03:44.078 EAL: Using IOMMU type 1 (Type 1) 00:03:48.289 Starting DPDK initialization... 00:03:48.289 Starting SPDK post initialization... 00:03:48.289 SPDK NVMe probe 00:03:48.289 Attaching to 0000:65:00.0 00:03:48.289 Attached to 0000:65:00.0 00:03:48.289 Cleaning up... 
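The SPDK NVMe probe above assumes the device at 0000:65:00.0 was already bound to a userspace driver and that hugepages were reserved; that preparation is not part of this transcript, but with SPDK's own setup script it is roughly (a sketch, assuming a built tree at the workspace path used throughout this run):

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sudo ./scripts/setup.sh status    # show NVMe controllers and which driver currently owns them
sudo ./scripts/setup.sh           # default 'config' action: reserve hugepages, bind devices to vfio-pci/uio
./test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000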
00:03:49.676 00:03:49.676 real 0m5.741s 00:03:49.676 user 0m0.184s 00:03:49.676 sys 0m0.104s 00:03:49.676 11:40:52 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:49.676 11:40:52 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:49.676 ************************************ 00:03:49.676 END TEST env_dpdk_post_init 00:03:49.676 ************************************ 00:03:49.676 11:40:52 env -- env/env.sh@26 -- # uname 00:03:49.676 11:40:52 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:49.676 11:40:52 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:49.676 11:40:52 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:49.676 11:40:52 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:49.676 11:40:52 env -- common/autotest_common.sh@10 -- # set +x 00:03:49.676 ************************************ 00:03:49.676 START TEST env_mem_callbacks 00:03:49.676 ************************************ 00:03:49.676 11:40:52 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:49.676 EAL: Detected CPU lcores: 128 00:03:49.676 EAL: Detected NUMA nodes: 2 00:03:49.676 EAL: Detected shared linkage of DPDK 00:03:49.676 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:49.676 EAL: Selected IOVA mode 'VA' 00:03:49.676 EAL: VFIO support initialized 00:03:49.676 00:03:49.676 00:03:49.676 CUnit - A unit testing framework for C - Version 2.1-3 00:03:49.676 http://cunit.sourceforge.net/ 00:03:49.676 00:03:49.676 00:03:49.676 Suite: memory 00:03:49.676 Test: test ... 00:03:49.676 register 0x200000200000 2097152 00:03:49.676 malloc 3145728 00:03:49.676 register 0x200000400000 4194304 00:03:49.676 buf 0x200000500000 len 3145728 PASSED 00:03:49.676 malloc 64 00:03:49.676 buf 0x2000004fff40 len 64 PASSED 00:03:49.676 malloc 4194304 00:03:49.676 register 0x200000800000 6291456 00:03:49.676 buf 0x200000a00000 len 4194304 PASSED 00:03:49.676 free 0x200000500000 3145728 00:03:49.676 free 0x2000004fff40 64 00:03:49.676 unregister 0x200000400000 4194304 PASSED 00:03:49.676 free 0x200000a00000 4194304 00:03:49.676 unregister 0x200000800000 6291456 PASSED 00:03:49.676 malloc 8388608 00:03:49.676 register 0x200000400000 10485760 00:03:49.676 buf 0x200000600000 len 8388608 PASSED 00:03:49.676 free 0x200000600000 8388608 00:03:49.676 unregister 0x200000400000 10485760 PASSED 00:03:49.676 passed 00:03:49.676 00:03:49.676 Run Summary: Type Total Ran Passed Failed Inactive 00:03:49.676 suites 1 1 n/a 0 0 00:03:49.676 tests 1 1 1 0 0 00:03:49.676 asserts 15 15 15 0 n/a 00:03:49.676 00:03:49.676 Elapsed time = 0.010 seconds 00:03:49.676 00:03:49.676 real 0m0.070s 00:03:49.676 user 0m0.020s 00:03:49.676 sys 0m0.048s 00:03:49.676 11:40:52 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:49.676 11:40:52 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:49.676 ************************************ 00:03:49.676 END TEST env_mem_callbacks 00:03:49.676 ************************************ 00:03:49.937 00:03:49.937 real 0m7.505s 00:03:49.937 user 0m1.094s 00:03:49.937 sys 0m0.965s 00:03:49.937 11:40:52 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:49.937 11:40:52 env -- common/autotest_common.sh@10 -- # set +x 00:03:49.937 ************************************ 00:03:49.937 END TEST env 
00:03:49.937 ************************************ 00:03:49.937 11:40:52 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:49.937 11:40:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:49.937 11:40:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:49.937 11:40:52 -- common/autotest_common.sh@10 -- # set +x 00:03:49.937 ************************************ 00:03:49.937 START TEST rpc 00:03:49.937 ************************************ 00:03:49.937 11:40:52 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:49.937 * Looking for test storage... 00:03:49.937 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:49.937 11:40:52 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:49.937 11:40:52 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:03:49.937 11:40:52 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:50.199 11:40:52 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:50.199 11:40:52 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:50.199 11:40:52 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:50.199 11:40:52 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:50.199 11:40:52 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:50.199 11:40:52 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:50.199 11:40:52 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:50.199 11:40:52 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:50.199 11:40:52 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:50.199 11:40:52 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:50.199 11:40:52 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:50.199 11:40:52 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:50.199 11:40:52 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:50.199 11:40:52 rpc -- scripts/common.sh@345 -- # : 1 00:03:50.199 11:40:52 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:50.199 11:40:52 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:50.199 11:40:52 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:50.199 11:40:52 rpc -- scripts/common.sh@353 -- # local d=1 00:03:50.199 11:40:52 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:50.199 11:40:52 rpc -- scripts/common.sh@355 -- # echo 1 00:03:50.199 11:40:52 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:50.199 11:40:52 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:50.199 11:40:52 rpc -- scripts/common.sh@353 -- # local d=2 00:03:50.199 11:40:52 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:50.199 11:40:52 rpc -- scripts/common.sh@355 -- # echo 2 00:03:50.199 11:40:52 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:50.199 11:40:52 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:50.199 11:40:52 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:50.199 11:40:52 rpc -- scripts/common.sh@368 -- # return 0 00:03:50.199 11:40:52 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:50.199 11:40:52 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:50.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.199 --rc genhtml_branch_coverage=1 00:03:50.199 --rc genhtml_function_coverage=1 00:03:50.199 --rc genhtml_legend=1 00:03:50.199 --rc geninfo_all_blocks=1 00:03:50.199 --rc geninfo_unexecuted_blocks=1 00:03:50.199 00:03:50.199 ' 00:03:50.199 11:40:52 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:50.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.199 --rc genhtml_branch_coverage=1 00:03:50.199 --rc genhtml_function_coverage=1 00:03:50.199 --rc genhtml_legend=1 00:03:50.199 --rc geninfo_all_blocks=1 00:03:50.199 --rc geninfo_unexecuted_blocks=1 00:03:50.199 00:03:50.199 ' 00:03:50.199 11:40:52 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:50.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.199 --rc genhtml_branch_coverage=1 00:03:50.199 --rc genhtml_function_coverage=1 00:03:50.199 --rc genhtml_legend=1 00:03:50.199 --rc geninfo_all_blocks=1 00:03:50.199 --rc geninfo_unexecuted_blocks=1 00:03:50.199 00:03:50.199 ' 00:03:50.199 11:40:52 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:50.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.199 --rc genhtml_branch_coverage=1 00:03:50.199 --rc genhtml_function_coverage=1 00:03:50.199 --rc genhtml_legend=1 00:03:50.199 --rc geninfo_all_blocks=1 00:03:50.199 --rc geninfo_unexecuted_blocks=1 00:03:50.199 00:03:50.199 ' 00:03:50.199 11:40:52 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1684360 00:03:50.199 11:40:52 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:50.199 11:40:52 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1684360 00:03:50.199 11:40:52 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:50.199 11:40:52 rpc -- common/autotest_common.sh@831 -- # '[' -z 1684360 ']' 00:03:50.199 11:40:52 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:50.199 11:40:52 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:50.199 11:40:52 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:50.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
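The rpc_integrity test that follows drives this spdk_tgt instance over the default UNIX socket /var/tmp/spdk.sock. Reduced to plain rpc.py calls, the same sequence looks roughly like this (a sketch, assuming the target launched above is already listening):

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
./scripts/rpc.py bdev_malloc_create 8 512                      # 8 MiB malloc bdev with 512-byte blocks -> Malloc0 (16384 blocks)
./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0  # layer a passthru bdev on top of Malloc0
./scripts/rpc.py bdev_get_bdevs | jq length                    # expect 2: Malloc0 (claimed) plus Passthru0
./scripts/rpc.py bdev_passthru_delete Passthru0                # tear down in reverse order
./scripts/rpc.py bdev_malloc_delete Malloc0
./scripts/rpc.py bdev_get_bdevs | jq length                    # back to 0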
00:03:50.199 11:40:52 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:50.199 11:40:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:50.199 [2024-10-11 11:40:52.728769] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:03:50.199 [2024-10-11 11:40:52.728842] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1684360 ] 00:03:50.199 [2024-10-11 11:40:52.814782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:50.199 [2024-10-11 11:40:52.867647] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:50.199 [2024-10-11 11:40:52.867700] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1684360' to capture a snapshot of events at runtime. 00:03:50.199 [2024-10-11 11:40:52.867709] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:50.199 [2024-10-11 11:40:52.867716] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:50.200 [2024-10-11 11:40:52.867722] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1684360 for offline analysis/debug. 00:03:50.200 [2024-10-11 11:40:52.868565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:51.144 11:40:53 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:51.144 11:40:53 rpc -- common/autotest_common.sh@864 -- # return 0 00:03:51.144 11:40:53 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:51.144 11:40:53 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:51.144 11:40:53 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:51.144 11:40:53 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:51.144 11:40:53 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:51.144 11:40:53 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:51.144 11:40:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:51.144 ************************************ 00:03:51.144 START TEST rpc_integrity 00:03:51.144 ************************************ 00:03:51.144 11:40:53 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:03:51.144 11:40:53 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:51.144 11:40:53 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.144 11:40:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.144 11:40:53 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:51.144 11:40:53 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:51.144 11:40:53 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:51.144 11:40:53 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 
00:03:51.144 11:40:53 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:51.144 11:40:53 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.144 11:40:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.144 11:40:53 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:51.144 11:40:53 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:51.144 11:40:53 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:51.144 11:40:53 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.144 11:40:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.144 11:40:53 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:51.144 11:40:53 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:51.144 { 00:03:51.144 "name": "Malloc0", 00:03:51.144 "aliases": [ 00:03:51.144 "0d4de68f-072a-42ea-9c75-881e6a6eb30e" 00:03:51.144 ], 00:03:51.144 "product_name": "Malloc disk", 00:03:51.144 "block_size": 512, 00:03:51.144 "num_blocks": 16384, 00:03:51.144 "uuid": "0d4de68f-072a-42ea-9c75-881e6a6eb30e", 00:03:51.144 "assigned_rate_limits": { 00:03:51.144 "rw_ios_per_sec": 0, 00:03:51.144 "rw_mbytes_per_sec": 0, 00:03:51.144 "r_mbytes_per_sec": 0, 00:03:51.144 "w_mbytes_per_sec": 0 00:03:51.144 }, 00:03:51.144 "claimed": false, 00:03:51.144 "zoned": false, 00:03:51.144 "supported_io_types": { 00:03:51.144 "read": true, 00:03:51.144 "write": true, 00:03:51.144 "unmap": true, 00:03:51.144 "flush": true, 00:03:51.144 "reset": true, 00:03:51.144 "nvme_admin": false, 00:03:51.144 "nvme_io": false, 00:03:51.144 "nvme_io_md": false, 00:03:51.144 "write_zeroes": true, 00:03:51.144 "zcopy": true, 00:03:51.144 "get_zone_info": false, 00:03:51.144 "zone_management": false, 00:03:51.144 "zone_append": false, 00:03:51.144 "compare": false, 00:03:51.144 "compare_and_write": false, 00:03:51.144 "abort": true, 00:03:51.144 "seek_hole": false, 00:03:51.144 "seek_data": false, 00:03:51.144 "copy": true, 00:03:51.144 "nvme_iov_md": false 00:03:51.144 }, 00:03:51.144 "memory_domains": [ 00:03:51.144 { 00:03:51.144 "dma_device_id": "system", 00:03:51.144 "dma_device_type": 1 00:03:51.144 }, 00:03:51.144 { 00:03:51.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:51.144 "dma_device_type": 2 00:03:51.144 } 00:03:51.144 ], 00:03:51.144 "driver_specific": {} 00:03:51.144 } 00:03:51.144 ]' 00:03:51.144 11:40:53 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:51.144 11:40:53 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:51.144 11:40:53 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:51.144 11:40:53 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.144 11:40:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.144 [2024-10-11 11:40:53.731192] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:51.144 [2024-10-11 11:40:53.731238] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:51.144 [2024-10-11 11:40:53.731259] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1467080 00:03:51.144 [2024-10-11 11:40:53.731267] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:51.144 [2024-10-11 11:40:53.732838] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:51.145 [2024-10-11 11:40:53.732875] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:51.145 Passthru0 00:03:51.145 11:40:53 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:51.145 11:40:53 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:51.145 11:40:53 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.145 11:40:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.145 11:40:53 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:51.145 11:40:53 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:51.145 { 00:03:51.145 "name": "Malloc0", 00:03:51.145 "aliases": [ 00:03:51.145 "0d4de68f-072a-42ea-9c75-881e6a6eb30e" 00:03:51.145 ], 00:03:51.145 "product_name": "Malloc disk", 00:03:51.145 "block_size": 512, 00:03:51.145 "num_blocks": 16384, 00:03:51.145 "uuid": "0d4de68f-072a-42ea-9c75-881e6a6eb30e", 00:03:51.145 "assigned_rate_limits": { 00:03:51.145 "rw_ios_per_sec": 0, 00:03:51.145 "rw_mbytes_per_sec": 0, 00:03:51.145 "r_mbytes_per_sec": 0, 00:03:51.145 "w_mbytes_per_sec": 0 00:03:51.145 }, 00:03:51.145 "claimed": true, 00:03:51.145 "claim_type": "exclusive_write", 00:03:51.145 "zoned": false, 00:03:51.145 "supported_io_types": { 00:03:51.145 "read": true, 00:03:51.145 "write": true, 00:03:51.145 "unmap": true, 00:03:51.145 "flush": true, 00:03:51.145 "reset": true, 00:03:51.145 "nvme_admin": false, 00:03:51.145 "nvme_io": false, 00:03:51.145 "nvme_io_md": false, 00:03:51.145 "write_zeroes": true, 00:03:51.145 "zcopy": true, 00:03:51.145 "get_zone_info": false, 00:03:51.145 "zone_management": false, 00:03:51.145 "zone_append": false, 00:03:51.145 "compare": false, 00:03:51.145 "compare_and_write": false, 00:03:51.145 "abort": true, 00:03:51.145 "seek_hole": false, 00:03:51.145 "seek_data": false, 00:03:51.145 "copy": true, 00:03:51.145 "nvme_iov_md": false 00:03:51.145 }, 00:03:51.145 "memory_domains": [ 00:03:51.145 { 00:03:51.145 "dma_device_id": "system", 00:03:51.145 "dma_device_type": 1 00:03:51.145 }, 00:03:51.145 { 00:03:51.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:51.145 "dma_device_type": 2 00:03:51.145 } 00:03:51.145 ], 00:03:51.145 "driver_specific": {} 00:03:51.145 }, 00:03:51.145 { 00:03:51.145 "name": "Passthru0", 00:03:51.145 "aliases": [ 00:03:51.145 "61861c53-a78f-59b7-b89d-093125c55da9" 00:03:51.145 ], 00:03:51.145 "product_name": "passthru", 00:03:51.145 "block_size": 512, 00:03:51.145 "num_blocks": 16384, 00:03:51.145 "uuid": "61861c53-a78f-59b7-b89d-093125c55da9", 00:03:51.145 "assigned_rate_limits": { 00:03:51.145 "rw_ios_per_sec": 0, 00:03:51.145 "rw_mbytes_per_sec": 0, 00:03:51.145 "r_mbytes_per_sec": 0, 00:03:51.145 "w_mbytes_per_sec": 0 00:03:51.145 }, 00:03:51.145 "claimed": false, 00:03:51.145 "zoned": false, 00:03:51.145 "supported_io_types": { 00:03:51.145 "read": true, 00:03:51.145 "write": true, 00:03:51.145 "unmap": true, 00:03:51.145 "flush": true, 00:03:51.145 "reset": true, 00:03:51.145 "nvme_admin": false, 00:03:51.145 "nvme_io": false, 00:03:51.145 "nvme_io_md": false, 00:03:51.145 "write_zeroes": true, 00:03:51.145 "zcopy": true, 00:03:51.145 "get_zone_info": false, 00:03:51.145 "zone_management": false, 00:03:51.145 "zone_append": false, 00:03:51.145 "compare": false, 00:03:51.145 "compare_and_write": false, 00:03:51.145 "abort": true, 00:03:51.145 "seek_hole": false, 00:03:51.145 "seek_data": false, 00:03:51.145 "copy": true, 00:03:51.145 "nvme_iov_md": false 00:03:51.145 }, 00:03:51.145 "memory_domains": [ 
00:03:51.145 { 00:03:51.145 "dma_device_id": "system", 00:03:51.145 "dma_device_type": 1 00:03:51.145 }, 00:03:51.145 { 00:03:51.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:51.145 "dma_device_type": 2 00:03:51.145 } 00:03:51.145 ], 00:03:51.145 "driver_specific": { 00:03:51.145 "passthru": { 00:03:51.145 "name": "Passthru0", 00:03:51.145 "base_bdev_name": "Malloc0" 00:03:51.145 } 00:03:51.145 } 00:03:51.145 } 00:03:51.145 ]' 00:03:51.145 11:40:53 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:51.145 11:40:53 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:51.145 11:40:53 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:51.145 11:40:53 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.145 11:40:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.145 11:40:53 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:51.145 11:40:53 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:51.145 11:40:53 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.145 11:40:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.145 11:40:53 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:51.145 11:40:53 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:51.145 11:40:53 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.145 11:40:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.145 11:40:53 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:51.145 11:40:53 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:51.145 11:40:53 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:51.407 11:40:53 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:51.407 00:03:51.407 real 0m0.296s 00:03:51.407 user 0m0.179s 00:03:51.407 sys 0m0.049s 00:03:51.407 11:40:53 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:51.407 11:40:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.407 ************************************ 00:03:51.407 END TEST rpc_integrity 00:03:51.407 ************************************ 00:03:51.407 11:40:53 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:51.407 11:40:53 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:51.407 11:40:53 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:51.407 11:40:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:51.407 ************************************ 00:03:51.407 START TEST rpc_plugins 00:03:51.407 ************************************ 00:03:51.407 11:40:53 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:03:51.407 11:40:53 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:51.407 11:40:53 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.407 11:40:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:51.407 11:40:53 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:51.407 11:40:53 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:51.407 11:40:53 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:51.407 11:40:53 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.407 11:40:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:51.407 
11:40:54 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:51.407 11:40:54 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:51.407 { 00:03:51.407 "name": "Malloc1", 00:03:51.407 "aliases": [ 00:03:51.407 "cc70aee0-4384-40a8-8417-afe0c14fb65a" 00:03:51.407 ], 00:03:51.407 "product_name": "Malloc disk", 00:03:51.407 "block_size": 4096, 00:03:51.407 "num_blocks": 256, 00:03:51.407 "uuid": "cc70aee0-4384-40a8-8417-afe0c14fb65a", 00:03:51.407 "assigned_rate_limits": { 00:03:51.407 "rw_ios_per_sec": 0, 00:03:51.407 "rw_mbytes_per_sec": 0, 00:03:51.407 "r_mbytes_per_sec": 0, 00:03:51.407 "w_mbytes_per_sec": 0 00:03:51.407 }, 00:03:51.407 "claimed": false, 00:03:51.407 "zoned": false, 00:03:51.407 "supported_io_types": { 00:03:51.407 "read": true, 00:03:51.407 "write": true, 00:03:51.407 "unmap": true, 00:03:51.407 "flush": true, 00:03:51.407 "reset": true, 00:03:51.407 "nvme_admin": false, 00:03:51.407 "nvme_io": false, 00:03:51.407 "nvme_io_md": false, 00:03:51.407 "write_zeroes": true, 00:03:51.407 "zcopy": true, 00:03:51.407 "get_zone_info": false, 00:03:51.407 "zone_management": false, 00:03:51.407 "zone_append": false, 00:03:51.407 "compare": false, 00:03:51.407 "compare_and_write": false, 00:03:51.407 "abort": true, 00:03:51.407 "seek_hole": false, 00:03:51.407 "seek_data": false, 00:03:51.407 "copy": true, 00:03:51.407 "nvme_iov_md": false 00:03:51.407 }, 00:03:51.407 "memory_domains": [ 00:03:51.407 { 00:03:51.407 "dma_device_id": "system", 00:03:51.407 "dma_device_type": 1 00:03:51.407 }, 00:03:51.407 { 00:03:51.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:51.407 "dma_device_type": 2 00:03:51.407 } 00:03:51.407 ], 00:03:51.407 "driver_specific": {} 00:03:51.407 } 00:03:51.407 ]' 00:03:51.407 11:40:54 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:51.407 11:40:54 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:51.408 11:40:54 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:51.408 11:40:54 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.408 11:40:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:51.408 11:40:54 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:51.408 11:40:54 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:51.408 11:40:54 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.408 11:40:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:51.408 11:40:54 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:51.408 11:40:54 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:51.408 11:40:54 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:51.669 11:40:54 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:51.669 00:03:51.669 real 0m0.143s 00:03:51.669 user 0m0.088s 00:03:51.669 sys 0m0.021s 00:03:51.669 11:40:54 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:51.669 11:40:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:51.669 ************************************ 00:03:51.669 END TEST rpc_plugins 00:03:51.669 ************************************ 00:03:51.669 11:40:54 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:51.669 11:40:54 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:51.669 11:40:54 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:51.669 11:40:54 rpc -- common/autotest_common.sh@10 -- # set +x 
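The create_malloc/delete_malloc calls in the rpc_plugins test above are not built-in RPCs; they come from the rpc_plugin module that rpc.py loads via --plugin, located through the PYTHONPATH exported earlier in this run. A minimal manual equivalent (a sketch, assuming the same spdk_tgt is still running) is:

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
export PYTHONPATH=$PYTHONPATH:./test/rpc_plugins            # make the rpc_plugin module importable
./scripts/rpc.py --plugin rpc_plugin create_malloc          # plugin-defined method; prints the new bdev name (Malloc1)
./scripts/rpc.py --plugin rpc_plugin delete_malloc Malloc1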
00:03:51.669 ************************************ 00:03:51.669 START TEST rpc_trace_cmd_test 00:03:51.669 ************************************ 00:03:51.669 11:40:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:03:51.669 11:40:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:51.669 11:40:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:51.669 11:40:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.669 11:40:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:51.669 11:40:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:51.669 11:40:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:51.669 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1684360", 00:03:51.669 "tpoint_group_mask": "0x8", 00:03:51.669 "iscsi_conn": { 00:03:51.669 "mask": "0x2", 00:03:51.669 "tpoint_mask": "0x0" 00:03:51.669 }, 00:03:51.669 "scsi": { 00:03:51.669 "mask": "0x4", 00:03:51.669 "tpoint_mask": "0x0" 00:03:51.669 }, 00:03:51.669 "bdev": { 00:03:51.669 "mask": "0x8", 00:03:51.669 "tpoint_mask": "0xffffffffffffffff" 00:03:51.669 }, 00:03:51.669 "nvmf_rdma": { 00:03:51.669 "mask": "0x10", 00:03:51.669 "tpoint_mask": "0x0" 00:03:51.669 }, 00:03:51.669 "nvmf_tcp": { 00:03:51.669 "mask": "0x20", 00:03:51.669 "tpoint_mask": "0x0" 00:03:51.669 }, 00:03:51.669 "ftl": { 00:03:51.669 "mask": "0x40", 00:03:51.669 "tpoint_mask": "0x0" 00:03:51.669 }, 00:03:51.669 "blobfs": { 00:03:51.669 "mask": "0x80", 00:03:51.669 "tpoint_mask": "0x0" 00:03:51.669 }, 00:03:51.669 "dsa": { 00:03:51.669 "mask": "0x200", 00:03:51.669 "tpoint_mask": "0x0" 00:03:51.669 }, 00:03:51.669 "thread": { 00:03:51.669 "mask": "0x400", 00:03:51.669 "tpoint_mask": "0x0" 00:03:51.669 }, 00:03:51.669 "nvme_pcie": { 00:03:51.669 "mask": "0x800", 00:03:51.669 "tpoint_mask": "0x0" 00:03:51.669 }, 00:03:51.669 "iaa": { 00:03:51.669 "mask": "0x1000", 00:03:51.669 "tpoint_mask": "0x0" 00:03:51.669 }, 00:03:51.669 "nvme_tcp": { 00:03:51.669 "mask": "0x2000", 00:03:51.669 "tpoint_mask": "0x0" 00:03:51.669 }, 00:03:51.669 "bdev_nvme": { 00:03:51.669 "mask": "0x4000", 00:03:51.669 "tpoint_mask": "0x0" 00:03:51.669 }, 00:03:51.669 "sock": { 00:03:51.669 "mask": "0x8000", 00:03:51.669 "tpoint_mask": "0x0" 00:03:51.669 }, 00:03:51.669 "blob": { 00:03:51.669 "mask": "0x10000", 00:03:51.669 "tpoint_mask": "0x0" 00:03:51.669 }, 00:03:51.670 "bdev_raid": { 00:03:51.670 "mask": "0x20000", 00:03:51.670 "tpoint_mask": "0x0" 00:03:51.670 }, 00:03:51.670 "scheduler": { 00:03:51.670 "mask": "0x40000", 00:03:51.670 "tpoint_mask": "0x0" 00:03:51.670 } 00:03:51.670 }' 00:03:51.670 11:40:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:51.670 11:40:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:51.670 11:40:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:51.670 11:40:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:51.670 11:40:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:51.670 11:40:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:51.670 11:40:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:51.931 11:40:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:51.931 11:40:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:51.931 11:40:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 
-- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:51.931 00:03:51.931 real 0m0.259s 00:03:51.931 user 0m0.218s 00:03:51.931 sys 0m0.030s 00:03:51.931 11:40:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:51.931 11:40:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:51.931 ************************************ 00:03:51.931 END TEST rpc_trace_cmd_test 00:03:51.931 ************************************ 00:03:51.931 11:40:54 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:51.931 11:40:54 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:51.931 11:40:54 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:51.931 11:40:54 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:51.931 11:40:54 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:51.931 11:40:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:51.931 ************************************ 00:03:51.931 START TEST rpc_daemon_integrity 00:03:51.931 ************************************ 00:03:51.931 11:40:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:03:51.931 11:40:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:51.931 11:40:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.931 11:40:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.931 11:40:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:51.931 11:40:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:51.931 11:40:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:51.931 11:40:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:51.931 11:40:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:51.931 11:40:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.931 11:40:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.931 11:40:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:51.931 11:40:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:51.931 11:40:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:51.931 11:40:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.931 11:40:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.931 11:40:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:51.931 11:40:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:51.931 { 00:03:51.931 "name": "Malloc2", 00:03:51.931 "aliases": [ 00:03:51.931 "f84fb429-0ca4-47af-bde4-a7e74fb2c1a8" 00:03:51.931 ], 00:03:51.931 "product_name": "Malloc disk", 00:03:51.931 "block_size": 512, 00:03:51.931 "num_blocks": 16384, 00:03:51.931 "uuid": "f84fb429-0ca4-47af-bde4-a7e74fb2c1a8", 00:03:51.931 "assigned_rate_limits": { 00:03:51.931 "rw_ios_per_sec": 0, 00:03:51.931 "rw_mbytes_per_sec": 0, 00:03:51.931 "r_mbytes_per_sec": 0, 00:03:51.931 "w_mbytes_per_sec": 0 00:03:51.931 }, 00:03:51.931 "claimed": false, 00:03:51.931 "zoned": false, 00:03:51.931 "supported_io_types": { 00:03:51.931 "read": true, 00:03:51.931 "write": true, 00:03:51.931 "unmap": true, 00:03:51.931 "flush": true, 00:03:51.931 "reset": true, 00:03:51.931 "nvme_admin": false, 00:03:51.931 "nvme_io": false, 00:03:51.931 "nvme_io_md": false, 
00:03:51.931 "write_zeroes": true, 00:03:51.931 "zcopy": true, 00:03:51.931 "get_zone_info": false, 00:03:51.931 "zone_management": false, 00:03:51.931 "zone_append": false, 00:03:51.931 "compare": false, 00:03:51.931 "compare_and_write": false, 00:03:51.931 "abort": true, 00:03:51.931 "seek_hole": false, 00:03:51.931 "seek_data": false, 00:03:51.931 "copy": true, 00:03:51.931 "nvme_iov_md": false 00:03:51.931 }, 00:03:51.931 "memory_domains": [ 00:03:51.931 { 00:03:51.931 "dma_device_id": "system", 00:03:51.931 "dma_device_type": 1 00:03:51.931 }, 00:03:51.931 { 00:03:51.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:51.931 "dma_device_type": 2 00:03:51.931 } 00:03:51.931 ], 00:03:51.931 "driver_specific": {} 00:03:51.931 } 00:03:51.931 ]' 00:03:51.931 11:40:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:52.193 11:40:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:52.193 11:40:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:52.193 11:40:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:52.193 11:40:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:52.193 [2024-10-11 11:40:54.677804] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:52.193 [2024-10-11 11:40:54.677848] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:52.193 [2024-10-11 11:40:54.677867] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1467300 00:03:52.193 [2024-10-11 11:40:54.677875] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:52.193 [2024-10-11 11:40:54.679346] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:52.193 [2024-10-11 11:40:54.679383] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:52.193 Passthru0 00:03:52.193 11:40:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:52.193 11:40:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:52.193 11:40:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:52.193 11:40:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:52.193 11:40:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:52.193 11:40:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:52.193 { 00:03:52.193 "name": "Malloc2", 00:03:52.193 "aliases": [ 00:03:52.193 "f84fb429-0ca4-47af-bde4-a7e74fb2c1a8" 00:03:52.193 ], 00:03:52.193 "product_name": "Malloc disk", 00:03:52.193 "block_size": 512, 00:03:52.193 "num_blocks": 16384, 00:03:52.193 "uuid": "f84fb429-0ca4-47af-bde4-a7e74fb2c1a8", 00:03:52.193 "assigned_rate_limits": { 00:03:52.193 "rw_ios_per_sec": 0, 00:03:52.193 "rw_mbytes_per_sec": 0, 00:03:52.193 "r_mbytes_per_sec": 0, 00:03:52.193 "w_mbytes_per_sec": 0 00:03:52.193 }, 00:03:52.193 "claimed": true, 00:03:52.193 "claim_type": "exclusive_write", 00:03:52.193 "zoned": false, 00:03:52.193 "supported_io_types": { 00:03:52.193 "read": true, 00:03:52.193 "write": true, 00:03:52.193 "unmap": true, 00:03:52.193 "flush": true, 00:03:52.193 "reset": true, 00:03:52.193 "nvme_admin": false, 00:03:52.193 "nvme_io": false, 00:03:52.193 "nvme_io_md": false, 00:03:52.193 "write_zeroes": true, 00:03:52.193 "zcopy": true, 00:03:52.193 "get_zone_info": false, 00:03:52.193 
"zone_management": false, 00:03:52.193 "zone_append": false, 00:03:52.193 "compare": false, 00:03:52.193 "compare_and_write": false, 00:03:52.193 "abort": true, 00:03:52.193 "seek_hole": false, 00:03:52.193 "seek_data": false, 00:03:52.193 "copy": true, 00:03:52.193 "nvme_iov_md": false 00:03:52.193 }, 00:03:52.193 "memory_domains": [ 00:03:52.193 { 00:03:52.193 "dma_device_id": "system", 00:03:52.193 "dma_device_type": 1 00:03:52.193 }, 00:03:52.193 { 00:03:52.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:52.193 "dma_device_type": 2 00:03:52.193 } 00:03:52.193 ], 00:03:52.193 "driver_specific": {} 00:03:52.193 }, 00:03:52.193 { 00:03:52.193 "name": "Passthru0", 00:03:52.193 "aliases": [ 00:03:52.193 "db521b83-943b-5d83-bb24-f2e27108c5ec" 00:03:52.193 ], 00:03:52.193 "product_name": "passthru", 00:03:52.193 "block_size": 512, 00:03:52.193 "num_blocks": 16384, 00:03:52.193 "uuid": "db521b83-943b-5d83-bb24-f2e27108c5ec", 00:03:52.193 "assigned_rate_limits": { 00:03:52.193 "rw_ios_per_sec": 0, 00:03:52.193 "rw_mbytes_per_sec": 0, 00:03:52.193 "r_mbytes_per_sec": 0, 00:03:52.193 "w_mbytes_per_sec": 0 00:03:52.193 }, 00:03:52.193 "claimed": false, 00:03:52.193 "zoned": false, 00:03:52.193 "supported_io_types": { 00:03:52.193 "read": true, 00:03:52.193 "write": true, 00:03:52.193 "unmap": true, 00:03:52.193 "flush": true, 00:03:52.193 "reset": true, 00:03:52.193 "nvme_admin": false, 00:03:52.193 "nvme_io": false, 00:03:52.193 "nvme_io_md": false, 00:03:52.193 "write_zeroes": true, 00:03:52.193 "zcopy": true, 00:03:52.193 "get_zone_info": false, 00:03:52.193 "zone_management": false, 00:03:52.193 "zone_append": false, 00:03:52.193 "compare": false, 00:03:52.193 "compare_and_write": false, 00:03:52.193 "abort": true, 00:03:52.193 "seek_hole": false, 00:03:52.193 "seek_data": false, 00:03:52.193 "copy": true, 00:03:52.193 "nvme_iov_md": false 00:03:52.193 }, 00:03:52.193 "memory_domains": [ 00:03:52.193 { 00:03:52.193 "dma_device_id": "system", 00:03:52.194 "dma_device_type": 1 00:03:52.194 }, 00:03:52.194 { 00:03:52.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:52.194 "dma_device_type": 2 00:03:52.194 } 00:03:52.194 ], 00:03:52.194 "driver_specific": { 00:03:52.194 "passthru": { 00:03:52.194 "name": "Passthru0", 00:03:52.194 "base_bdev_name": "Malloc2" 00:03:52.194 } 00:03:52.194 } 00:03:52.194 } 00:03:52.194 ]' 00:03:52.194 11:40:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:52.194 11:40:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:52.194 11:40:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:52.194 11:40:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:52.194 11:40:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:52.194 11:40:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:52.194 11:40:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:52.194 11:40:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:52.194 11:40:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:52.194 11:40:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:52.194 11:40:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:52.194 11:40:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:52.194 11:40:54 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:03:52.194 11:40:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:52.194 11:40:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:52.194 11:40:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:52.194 11:40:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:52.194 00:03:52.194 real 0m0.303s 00:03:52.194 user 0m0.184s 00:03:52.194 sys 0m0.051s 00:03:52.194 11:40:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:52.194 11:40:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:52.194 ************************************ 00:03:52.194 END TEST rpc_daemon_integrity 00:03:52.194 ************************************ 00:03:52.194 11:40:54 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:52.194 11:40:54 rpc -- rpc/rpc.sh@84 -- # killprocess 1684360 00:03:52.194 11:40:54 rpc -- common/autotest_common.sh@950 -- # '[' -z 1684360 ']' 00:03:52.194 11:40:54 rpc -- common/autotest_common.sh@954 -- # kill -0 1684360 00:03:52.194 11:40:54 rpc -- common/autotest_common.sh@955 -- # uname 00:03:52.194 11:40:54 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:52.194 11:40:54 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1684360 00:03:52.455 11:40:54 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:52.455 11:40:54 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:52.455 11:40:54 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1684360' 00:03:52.455 killing process with pid 1684360 00:03:52.455 11:40:54 rpc -- common/autotest_common.sh@969 -- # kill 1684360 00:03:52.455 11:40:54 rpc -- common/autotest_common.sh@974 -- # wait 1684360 00:03:52.717 00:03:52.717 real 0m2.718s 00:03:52.717 user 0m3.466s 00:03:52.717 sys 0m0.841s 00:03:52.717 11:40:55 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:52.717 11:40:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:52.717 ************************************ 00:03:52.717 END TEST rpc 00:03:52.717 ************************************ 00:03:52.717 11:40:55 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:52.717 11:40:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:52.717 11:40:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:52.717 11:40:55 -- common/autotest_common.sh@10 -- # set +x 00:03:52.717 ************************************ 00:03:52.717 START TEST skip_rpc 00:03:52.717 ************************************ 00:03:52.717 11:40:55 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:52.717 * Looking for test storage... 
00:03:52.717 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:52.717 11:40:55 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:52.717 11:40:55 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:03:52.717 11:40:55 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:52.978 11:40:55 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:52.978 11:40:55 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:52.979 11:40:55 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:52.979 11:40:55 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:52.979 11:40:55 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:52.979 11:40:55 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:52.979 11:40:55 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:52.979 11:40:55 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:52.979 11:40:55 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:52.979 11:40:55 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:52.979 11:40:55 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:52.979 11:40:55 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:52.979 11:40:55 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:52.979 11:40:55 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:52.979 11:40:55 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:52.979 11:40:55 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:52.979 11:40:55 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:52.979 11:40:55 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:52.979 11:40:55 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:52.979 11:40:55 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:52.979 11:40:55 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:52.979 11:40:55 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:52.979 11:40:55 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:52.979 11:40:55 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:52.979 11:40:55 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:52.979 11:40:55 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:52.979 11:40:55 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:52.979 11:40:55 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:52.979 11:40:55 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:52.979 11:40:55 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:52.979 11:40:55 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:52.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.979 --rc genhtml_branch_coverage=1 00:03:52.979 --rc genhtml_function_coverage=1 00:03:52.979 --rc genhtml_legend=1 00:03:52.979 --rc geninfo_all_blocks=1 00:03:52.979 --rc geninfo_unexecuted_blocks=1 00:03:52.979 00:03:52.979 ' 00:03:52.979 11:40:55 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:52.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.979 --rc genhtml_branch_coverage=1 00:03:52.979 --rc genhtml_function_coverage=1 00:03:52.979 --rc genhtml_legend=1 00:03:52.979 --rc geninfo_all_blocks=1 00:03:52.979 --rc geninfo_unexecuted_blocks=1 00:03:52.979 00:03:52.979 ' 00:03:52.979 11:40:55 skip_rpc -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:03:52.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.979 --rc genhtml_branch_coverage=1 00:03:52.979 --rc genhtml_function_coverage=1 00:03:52.979 --rc genhtml_legend=1 00:03:52.979 --rc geninfo_all_blocks=1 00:03:52.979 --rc geninfo_unexecuted_blocks=1 00:03:52.979 00:03:52.979 ' 00:03:52.979 11:40:55 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:52.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.979 --rc genhtml_branch_coverage=1 00:03:52.979 --rc genhtml_function_coverage=1 00:03:52.979 --rc genhtml_legend=1 00:03:52.979 --rc geninfo_all_blocks=1 00:03:52.979 --rc geninfo_unexecuted_blocks=1 00:03:52.979 00:03:52.979 ' 00:03:52.979 11:40:55 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:52.979 11:40:55 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:52.979 11:40:55 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:52.979 11:40:55 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:52.979 11:40:55 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:52.979 11:40:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:52.979 ************************************ 00:03:52.979 START TEST skip_rpc 00:03:52.979 ************************************ 00:03:52.979 11:40:55 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:03:52.979 11:40:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1685208 00:03:52.979 11:40:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:52.979 11:40:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:52.979 11:40:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:52.979 [2024-10-11 11:40:55.560835] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
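Because the target above is started with --no-rpc-server, the skip_rpc test expects every RPC call against it to fail. A quick manual check of the same behaviour (a sketch) is:

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
./build/bin/spdk_tgt --no-rpc-server -m 0x1 &   # run the target without its JSON-RPC listener
sleep 5                                         # give it time to initialize, as the test script does
./scripts/rpc.py spdk_get_version               # expected to fail: nothing listens on /var/tmp/spdk.sock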
00:03:52.979 [2024-10-11 11:40:55.560892] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1685208 ] 00:03:52.979 [2024-10-11 11:40:55.643241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:53.240 [2024-10-11 11:40:55.695229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:58.532 11:41:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:58.532 11:41:00 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:03:58.532 11:41:00 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:58.532 11:41:00 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:03:58.532 11:41:00 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:58.532 11:41:00 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:03:58.532 11:41:00 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:58.532 11:41:00 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:03:58.532 11:41:00 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:58.532 11:41:00 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:58.532 11:41:00 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:03:58.532 11:41:00 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:03:58.532 11:41:00 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:58.532 11:41:00 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:03:58.532 11:41:00 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:58.532 11:41:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:58.532 11:41:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1685208 00:03:58.532 11:41:00 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 1685208 ']' 00:03:58.532 11:41:00 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 1685208 00:03:58.532 11:41:00 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:03:58.532 11:41:00 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:58.532 11:41:00 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1685208 00:03:58.532 11:41:00 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:58.532 11:41:00 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:58.532 11:41:00 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1685208' 00:03:58.532 killing process with pid 1685208 00:03:58.532 11:41:00 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 1685208 00:03:58.532 11:41:00 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 1685208 00:03:58.532 00:03:58.532 real 0m5.262s 00:03:58.532 user 0m5.005s 00:03:58.532 sys 0m0.300s 00:03:58.532 11:41:00 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:58.532 11:41:00 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:58.532 ************************************ 00:03:58.532 END TEST skip_rpc 00:03:58.532 
************************************ 00:03:58.532 11:41:00 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:58.532 11:41:00 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:58.532 11:41:00 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:58.532 11:41:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:58.532 ************************************ 00:03:58.532 START TEST skip_rpc_with_json 00:03:58.532 ************************************ 00:03:58.532 11:41:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:03:58.532 11:41:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:58.532 11:41:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1686249 00:03:58.532 11:41:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:58.532 11:41:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1686249 00:03:58.532 11:41:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:58.532 11:41:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 1686249 ']' 00:03:58.532 11:41:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:58.532 11:41:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:58.532 11:41:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:58.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:58.532 11:41:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:58.532 11:41:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:58.532 [2024-10-11 11:41:00.899016] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
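The skip_rpc_with_json test that follows exercises the JSON-config path: it builds up NVMe-oF state over RPC and then snapshots it with save_config into test/rpc/config.json. Reduced to plain rpc.py calls, the core sequence is roughly (a sketch, assuming the spdk_tgt just launched is listening):

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
./scripts/rpc.py nvmf_get_transports --trtype tcp     # fails at first: no TCP transport exists yet
./scripts/rpc.py nvmf_create_transport -t tcp         # create the NVMe-oF TCP transport
./scripts/rpc.py save_config > test/rpc/config.json   # dump the running configuration as JSON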
00:03:58.532 [2024-10-11 11:41:00.899075] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1686249 ] 00:03:58.532 [2024-10-11 11:41:00.975289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:58.532 [2024-10-11 11:41:01.007802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:59.104 11:41:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:59.104 11:41:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:03:59.104 11:41:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:59.104 11:41:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:59.104 11:41:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:59.104 [2024-10-11 11:41:01.684252] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:59.104 request: 00:03:59.104 { 00:03:59.104 "trtype": "tcp", 00:03:59.104 "method": "nvmf_get_transports", 00:03:59.104 "req_id": 1 00:03:59.104 } 00:03:59.104 Got JSON-RPC error response 00:03:59.104 response: 00:03:59.104 { 00:03:59.104 "code": -19, 00:03:59.104 "message": "No such device" 00:03:59.104 } 00:03:59.104 11:41:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:03:59.104 11:41:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:59.104 11:41:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:59.104 11:41:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:59.104 [2024-10-11 11:41:01.696353] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:59.104 11:41:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:59.104 11:41:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:59.104 11:41:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:59.104 11:41:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:59.365 11:41:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:59.365 11:41:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:59.365 { 00:03:59.365 "subsystems": [ 00:03:59.365 { 00:03:59.365 "subsystem": "fsdev", 00:03:59.365 "config": [ 00:03:59.365 { 00:03:59.365 "method": "fsdev_set_opts", 00:03:59.365 "params": { 00:03:59.365 "fsdev_io_pool_size": 65535, 00:03:59.365 "fsdev_io_cache_size": 256 00:03:59.365 } 00:03:59.365 } 00:03:59.365 ] 00:03:59.365 }, 00:03:59.365 { 00:03:59.365 "subsystem": "vfio_user_target", 00:03:59.365 "config": null 00:03:59.365 }, 00:03:59.365 { 00:03:59.365 "subsystem": "keyring", 00:03:59.365 "config": [] 00:03:59.365 }, 00:03:59.365 { 00:03:59.365 "subsystem": "iobuf", 00:03:59.365 "config": [ 00:03:59.365 { 00:03:59.365 "method": "iobuf_set_options", 00:03:59.365 "params": { 00:03:59.365 "small_pool_count": 8192, 00:03:59.365 "large_pool_count": 1024, 00:03:59.365 "small_bufsize": 8192, 00:03:59.365 "large_bufsize": 135168 00:03:59.365 } 00:03:59.365 } 00:03:59.365 ] 00:03:59.365 }, 00:03:59.365 { 
00:03:59.365 "subsystem": "sock", 00:03:59.365 "config": [ 00:03:59.365 { 00:03:59.365 "method": "sock_set_default_impl", 00:03:59.365 "params": { 00:03:59.365 "impl_name": "posix" 00:03:59.365 } 00:03:59.365 }, 00:03:59.365 { 00:03:59.365 "method": "sock_impl_set_options", 00:03:59.365 "params": { 00:03:59.365 "impl_name": "ssl", 00:03:59.365 "recv_buf_size": 4096, 00:03:59.365 "send_buf_size": 4096, 00:03:59.365 "enable_recv_pipe": true, 00:03:59.365 "enable_quickack": false, 00:03:59.365 "enable_placement_id": 0, 00:03:59.365 "enable_zerocopy_send_server": true, 00:03:59.365 "enable_zerocopy_send_client": false, 00:03:59.365 "zerocopy_threshold": 0, 00:03:59.365 "tls_version": 0, 00:03:59.365 "enable_ktls": false 00:03:59.365 } 00:03:59.365 }, 00:03:59.365 { 00:03:59.365 "method": "sock_impl_set_options", 00:03:59.365 "params": { 00:03:59.365 "impl_name": "posix", 00:03:59.365 "recv_buf_size": 2097152, 00:03:59.365 "send_buf_size": 2097152, 00:03:59.365 "enable_recv_pipe": true, 00:03:59.365 "enable_quickack": false, 00:03:59.365 "enable_placement_id": 0, 00:03:59.365 "enable_zerocopy_send_server": true, 00:03:59.365 "enable_zerocopy_send_client": false, 00:03:59.365 "zerocopy_threshold": 0, 00:03:59.365 "tls_version": 0, 00:03:59.365 "enable_ktls": false 00:03:59.365 } 00:03:59.365 } 00:03:59.365 ] 00:03:59.365 }, 00:03:59.365 { 00:03:59.365 "subsystem": "vmd", 00:03:59.365 "config": [] 00:03:59.365 }, 00:03:59.365 { 00:03:59.365 "subsystem": "accel", 00:03:59.365 "config": [ 00:03:59.365 { 00:03:59.365 "method": "accel_set_options", 00:03:59.365 "params": { 00:03:59.365 "small_cache_size": 128, 00:03:59.365 "large_cache_size": 16, 00:03:59.365 "task_count": 2048, 00:03:59.365 "sequence_count": 2048, 00:03:59.365 "buf_count": 2048 00:03:59.366 } 00:03:59.366 } 00:03:59.366 ] 00:03:59.366 }, 00:03:59.366 { 00:03:59.366 "subsystem": "bdev", 00:03:59.366 "config": [ 00:03:59.366 { 00:03:59.366 "method": "bdev_set_options", 00:03:59.366 "params": { 00:03:59.366 "bdev_io_pool_size": 65535, 00:03:59.366 "bdev_io_cache_size": 256, 00:03:59.366 "bdev_auto_examine": true, 00:03:59.366 "iobuf_small_cache_size": 128, 00:03:59.366 "iobuf_large_cache_size": 16 00:03:59.366 } 00:03:59.366 }, 00:03:59.366 { 00:03:59.366 "method": "bdev_raid_set_options", 00:03:59.366 "params": { 00:03:59.366 "process_window_size_kb": 1024, 00:03:59.366 "process_max_bandwidth_mb_sec": 0 00:03:59.366 } 00:03:59.366 }, 00:03:59.366 { 00:03:59.366 "method": "bdev_iscsi_set_options", 00:03:59.366 "params": { 00:03:59.366 "timeout_sec": 30 00:03:59.366 } 00:03:59.366 }, 00:03:59.366 { 00:03:59.366 "method": "bdev_nvme_set_options", 00:03:59.366 "params": { 00:03:59.366 "action_on_timeout": "none", 00:03:59.366 "timeout_us": 0, 00:03:59.366 "timeout_admin_us": 0, 00:03:59.366 "keep_alive_timeout_ms": 10000, 00:03:59.366 "arbitration_burst": 0, 00:03:59.366 "low_priority_weight": 0, 00:03:59.366 "medium_priority_weight": 0, 00:03:59.366 "high_priority_weight": 0, 00:03:59.366 "nvme_adminq_poll_period_us": 10000, 00:03:59.366 "nvme_ioq_poll_period_us": 0, 00:03:59.366 "io_queue_requests": 0, 00:03:59.366 "delay_cmd_submit": true, 00:03:59.366 "transport_retry_count": 4, 00:03:59.366 "bdev_retry_count": 3, 00:03:59.366 "transport_ack_timeout": 0, 00:03:59.366 "ctrlr_loss_timeout_sec": 0, 00:03:59.366 "reconnect_delay_sec": 0, 00:03:59.366 "fast_io_fail_timeout_sec": 0, 00:03:59.366 "disable_auto_failback": false, 00:03:59.366 "generate_uuids": false, 00:03:59.366 "transport_tos": 0, 00:03:59.366 "nvme_error_stat": false, 
00:03:59.366 "rdma_srq_size": 0, 00:03:59.366 "io_path_stat": false, 00:03:59.366 "allow_accel_sequence": false, 00:03:59.366 "rdma_max_cq_size": 0, 00:03:59.366 "rdma_cm_event_timeout_ms": 0, 00:03:59.366 "dhchap_digests": [ 00:03:59.366 "sha256", 00:03:59.366 "sha384", 00:03:59.366 "sha512" 00:03:59.366 ], 00:03:59.366 "dhchap_dhgroups": [ 00:03:59.366 "null", 00:03:59.366 "ffdhe2048", 00:03:59.366 "ffdhe3072", 00:03:59.366 "ffdhe4096", 00:03:59.366 "ffdhe6144", 00:03:59.366 "ffdhe8192" 00:03:59.366 ] 00:03:59.366 } 00:03:59.366 }, 00:03:59.366 { 00:03:59.366 "method": "bdev_nvme_set_hotplug", 00:03:59.366 "params": { 00:03:59.366 "period_us": 100000, 00:03:59.366 "enable": false 00:03:59.366 } 00:03:59.366 }, 00:03:59.366 { 00:03:59.366 "method": "bdev_wait_for_examine" 00:03:59.366 } 00:03:59.366 ] 00:03:59.366 }, 00:03:59.366 { 00:03:59.366 "subsystem": "scsi", 00:03:59.366 "config": null 00:03:59.366 }, 00:03:59.366 { 00:03:59.366 "subsystem": "scheduler", 00:03:59.366 "config": [ 00:03:59.366 { 00:03:59.366 "method": "framework_set_scheduler", 00:03:59.366 "params": { 00:03:59.366 "name": "static" 00:03:59.366 } 00:03:59.366 } 00:03:59.366 ] 00:03:59.366 }, 00:03:59.366 { 00:03:59.366 "subsystem": "vhost_scsi", 00:03:59.366 "config": [] 00:03:59.366 }, 00:03:59.366 { 00:03:59.366 "subsystem": "vhost_blk", 00:03:59.366 "config": [] 00:03:59.366 }, 00:03:59.366 { 00:03:59.366 "subsystem": "ublk", 00:03:59.366 "config": [] 00:03:59.366 }, 00:03:59.366 { 00:03:59.366 "subsystem": "nbd", 00:03:59.366 "config": [] 00:03:59.366 }, 00:03:59.366 { 00:03:59.366 "subsystem": "nvmf", 00:03:59.366 "config": [ 00:03:59.366 { 00:03:59.366 "method": "nvmf_set_config", 00:03:59.366 "params": { 00:03:59.366 "discovery_filter": "match_any", 00:03:59.366 "admin_cmd_passthru": { 00:03:59.366 "identify_ctrlr": false 00:03:59.366 }, 00:03:59.366 "dhchap_digests": [ 00:03:59.366 "sha256", 00:03:59.366 "sha384", 00:03:59.366 "sha512" 00:03:59.366 ], 00:03:59.366 "dhchap_dhgroups": [ 00:03:59.366 "null", 00:03:59.366 "ffdhe2048", 00:03:59.366 "ffdhe3072", 00:03:59.366 "ffdhe4096", 00:03:59.366 "ffdhe6144", 00:03:59.366 "ffdhe8192" 00:03:59.366 ] 00:03:59.366 } 00:03:59.366 }, 00:03:59.366 { 00:03:59.366 "method": "nvmf_set_max_subsystems", 00:03:59.366 "params": { 00:03:59.366 "max_subsystems": 1024 00:03:59.366 } 00:03:59.366 }, 00:03:59.366 { 00:03:59.366 "method": "nvmf_set_crdt", 00:03:59.366 "params": { 00:03:59.366 "crdt1": 0, 00:03:59.366 "crdt2": 0, 00:03:59.366 "crdt3": 0 00:03:59.366 } 00:03:59.366 }, 00:03:59.366 { 00:03:59.366 "method": "nvmf_create_transport", 00:03:59.366 "params": { 00:03:59.366 "trtype": "TCP", 00:03:59.366 "max_queue_depth": 128, 00:03:59.366 "max_io_qpairs_per_ctrlr": 127, 00:03:59.366 "in_capsule_data_size": 4096, 00:03:59.366 "max_io_size": 131072, 00:03:59.366 "io_unit_size": 131072, 00:03:59.366 "max_aq_depth": 128, 00:03:59.366 "num_shared_buffers": 511, 00:03:59.366 "buf_cache_size": 4294967295, 00:03:59.366 "dif_insert_or_strip": false, 00:03:59.366 "zcopy": false, 00:03:59.366 "c2h_success": true, 00:03:59.366 "sock_priority": 0, 00:03:59.366 "abort_timeout_sec": 1, 00:03:59.366 "ack_timeout": 0, 00:03:59.366 "data_wr_pool_size": 0 00:03:59.366 } 00:03:59.366 } 00:03:59.366 ] 00:03:59.366 }, 00:03:59.366 { 00:03:59.366 "subsystem": "iscsi", 00:03:59.366 "config": [ 00:03:59.366 { 00:03:59.366 "method": "iscsi_set_options", 00:03:59.366 "params": { 00:03:59.366 "node_base": "iqn.2016-06.io.spdk", 00:03:59.366 "max_sessions": 128, 00:03:59.366 
"max_connections_per_session": 2, 00:03:59.366 "max_queue_depth": 64, 00:03:59.366 "default_time2wait": 2, 00:03:59.366 "default_time2retain": 20, 00:03:59.366 "first_burst_length": 8192, 00:03:59.366 "immediate_data": true, 00:03:59.366 "allow_duplicated_isid": false, 00:03:59.366 "error_recovery_level": 0, 00:03:59.366 "nop_timeout": 60, 00:03:59.366 "nop_in_interval": 30, 00:03:59.366 "disable_chap": false, 00:03:59.366 "require_chap": false, 00:03:59.366 "mutual_chap": false, 00:03:59.366 "chap_group": 0, 00:03:59.366 "max_large_datain_per_connection": 64, 00:03:59.366 "max_r2t_per_connection": 4, 00:03:59.366 "pdu_pool_size": 36864, 00:03:59.366 "immediate_data_pool_size": 16384, 00:03:59.366 "data_out_pool_size": 2048 00:03:59.366 } 00:03:59.366 } 00:03:59.366 ] 00:03:59.366 } 00:03:59.366 ] 00:03:59.366 } 00:03:59.366 11:41:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:59.366 11:41:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1686249 00:03:59.366 11:41:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1686249 ']' 00:03:59.366 11:41:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1686249 00:03:59.366 11:41:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:03:59.366 11:41:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:59.366 11:41:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1686249 00:03:59.366 11:41:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:59.366 11:41:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:59.366 11:41:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1686249' 00:03:59.366 killing process with pid 1686249 00:03:59.366 11:41:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1686249 00:03:59.366 11:41:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1686249 00:03:59.627 11:41:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1686589 00:03:59.627 11:41:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:59.627 11:41:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:04.915 11:41:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1686589 00:04:04.915 11:41:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1686589 ']' 00:04:04.915 11:41:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1686589 00:04:04.915 11:41:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:04.915 11:41:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:04.915 11:41:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1686589 00:04:04.915 11:41:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:04.915 11:41:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:04.915 11:41:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 1686589' 00:04:04.915 killing process with pid 1686589 00:04:04.915 11:41:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1686589 00:04:04.915 11:41:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1686589 00:04:04.915 11:41:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:04.915 11:41:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:04.915 00:04:04.915 real 0m6.542s 00:04:04.915 user 0m6.440s 00:04:04.915 sys 0m0.567s 00:04:04.915 11:41:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:04.915 11:41:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:04.915 ************************************ 00:04:04.915 END TEST skip_rpc_with_json 00:04:04.915 ************************************ 00:04:04.915 11:41:07 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:04.915 11:41:07 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:04.915 11:41:07 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:04.915 11:41:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.915 ************************************ 00:04:04.915 START TEST skip_rpc_with_delay 00:04:04.915 ************************************ 00:04:04.915 11:41:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:04.915 11:41:07 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:04.915 11:41:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:04.915 11:41:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:04.915 11:41:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:04.915 11:41:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:04.915 11:41:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:04.915 11:41:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:04.915 11:41:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:04.915 11:41:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:04.915 11:41:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:04.915 11:41:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:04.915 11:41:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:04.915 [2024-10-11 
11:41:07.530714] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:04.915 11:41:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:04.915 11:41:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:04.915 11:41:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:04.915 11:41:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:04.915 00:04:04.915 real 0m0.082s 00:04:04.915 user 0m0.052s 00:04:04.915 sys 0m0.029s 00:04:04.915 11:41:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:04.915 11:41:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:04.915 ************************************ 00:04:04.915 END TEST skip_rpc_with_delay 00:04:04.915 ************************************ 00:04:04.915 11:41:07 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:04.915 11:41:07 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:04.915 11:41:07 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:04.915 11:41:07 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:04.915 11:41:07 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:04.915 11:41:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:05.176 ************************************ 00:04:05.176 START TEST exit_on_failed_rpc_init 00:04:05.176 ************************************ 00:04:05.176 11:41:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:05.176 11:41:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1687658 00:04:05.176 11:41:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1687658 00:04:05.176 11:41:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:05.176 11:41:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 1687658 ']' 00:04:05.176 11:41:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:05.176 11:41:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:05.176 11:41:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:05.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:05.176 11:41:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:05.176 11:41:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:05.176 [2024-10-11 11:41:07.692363] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
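Note: the exit_on_failed_rpc_init case that starts here boils down to launching a second target against an RPC socket the first one already owns. A minimal way to provoke the same failure by hand (core masks are illustrative, the socket path is the default) is sketched below; the two spdk_tgt launches later in this log do exactly this.

  ./build/bin/spdk_tgt -m 0x1 &   # first instance binds the default socket, /var/tmp/spdk.sock
  sleep 2
  ./build/bin/spdk_tgt -m 0x2     # second instance is expected to log
                                  # "RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another."
                                  # and exit non-zero, as seen in the output below
  kill %1; wait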
00:04:05.176 [2024-10-11 11:41:07.692410] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1687658 ] 00:04:05.176 [2024-10-11 11:41:07.768467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.176 [2024-10-11 11:41:07.800216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.119 11:41:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:06.119 11:41:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:06.119 11:41:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:06.119 11:41:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:06.119 11:41:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:06.119 11:41:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:06.119 11:41:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:06.119 11:41:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:06.119 11:41:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:06.119 11:41:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:06.120 11:41:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:06.120 11:41:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:06.120 11:41:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:06.120 11:41:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:06.120 11:41:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:06.120 [2024-10-11 11:41:08.553176] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:04:06.120 [2024-10-11 11:41:08.553228] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1687970 ] 00:04:06.120 [2024-10-11 11:41:08.632132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.120 [2024-10-11 11:41:08.668289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:06.120 [2024-10-11 11:41:08.668343] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:06.120 [2024-10-11 11:41:08.668353] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:06.120 [2024-10-11 11:41:08.668361] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:06.120 11:41:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:06.120 11:41:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:06.120 11:41:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:06.120 11:41:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:06.120 11:41:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:06.120 11:41:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:06.120 11:41:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:06.120 11:41:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1687658 00:04:06.120 11:41:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 1687658 ']' 00:04:06.120 11:41:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 1687658 00:04:06.120 11:41:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:06.120 11:41:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:06.120 11:41:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1687658 00:04:06.120 11:41:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:06.120 11:41:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:06.120 11:41:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1687658' 00:04:06.120 killing process with pid 1687658 00:04:06.120 11:41:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 1687658 00:04:06.120 11:41:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 1687658 00:04:06.382 00:04:06.382 real 0m1.324s 00:04:06.382 user 0m1.558s 00:04:06.382 sys 0m0.374s 00:04:06.382 11:41:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:06.382 11:41:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:06.382 ************************************ 00:04:06.382 END TEST exit_on_failed_rpc_init 00:04:06.382 ************************************ 00:04:06.382 11:41:08 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:06.382 00:04:06.382 real 0m13.735s 00:04:06.382 user 0m13.287s 00:04:06.382 sys 0m1.590s 00:04:06.382 11:41:08 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:06.382 11:41:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.382 ************************************ 00:04:06.382 END TEST skip_rpc 00:04:06.382 ************************************ 00:04:06.382 11:41:09 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:06.382 11:41:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:06.382 11:41:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:06.382 11:41:09 -- 
common/autotest_common.sh@10 -- # set +x 00:04:06.382 ************************************ 00:04:06.382 START TEST rpc_client 00:04:06.382 ************************************ 00:04:06.382 11:41:09 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:06.643 * Looking for test storage... 00:04:06.643 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:06.643 11:41:09 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:06.643 11:41:09 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:04:06.643 11:41:09 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:06.643 11:41:09 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:06.643 11:41:09 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:06.643 11:41:09 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:06.643 11:41:09 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:06.643 11:41:09 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:06.643 11:41:09 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:06.643 11:41:09 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:06.643 11:41:09 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:06.643 11:41:09 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:06.643 11:41:09 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:06.643 11:41:09 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:06.643 11:41:09 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:06.643 11:41:09 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:06.643 11:41:09 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:06.643 11:41:09 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:06.643 11:41:09 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:06.643 11:41:09 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:06.643 11:41:09 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:06.643 11:41:09 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:06.643 11:41:09 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:06.643 11:41:09 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:06.643 11:41:09 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:06.643 11:41:09 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:06.643 11:41:09 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:06.643 11:41:09 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:06.643 11:41:09 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:06.643 11:41:09 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:06.643 11:41:09 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:06.643 11:41:09 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:06.643 11:41:09 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:06.643 11:41:09 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:06.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.643 --rc genhtml_branch_coverage=1 00:04:06.643 --rc genhtml_function_coverage=1 00:04:06.643 --rc genhtml_legend=1 00:04:06.643 --rc geninfo_all_blocks=1 00:04:06.643 --rc geninfo_unexecuted_blocks=1 00:04:06.643 00:04:06.643 ' 00:04:06.643 11:41:09 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:06.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.643 --rc genhtml_branch_coverage=1 00:04:06.643 --rc genhtml_function_coverage=1 00:04:06.643 --rc genhtml_legend=1 00:04:06.643 --rc geninfo_all_blocks=1 00:04:06.643 --rc geninfo_unexecuted_blocks=1 00:04:06.643 00:04:06.643 ' 00:04:06.643 11:41:09 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:06.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.643 --rc genhtml_branch_coverage=1 00:04:06.643 --rc genhtml_function_coverage=1 00:04:06.643 --rc genhtml_legend=1 00:04:06.643 --rc geninfo_all_blocks=1 00:04:06.643 --rc geninfo_unexecuted_blocks=1 00:04:06.643 00:04:06.643 ' 00:04:06.643 11:41:09 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:06.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.643 --rc genhtml_branch_coverage=1 00:04:06.643 --rc genhtml_function_coverage=1 00:04:06.643 --rc genhtml_legend=1 00:04:06.643 --rc geninfo_all_blocks=1 00:04:06.643 --rc geninfo_unexecuted_blocks=1 00:04:06.643 00:04:06.643 ' 00:04:06.643 11:41:09 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:06.643 OK 00:04:06.643 11:41:09 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:06.643 00:04:06.643 real 0m0.215s 00:04:06.643 user 0m0.131s 00:04:06.643 sys 0m0.099s 00:04:06.644 11:41:09 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:06.644 11:41:09 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:06.644 ************************************ 00:04:06.644 END TEST rpc_client 00:04:06.644 ************************************ 00:04:06.644 11:41:09 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
00:04:06.644 11:41:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:06.644 11:41:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:06.644 11:41:09 -- common/autotest_common.sh@10 -- # set +x 00:04:06.905 ************************************ 00:04:06.905 START TEST json_config 00:04:06.905 ************************************ 00:04:06.905 11:41:09 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:06.905 11:41:09 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:06.905 11:41:09 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:04:06.905 11:41:09 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:06.905 11:41:09 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:06.905 11:41:09 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:06.905 11:41:09 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:06.905 11:41:09 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:06.905 11:41:09 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:06.905 11:41:09 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:06.905 11:41:09 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:06.905 11:41:09 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:06.905 11:41:09 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:06.905 11:41:09 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:06.905 11:41:09 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:06.905 11:41:09 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:06.905 11:41:09 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:06.905 11:41:09 json_config -- scripts/common.sh@345 -- # : 1 00:04:06.905 11:41:09 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:06.905 11:41:09 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:06.905 11:41:09 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:06.905 11:41:09 json_config -- scripts/common.sh@353 -- # local d=1 00:04:06.905 11:41:09 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:06.905 11:41:09 json_config -- scripts/common.sh@355 -- # echo 1 00:04:06.905 11:41:09 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:06.905 11:41:09 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:06.905 11:41:09 json_config -- scripts/common.sh@353 -- # local d=2 00:04:06.905 11:41:09 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:06.905 11:41:09 json_config -- scripts/common.sh@355 -- # echo 2 00:04:06.905 11:41:09 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:06.905 11:41:09 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:06.905 11:41:09 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:06.905 11:41:09 json_config -- scripts/common.sh@368 -- # return 0 00:04:06.905 11:41:09 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:06.905 11:41:09 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:06.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.905 --rc genhtml_branch_coverage=1 00:04:06.905 --rc genhtml_function_coverage=1 00:04:06.905 --rc genhtml_legend=1 00:04:06.905 --rc geninfo_all_blocks=1 00:04:06.905 --rc geninfo_unexecuted_blocks=1 00:04:06.905 00:04:06.905 ' 00:04:06.905 11:41:09 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:06.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.905 --rc genhtml_branch_coverage=1 00:04:06.905 --rc genhtml_function_coverage=1 00:04:06.905 --rc genhtml_legend=1 00:04:06.905 --rc geninfo_all_blocks=1 00:04:06.905 --rc geninfo_unexecuted_blocks=1 00:04:06.905 00:04:06.905 ' 00:04:06.905 11:41:09 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:06.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.905 --rc genhtml_branch_coverage=1 00:04:06.905 --rc genhtml_function_coverage=1 00:04:06.905 --rc genhtml_legend=1 00:04:06.905 --rc geninfo_all_blocks=1 00:04:06.905 --rc geninfo_unexecuted_blocks=1 00:04:06.905 00:04:06.905 ' 00:04:06.905 11:41:09 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:06.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.905 --rc genhtml_branch_coverage=1 00:04:06.905 --rc genhtml_function_coverage=1 00:04:06.905 --rc genhtml_legend=1 00:04:06.905 --rc geninfo_all_blocks=1 00:04:06.906 --rc geninfo_unexecuted_blocks=1 00:04:06.906 00:04:06.906 ' 00:04:06.906 11:41:09 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:06.906 11:41:09 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:06.906 11:41:09 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:06.906 11:41:09 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:06.906 11:41:09 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:06.906 11:41:09 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:06.906 11:41:09 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:06.906 11:41:09 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:06.906 11:41:09 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:04:06.906 11:41:09 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:06.906 11:41:09 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:06.906 11:41:09 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:06.906 11:41:09 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:06.906 11:41:09 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:06.906 11:41:09 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:06.906 11:41:09 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:06.906 11:41:09 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:06.906 11:41:09 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:06.906 11:41:09 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:06.906 11:41:09 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:06.906 11:41:09 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:06.906 11:41:09 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:06.906 11:41:09 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:06.906 11:41:09 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:06.906 11:41:09 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:06.906 11:41:09 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:06.906 11:41:09 json_config -- paths/export.sh@5 -- # export PATH 00:04:06.906 11:41:09 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:06.906 11:41:09 json_config -- nvmf/common.sh@51 -- # : 0 00:04:06.906 11:41:09 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:06.906 11:41:09 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:04:06.906 11:41:09 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:06.906 11:41:09 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:06.906 11:41:09 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:06.906 11:41:09 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:06.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:06.906 11:41:09 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:06.906 11:41:09 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:06.906 11:41:09 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:06.906 11:41:09 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:06.906 11:41:09 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:06.906 11:41:09 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:06.906 11:41:09 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:06.906 11:41:09 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:06.906 11:41:09 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:06.906 11:41:09 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:06.906 11:41:09 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:06.906 11:41:09 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:06.906 11:41:09 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:06.906 11:41:09 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:06.906 11:41:09 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:06.906 11:41:09 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:06.906 11:41:09 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:06.906 11:41:09 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:06.906 11:41:09 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:06.906 INFO: JSON configuration test init 00:04:06.906 11:41:09 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:06.906 11:41:09 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:06.906 11:41:09 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:06.906 11:41:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:06.906 11:41:09 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:06.906 11:41:09 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:06.906 11:41:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:06.906 11:41:09 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:06.906 11:41:09 json_config -- 
json_config/common.sh@9 -- # local app=target 00:04:06.906 11:41:09 json_config -- json_config/common.sh@10 -- # shift 00:04:06.906 11:41:09 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:06.906 11:41:09 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:06.906 11:41:09 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:06.906 11:41:09 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:06.906 11:41:09 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:06.906 11:41:09 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1688139 00:04:06.906 11:41:09 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:06.906 Waiting for target to run... 00:04:06.906 11:41:09 json_config -- json_config/common.sh@25 -- # waitforlisten 1688139 /var/tmp/spdk_tgt.sock 00:04:06.906 11:41:09 json_config -- common/autotest_common.sh@831 -- # '[' -z 1688139 ']' 00:04:06.906 11:41:09 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:06.906 11:41:09 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:06.906 11:41:09 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:06.906 11:41:09 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:06.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:06.906 11:41:09 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:06.906 11:41:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:07.167 [2024-10-11 11:41:09.647587] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
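Note: the json_config target above is started with --wait-for-rpc, so it sits in a pre-initialization state until it is configured over the socket. A minimal sketch of that handshake follows; my_config.json is a placeholder for whatever full configuration is fed in (in this run it comes from gen_nvme.sh --json-with-subsystems), and after load_config the target is fully initialized, as the bdev and nvmf RPCs later in this log show.

  ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  sleep 2
  # push a full JSON configuration while the framework is still waiting for RPC
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config < my_config.json
  # only needed if the supplied configuration does not itself start the framework:
  # ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock framework_start_init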
00:04:07.167 [2024-10-11 11:41:09.647657] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1688139 ] 00:04:07.428 [2024-10-11 11:41:10.094249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:07.428 [2024-10-11 11:41:10.130296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:08.013 11:41:10 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:08.013 11:41:10 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:08.013 11:41:10 json_config -- json_config/common.sh@26 -- # echo '' 00:04:08.013 00:04:08.013 11:41:10 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:08.013 11:41:10 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:08.013 11:41:10 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:08.013 11:41:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:08.013 11:41:10 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:08.013 11:41:10 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:08.013 11:41:10 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:08.013 11:41:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:08.013 11:41:10 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:08.013 11:41:10 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:08.013 11:41:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:08.691 11:41:11 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:08.691 11:41:11 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:08.691 11:41:11 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:08.691 11:41:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:08.691 11:41:11 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:08.691 11:41:11 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:08.691 11:41:11 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:08.691 11:41:11 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:08.691 11:41:11 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:08.691 11:41:11 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:08.691 11:41:11 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:08.691 11:41:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:08.691 11:41:11 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:08.691 11:41:11 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:08.691 11:41:11 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:08.691 11:41:11 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:08.691 11:41:11 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:08.691 11:41:11 json_config -- json_config/json_config.sh@54 -- # sort 00:04:08.691 11:41:11 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:08.691 11:41:11 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:08.691 11:41:11 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:08.691 11:41:11 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:08.691 11:41:11 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:08.691 11:41:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:08.691 11:41:11 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:08.691 11:41:11 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:08.691 11:41:11 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:08.691 11:41:11 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:08.691 11:41:11 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:08.691 11:41:11 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:08.691 11:41:11 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:08.691 11:41:11 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:08.691 11:41:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:08.691 11:41:11 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:08.691 11:41:11 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:08.691 11:41:11 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:08.691 11:41:11 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:08.691 11:41:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:08.951 MallocForNvmf0 00:04:08.951 11:41:11 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:08.951 11:41:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:08.951 MallocForNvmf1 00:04:08.951 11:41:11 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:08.951 11:41:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:09.211 [2024-10-11 11:41:11.791225] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:09.211 11:41:11 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:09.211 11:41:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:09.471 11:41:11 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:09.471 11:41:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:09.471 11:41:12 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:09.471 11:41:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:09.732 11:41:12 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:09.732 11:41:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:09.992 [2024-10-11 11:41:12.441216] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:09.992 11:41:12 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:09.992 11:41:12 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:09.992 11:41:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:09.992 11:41:12 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:09.992 11:41:12 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:09.992 11:41:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:09.992 11:41:12 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:09.992 11:41:12 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:09.992 11:41:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:09.992 MallocBdevForConfigChangeCheck 00:04:10.253 11:41:12 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:10.253 11:41:12 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:10.253 11:41:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:10.253 11:41:12 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:10.253 11:41:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:10.512 11:41:13 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:10.512 INFO: shutting down applications... 
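Note: the shutdown path that follows first empties the live configuration and only then signals the target. The clear-and-verify loop it runs can be approximated by the pipeline below (scripts and method names as used in this run; the target is considered clean once the remaining config, minus global parameters, filters down to an empty document).

  ./test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    | ./test/json_config/config_filter.py -method delete_global_parameters \
    | ./test/json_config/config_filter.py -method check_empty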
00:04:10.512 11:41:13 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:10.512 11:41:13 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:10.512 11:41:13 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:10.512 11:41:13 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:10.782 Calling clear_iscsi_subsystem 00:04:10.782 Calling clear_nvmf_subsystem 00:04:10.782 Calling clear_nbd_subsystem 00:04:10.782 Calling clear_ublk_subsystem 00:04:10.782 Calling clear_vhost_blk_subsystem 00:04:10.782 Calling clear_vhost_scsi_subsystem 00:04:10.782 Calling clear_bdev_subsystem 00:04:11.047 11:41:13 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:11.047 11:41:13 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:11.048 11:41:13 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:11.048 11:41:13 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:11.048 11:41:13 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:11.048 11:41:13 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:11.308 11:41:13 json_config -- json_config/json_config.sh@352 -- # break 00:04:11.308 11:41:13 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:11.308 11:41:13 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:11.308 11:41:13 json_config -- json_config/common.sh@31 -- # local app=target 00:04:11.308 11:41:13 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:11.308 11:41:13 json_config -- json_config/common.sh@35 -- # [[ -n 1688139 ]] 00:04:11.308 11:41:13 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1688139 00:04:11.308 11:41:13 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:11.308 11:41:13 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:11.308 11:41:13 json_config -- json_config/common.sh@41 -- # kill -0 1688139 00:04:11.308 11:41:13 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:11.877 11:41:14 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:11.877 11:41:14 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:11.877 11:41:14 json_config -- json_config/common.sh@41 -- # kill -0 1688139 00:04:11.877 11:41:14 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:11.877 11:41:14 json_config -- json_config/common.sh@43 -- # break 00:04:11.877 11:41:14 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:11.877 11:41:14 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:11.877 SPDK target shutdown done 00:04:11.877 11:41:14 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:11.877 INFO: relaunching applications... 
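The shutdown traced above is driven by json_config/common.sh: send SIGINT to the target, then poll it with kill -0 in half-second steps, giving up after 30 tries. A reduced stand-in of that loop, with the pid hard-coded as a placeholder (the real helper reads it from its app_pid array):

# Sketch only: the SIGINT-then-poll shutdown pattern from json_config/common.sh.
app_pid=1688139                      # placeholder pid, taken from the trace above
kill -SIGINT "$app_pid"
for (( i = 0; i < 30; i++ )); do
    if ! kill -0 "$app_pid" 2>/dev/null; then
        echo 'SPDK target shutdown done'
        break
    fi
    sleep 0.5
done

kill -0 sends no signal at all; it only reports whether the pid still exists, which is why it works as a liveness probe here.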
00:04:11.877 11:41:14 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:11.877 11:41:14 json_config -- json_config/common.sh@9 -- # local app=target 00:04:11.877 11:41:14 json_config -- json_config/common.sh@10 -- # shift 00:04:11.877 11:41:14 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:11.877 11:41:14 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:11.877 11:41:14 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:11.877 11:41:14 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:11.877 11:41:14 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:11.877 11:41:14 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1689269 00:04:11.877 11:41:14 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:11.877 Waiting for target to run... 00:04:11.877 11:41:14 json_config -- json_config/common.sh@25 -- # waitforlisten 1689269 /var/tmp/spdk_tgt.sock 00:04:11.877 11:41:14 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:11.877 11:41:14 json_config -- common/autotest_common.sh@831 -- # '[' -z 1689269 ']' 00:04:11.877 11:41:14 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:11.877 11:41:14 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:11.877 11:41:14 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:11.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:11.877 11:41:14 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:11.877 11:41:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:11.877 [2024-10-11 11:41:14.425756] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:04:11.878 [2024-10-11 11:41:14.425819] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1689269 ] 00:04:12.138 [2024-10-11 11:41:14.736362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:12.138 [2024-10-11 11:41:14.762911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.708 [2024-10-11 11:41:15.265634] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:12.708 [2024-10-11 11:41:15.297988] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:12.708 11:41:15 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:12.708 11:41:15 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:12.708 11:41:15 json_config -- json_config/common.sh@26 -- # echo '' 00:04:12.708 00:04:12.708 11:41:15 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:12.708 11:41:15 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:12.708 INFO: Checking if target configuration is the same... 
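With the target relaunched from the saved spdk_tgt_config.json, the json_diff.sh run that follows answers "is the live configuration identical to the file?". It canonicalizes both JSON documents with config_filter.py -method sort and diffs the results; the second pass afterwards deletes MallocBdevForConfigChangeCheck first, so its diff is expected to be non-empty. A reduced sketch of that comparison, assuming it is run from the SPDK tree root while the target is still answering on /var/tmp/spdk_tgt.sock:

# Sketch only: the save_config / sort / diff comparison performed by json_diff.sh.
filter=./test/json_config/config_filter.py
live=$(mktemp)
saved=$(mktemp)
rpc.py -s /var/tmp/spdk_tgt.sock save_config | "$filter" -method sort > "$live"
"$filter" -method sort < ./spdk_tgt_config.json > "$saved"
if diff -u "$saved" "$live"; then
    echo 'INFO: JSON config files are the same'
else
    echo 'INFO: configuration change detected.'
fi
rm -f "$live" "$saved"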
00:04:12.708 11:41:15 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:12.708 11:41:15 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:12.708 11:41:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:12.708 + '[' 2 -ne 2 ']' 00:04:12.708 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:12.708 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:12.708 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:12.708 +++ basename /dev/fd/62 00:04:12.708 ++ mktemp /tmp/62.XXX 00:04:12.708 + tmp_file_1=/tmp/62.2zf 00:04:12.708 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:12.708 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:12.708 + tmp_file_2=/tmp/spdk_tgt_config.json.3q7 00:04:12.708 + ret=0 00:04:12.708 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:12.968 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:13.228 + diff -u /tmp/62.2zf /tmp/spdk_tgt_config.json.3q7 00:04:13.228 + echo 'INFO: JSON config files are the same' 00:04:13.228 INFO: JSON config files are the same 00:04:13.228 + rm /tmp/62.2zf /tmp/spdk_tgt_config.json.3q7 00:04:13.228 + exit 0 00:04:13.228 11:41:15 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:13.228 11:41:15 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:13.228 INFO: changing configuration and checking if this can be detected... 00:04:13.228 11:41:15 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:13.228 11:41:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:13.228 11:41:15 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:13.228 11:41:15 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:13.228 11:41:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:13.228 + '[' 2 -ne 2 ']' 00:04:13.228 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:13.228 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:13.228 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:13.228 +++ basename /dev/fd/62 00:04:13.487 ++ mktemp /tmp/62.XXX 00:04:13.487 + tmp_file_1=/tmp/62.aMI 00:04:13.487 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:13.487 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:13.487 + tmp_file_2=/tmp/spdk_tgt_config.json.hro 00:04:13.487 + ret=0 00:04:13.487 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:13.747 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:13.747 + diff -u /tmp/62.aMI /tmp/spdk_tgt_config.json.hro 00:04:13.747 + ret=1 00:04:13.747 + echo '=== Start of file: /tmp/62.aMI ===' 00:04:13.747 + cat /tmp/62.aMI 00:04:13.747 + echo '=== End of file: /tmp/62.aMI ===' 00:04:13.747 + echo '' 00:04:13.747 + echo '=== Start of file: /tmp/spdk_tgt_config.json.hro ===' 00:04:13.747 + cat /tmp/spdk_tgt_config.json.hro 00:04:13.747 + echo '=== End of file: /tmp/spdk_tgt_config.json.hro ===' 00:04:13.747 + echo '' 00:04:13.747 + rm /tmp/62.aMI /tmp/spdk_tgt_config.json.hro 00:04:13.747 + exit 1 00:04:13.747 11:41:16 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:13.747 INFO: configuration change detected. 00:04:13.747 11:41:16 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:13.747 11:41:16 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:13.747 11:41:16 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:13.747 11:41:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:13.747 11:41:16 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:13.747 11:41:16 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:13.747 11:41:16 json_config -- json_config/json_config.sh@324 -- # [[ -n 1689269 ]] 00:04:13.747 11:41:16 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:13.747 11:41:16 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:13.747 11:41:16 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:13.747 11:41:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:13.747 11:41:16 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:13.747 11:41:16 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:13.747 11:41:16 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:13.747 11:41:16 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:13.747 11:41:16 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:13.747 11:41:16 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:13.747 11:41:16 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:13.747 11:41:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:13.747 11:41:16 json_config -- json_config/json_config.sh@330 -- # killprocess 1689269 00:04:13.747 11:41:16 json_config -- common/autotest_common.sh@950 -- # '[' -z 1689269 ']' 00:04:13.747 11:41:16 json_config -- common/autotest_common.sh@954 -- # kill -0 1689269 00:04:13.747 11:41:16 json_config -- common/autotest_common.sh@955 -- # uname 00:04:13.747 11:41:16 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:13.747 11:41:16 
json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1689269 00:04:13.747 11:41:16 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:13.747 11:41:16 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:13.747 11:41:16 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1689269' 00:04:13.747 killing process with pid 1689269 00:04:13.747 11:41:16 json_config -- common/autotest_common.sh@969 -- # kill 1689269 00:04:13.747 11:41:16 json_config -- common/autotest_common.sh@974 -- # wait 1689269 00:04:14.007 11:41:16 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:14.007 11:41:16 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:14.007 11:41:16 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:14.007 11:41:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:14.267 11:41:16 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:14.267 11:41:16 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:14.267 INFO: Success 00:04:14.267 00:04:14.267 real 0m7.368s 00:04:14.267 user 0m8.959s 00:04:14.267 sys 0m1.889s 00:04:14.267 11:41:16 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:14.267 11:41:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:14.267 ************************************ 00:04:14.267 END TEST json_config 00:04:14.267 ************************************ 00:04:14.267 11:41:16 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:14.267 11:41:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:14.267 11:41:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:14.267 11:41:16 -- common/autotest_common.sh@10 -- # set +x 00:04:14.267 ************************************ 00:04:14.267 START TEST json_config_extra_key 00:04:14.267 ************************************ 00:04:14.267 11:41:16 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:14.267 11:41:16 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:14.267 11:41:16 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:04:14.267 11:41:16 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:14.267 11:41:16 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:14.267 11:41:16 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:14.267 11:41:16 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:14.267 11:41:16 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:14.267 11:41:16 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:14.267 11:41:16 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:14.267 11:41:16 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:14.267 11:41:16 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:14.267 11:41:16 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:14.267 11:41:16 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:04:14.267 11:41:16 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:14.267 11:41:16 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:14.267 11:41:16 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:14.267 11:41:16 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:14.267 11:41:16 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:14.267 11:41:16 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:14.528 11:41:16 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:14.528 11:41:16 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:14.528 11:41:16 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:14.528 11:41:16 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:14.528 11:41:16 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:14.528 11:41:16 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:14.528 11:41:16 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:14.528 11:41:16 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:14.528 11:41:16 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:14.528 11:41:16 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:14.528 11:41:16 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:14.528 11:41:16 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:14.528 11:41:16 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:14.528 11:41:16 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:14.528 11:41:16 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:14.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.528 --rc genhtml_branch_coverage=1 00:04:14.528 --rc genhtml_function_coverage=1 00:04:14.528 --rc genhtml_legend=1 00:04:14.528 --rc geninfo_all_blocks=1 00:04:14.528 --rc geninfo_unexecuted_blocks=1 00:04:14.528 00:04:14.528 ' 00:04:14.528 11:41:16 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:14.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.528 --rc genhtml_branch_coverage=1 00:04:14.528 --rc genhtml_function_coverage=1 00:04:14.528 --rc genhtml_legend=1 00:04:14.528 --rc geninfo_all_blocks=1 00:04:14.528 --rc geninfo_unexecuted_blocks=1 00:04:14.528 00:04:14.528 ' 00:04:14.528 11:41:16 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:14.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.528 --rc genhtml_branch_coverage=1 00:04:14.528 --rc genhtml_function_coverage=1 00:04:14.528 --rc genhtml_legend=1 00:04:14.528 --rc geninfo_all_blocks=1 00:04:14.528 --rc geninfo_unexecuted_blocks=1 00:04:14.528 00:04:14.528 ' 00:04:14.528 11:41:16 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:14.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.528 --rc genhtml_branch_coverage=1 00:04:14.528 --rc genhtml_function_coverage=1 00:04:14.528 --rc genhtml_legend=1 00:04:14.528 --rc geninfo_all_blocks=1 00:04:14.528 --rc geninfo_unexecuted_blocks=1 00:04:14.528 00:04:14.528 ' 00:04:14.528 11:41:16 json_config_extra_key -- 
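The scripts/common.sh trace above (ver1/ver2 arrays split on IFS=.-:, compared field by field) is the test deciding whether the installed lcov predates 2.0, so it can export --rc coverage options the older lcov still accepts. A reduced, numeric-fields-only sketch of that "lt 1.15 2" comparison (the real cmp_versions also normalizes non-numeric components through its decimal helper):

# Sketch only: simplified version comparison in the spirit of scripts/common.sh.
lt() {
    local IFS='.-:' v
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1    # equal versions are not "less than"
}

if lt "$(lcov --version | awk '{print $NF}')" 2; then
    echo "lcov is pre-2.0: keep the legacy --rc lcov_* coverage options"
fi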
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:14.528 11:41:16 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:14.528 11:41:16 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:14.528 11:41:16 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:14.528 11:41:16 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:14.528 11:41:16 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:14.528 11:41:16 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:14.528 11:41:16 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:14.528 11:41:16 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:14.528 11:41:16 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:14.528 11:41:16 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:14.528 11:41:16 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:14.528 11:41:17 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:14.528 11:41:17 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:14.528 11:41:17 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:14.528 11:41:17 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:14.528 11:41:17 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:14.528 11:41:17 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:14.528 11:41:17 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:14.528 11:41:17 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:14.528 11:41:17 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:14.528 11:41:17 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:14.528 11:41:17 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:14.528 11:41:17 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:14.528 11:41:17 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:14.528 11:41:17 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:14.528 11:41:17 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:14.528 11:41:17 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:14.528 11:41:17 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:14.528 11:41:17 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:14.528 11:41:17 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:14.528 11:41:17 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:14.528 11:41:17 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:14.528 11:41:17 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:14.528 11:41:17 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:14.528 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:14.528 11:41:17 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:14.528 11:41:17 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:14.528 11:41:17 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:14.528 11:41:17 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:14.528 11:41:17 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:14.528 11:41:17 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:14.528 11:41:17 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:14.528 11:41:17 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:14.528 11:41:17 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:14.528 11:41:17 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:14.528 11:41:17 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:14.528 11:41:17 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:14.528 11:41:17 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:14.528 11:41:17 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:14.528 INFO: launching applications... 
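The "[: : integer expression expected" message captured a few entries above comes from nvmf/common.sh testing an empty string with -eq ('[' '' -eq 1 ']'); it is harmless in this run but easy to silence. A hedged illustration of the usual guard, where SOME_NVMF_FLAG is a made-up name standing in for whatever variable that line actually checks:

# Illustration only: default an integer variable before testing it, so an empty
# value cannot trigger "[: : integer expression expected".
if [ "${SOME_NVMF_FLAG:-0}" -eq 1 ]; then
    echo "flag-specific nvmf setup would run here"
fi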
00:04:14.528 11:41:17 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:14.528 11:41:17 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:14.528 11:41:17 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:14.528 11:41:17 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:14.528 11:41:17 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:14.528 11:41:17 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:14.528 11:41:17 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:14.528 11:41:17 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:14.528 11:41:17 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1690055 00:04:14.528 11:41:17 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:14.528 Waiting for target to run... 00:04:14.528 11:41:17 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1690055 /var/tmp/spdk_tgt.sock 00:04:14.528 11:41:17 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 1690055 ']' 00:04:14.528 11:41:17 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:14.529 11:41:17 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:14.529 11:41:17 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:14.529 11:41:17 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:14.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:14.529 11:41:17 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:14.529 11:41:17 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:14.529 [2024-10-11 11:41:17.092108] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:04:14.529 [2024-10-11 11:41:17.092178] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1690055 ] 00:04:14.789 [2024-10-11 11:41:17.427712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.789 [2024-10-11 11:41:17.458490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.359 11:41:17 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:15.359 11:41:17 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:04:15.359 11:41:17 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:15.359 00:04:15.359 11:41:17 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:15.359 INFO: shutting down applications... 
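The launch traced above is json_config_test_start_app: start spdk_tgt with the extra_key.json configuration, record its pid, and block in waitforlisten until the RPC socket answers. A reduced stand-in of that start-and-wait step, run from the SPDK tree root (the real waitforlisten helper in autotest_common.sh does more careful retry bookkeeping than this loop):

# Sketch only: start the target from a JSON config and poll until RPC responds.
./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
    --json ./test/json_config/extra_key.json &
tgt_pid=$!
for (( i = 0; i < 100; i++ )); do
    if rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods &>/dev/null; then
        echo "target is up (pid $tgt_pid)"
        break
    fi
    sleep 0.1
done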
00:04:15.359 11:41:17 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:15.359 11:41:17 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:15.360 11:41:17 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:15.360 11:41:17 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1690055 ]] 00:04:15.360 11:41:17 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1690055 00:04:15.360 11:41:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:15.360 11:41:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:15.360 11:41:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1690055 00:04:15.360 11:41:17 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:15.932 11:41:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:15.932 11:41:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:15.932 11:41:18 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1690055 00:04:15.932 11:41:18 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:15.932 11:41:18 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:15.932 11:41:18 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:15.932 11:41:18 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:15.932 SPDK target shutdown done 00:04:15.932 11:41:18 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:15.932 Success 00:04:15.932 00:04:15.932 real 0m1.561s 00:04:15.932 user 0m1.108s 00:04:15.932 sys 0m0.461s 00:04:15.932 11:41:18 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:15.932 11:41:18 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:15.932 ************************************ 00:04:15.932 END TEST json_config_extra_key 00:04:15.932 ************************************ 00:04:15.932 11:41:18 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:15.932 11:41:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:15.932 11:41:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:15.932 11:41:18 -- common/autotest_common.sh@10 -- # set +x 00:04:15.932 ************************************ 00:04:15.932 START TEST alias_rpc 00:04:15.932 ************************************ 00:04:15.932 11:41:18 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:15.932 * Looking for test storage... 
00:04:15.932 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:15.932 11:41:18 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:15.932 11:41:18 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:15.932 11:41:18 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:15.932 11:41:18 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:15.932 11:41:18 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:15.932 11:41:18 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:15.932 11:41:18 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:15.932 11:41:18 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:15.932 11:41:18 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:15.932 11:41:18 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:15.932 11:41:18 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:16.192 11:41:18 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:16.192 11:41:18 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:16.192 11:41:18 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:16.192 11:41:18 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:16.192 11:41:18 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:16.192 11:41:18 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:16.192 11:41:18 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:16.192 11:41:18 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:16.192 11:41:18 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:16.192 11:41:18 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:16.192 11:41:18 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:16.192 11:41:18 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:16.192 11:41:18 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:16.192 11:41:18 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:16.192 11:41:18 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:16.192 11:41:18 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:16.192 11:41:18 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:16.192 11:41:18 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:16.192 11:41:18 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:16.192 11:41:18 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:16.192 11:41:18 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:16.192 11:41:18 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:16.192 11:41:18 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:16.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.192 --rc genhtml_branch_coverage=1 00:04:16.192 --rc genhtml_function_coverage=1 00:04:16.192 --rc genhtml_legend=1 00:04:16.192 --rc geninfo_all_blocks=1 00:04:16.192 --rc geninfo_unexecuted_blocks=1 00:04:16.192 00:04:16.192 ' 00:04:16.192 11:41:18 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:16.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.192 --rc genhtml_branch_coverage=1 00:04:16.192 --rc genhtml_function_coverage=1 00:04:16.192 --rc genhtml_legend=1 00:04:16.192 --rc geninfo_all_blocks=1 00:04:16.193 --rc geninfo_unexecuted_blocks=1 00:04:16.193 00:04:16.193 ' 00:04:16.193 11:41:18 
alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:16.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.193 --rc genhtml_branch_coverage=1 00:04:16.193 --rc genhtml_function_coverage=1 00:04:16.193 --rc genhtml_legend=1 00:04:16.193 --rc geninfo_all_blocks=1 00:04:16.193 --rc geninfo_unexecuted_blocks=1 00:04:16.193 00:04:16.193 ' 00:04:16.193 11:41:18 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:16.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.193 --rc genhtml_branch_coverage=1 00:04:16.193 --rc genhtml_function_coverage=1 00:04:16.193 --rc genhtml_legend=1 00:04:16.193 --rc geninfo_all_blocks=1 00:04:16.193 --rc geninfo_unexecuted_blocks=1 00:04:16.193 00:04:16.193 ' 00:04:16.193 11:41:18 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:16.193 11:41:18 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1690447 00:04:16.193 11:41:18 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1690447 00:04:16.193 11:41:18 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:16.193 11:41:18 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 1690447 ']' 00:04:16.193 11:41:18 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:16.193 11:41:18 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:16.193 11:41:18 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:16.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:16.193 11:41:18 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:16.193 11:41:18 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.193 [2024-10-11 11:41:18.719088] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:04:16.193 [2024-10-11 11:41:18.719145] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1690447 ] 00:04:16.193 [2024-10-11 11:41:18.799670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:16.193 [2024-10-11 11:41:18.832583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:17.132 11:41:19 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:17.132 11:41:19 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:17.132 11:41:19 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:17.132 11:41:19 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1690447 00:04:17.132 11:41:19 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 1690447 ']' 00:04:17.132 11:41:19 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 1690447 00:04:17.132 11:41:19 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:04:17.132 11:41:19 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:17.133 11:41:19 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1690447 00:04:17.133 11:41:19 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:17.133 11:41:19 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:17.133 11:41:19 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1690447' 00:04:17.133 killing process with pid 1690447 00:04:17.133 11:41:19 alias_rpc -- common/autotest_common.sh@969 -- # kill 1690447 00:04:17.133 11:41:19 alias_rpc -- common/autotest_common.sh@974 -- # wait 1690447 00:04:17.393 00:04:17.393 real 0m1.502s 00:04:17.393 user 0m1.672s 00:04:17.393 sys 0m0.404s 00:04:17.393 11:41:19 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:17.393 11:41:19 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.393 ************************************ 00:04:17.393 END TEST alias_rpc 00:04:17.393 ************************************ 00:04:17.393 11:41:19 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:17.393 11:41:19 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:17.393 11:41:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:17.393 11:41:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:17.393 11:41:19 -- common/autotest_common.sh@10 -- # set +x 00:04:17.393 ************************************ 00:04:17.393 START TEST spdkcli_tcp 00:04:17.393 ************************************ 00:04:17.393 11:41:20 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:17.654 * Looking for test storage... 
00:04:17.654 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:17.654 11:41:20 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:17.654 11:41:20 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:04:17.654 11:41:20 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:17.654 11:41:20 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:17.654 11:41:20 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:17.654 11:41:20 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:17.654 11:41:20 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:17.654 11:41:20 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:17.654 11:41:20 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:17.654 11:41:20 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:17.654 11:41:20 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:17.654 11:41:20 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:17.654 11:41:20 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:17.654 11:41:20 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:17.654 11:41:20 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:17.654 11:41:20 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:17.654 11:41:20 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:17.654 11:41:20 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:17.654 11:41:20 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:17.654 11:41:20 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:17.654 11:41:20 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:17.654 11:41:20 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:17.654 11:41:20 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:17.654 11:41:20 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:17.654 11:41:20 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:17.654 11:41:20 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:17.654 11:41:20 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:17.654 11:41:20 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:17.654 11:41:20 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:17.654 11:41:20 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:17.654 11:41:20 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:17.654 11:41:20 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:17.654 11:41:20 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:17.654 11:41:20 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:17.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.654 --rc genhtml_branch_coverage=1 00:04:17.654 --rc genhtml_function_coverage=1 00:04:17.654 --rc genhtml_legend=1 00:04:17.654 --rc geninfo_all_blocks=1 00:04:17.654 --rc geninfo_unexecuted_blocks=1 00:04:17.654 00:04:17.654 ' 00:04:17.654 11:41:20 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:17.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.654 --rc genhtml_branch_coverage=1 00:04:17.654 --rc genhtml_function_coverage=1 00:04:17.654 --rc genhtml_legend=1 00:04:17.654 --rc geninfo_all_blocks=1 00:04:17.654 --rc 
geninfo_unexecuted_blocks=1 00:04:17.654 00:04:17.654 ' 00:04:17.654 11:41:20 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:17.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.654 --rc genhtml_branch_coverage=1 00:04:17.654 --rc genhtml_function_coverage=1 00:04:17.654 --rc genhtml_legend=1 00:04:17.654 --rc geninfo_all_blocks=1 00:04:17.654 --rc geninfo_unexecuted_blocks=1 00:04:17.654 00:04:17.654 ' 00:04:17.654 11:41:20 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:17.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.654 --rc genhtml_branch_coverage=1 00:04:17.654 --rc genhtml_function_coverage=1 00:04:17.654 --rc genhtml_legend=1 00:04:17.654 --rc geninfo_all_blocks=1 00:04:17.654 --rc geninfo_unexecuted_blocks=1 00:04:17.654 00:04:17.654 ' 00:04:17.654 11:41:20 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:17.654 11:41:20 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:17.654 11:41:20 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:17.654 11:41:20 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:17.654 11:41:20 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:17.654 11:41:20 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:17.654 11:41:20 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:17.654 11:41:20 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:17.654 11:41:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:17.654 11:41:20 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1690812 00:04:17.654 11:41:20 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1690812 00:04:17.654 11:41:20 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:17.654 11:41:20 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 1690812 ']' 00:04:17.654 11:41:20 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:17.654 11:41:20 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:17.654 11:41:20 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:17.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:17.654 11:41:20 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:17.654 11:41:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:17.654 [2024-10-11 11:41:20.303926] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:04:17.654 [2024-10-11 11:41:20.304000] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1690812 ] 00:04:17.915 [2024-10-11 11:41:20.384250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:17.915 [2024-10-11 11:41:20.421889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:17.915 [2024-10-11 11:41:20.421890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.485 11:41:21 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:18.485 11:41:21 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:04:18.485 11:41:21 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1690867 00:04:18.485 11:41:21 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:18.485 11:41:21 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:18.745 [ 00:04:18.745 "bdev_malloc_delete", 00:04:18.745 "bdev_malloc_create", 00:04:18.745 "bdev_null_resize", 00:04:18.745 "bdev_null_delete", 00:04:18.745 "bdev_null_create", 00:04:18.745 "bdev_nvme_cuse_unregister", 00:04:18.745 "bdev_nvme_cuse_register", 00:04:18.745 "bdev_opal_new_user", 00:04:18.745 "bdev_opal_set_lock_state", 00:04:18.745 "bdev_opal_delete", 00:04:18.745 "bdev_opal_get_info", 00:04:18.745 "bdev_opal_create", 00:04:18.745 "bdev_nvme_opal_revert", 00:04:18.745 "bdev_nvme_opal_init", 00:04:18.745 "bdev_nvme_send_cmd", 00:04:18.745 "bdev_nvme_set_keys", 00:04:18.745 "bdev_nvme_get_path_iostat", 00:04:18.745 "bdev_nvme_get_mdns_discovery_info", 00:04:18.745 "bdev_nvme_stop_mdns_discovery", 00:04:18.745 "bdev_nvme_start_mdns_discovery", 00:04:18.745 "bdev_nvme_set_multipath_policy", 00:04:18.745 "bdev_nvme_set_preferred_path", 00:04:18.745 "bdev_nvme_get_io_paths", 00:04:18.746 "bdev_nvme_remove_error_injection", 00:04:18.746 "bdev_nvme_add_error_injection", 00:04:18.746 "bdev_nvme_get_discovery_info", 00:04:18.746 "bdev_nvme_stop_discovery", 00:04:18.746 "bdev_nvme_start_discovery", 00:04:18.746 "bdev_nvme_get_controller_health_info", 00:04:18.746 "bdev_nvme_disable_controller", 00:04:18.746 "bdev_nvme_enable_controller", 00:04:18.746 "bdev_nvme_reset_controller", 00:04:18.746 "bdev_nvme_get_transport_statistics", 00:04:18.746 "bdev_nvme_apply_firmware", 00:04:18.746 "bdev_nvme_detach_controller", 00:04:18.746 "bdev_nvme_get_controllers", 00:04:18.746 "bdev_nvme_attach_controller", 00:04:18.746 "bdev_nvme_set_hotplug", 00:04:18.746 "bdev_nvme_set_options", 00:04:18.746 "bdev_passthru_delete", 00:04:18.746 "bdev_passthru_create", 00:04:18.746 "bdev_lvol_set_parent_bdev", 00:04:18.746 "bdev_lvol_set_parent", 00:04:18.746 "bdev_lvol_check_shallow_copy", 00:04:18.746 "bdev_lvol_start_shallow_copy", 00:04:18.746 "bdev_lvol_grow_lvstore", 00:04:18.746 "bdev_lvol_get_lvols", 00:04:18.746 "bdev_lvol_get_lvstores", 00:04:18.746 "bdev_lvol_delete", 00:04:18.746 "bdev_lvol_set_read_only", 00:04:18.746 "bdev_lvol_resize", 00:04:18.746 "bdev_lvol_decouple_parent", 00:04:18.746 "bdev_lvol_inflate", 00:04:18.746 "bdev_lvol_rename", 00:04:18.746 "bdev_lvol_clone_bdev", 00:04:18.746 "bdev_lvol_clone", 00:04:18.746 "bdev_lvol_snapshot", 00:04:18.746 "bdev_lvol_create", 00:04:18.746 "bdev_lvol_delete_lvstore", 00:04:18.746 "bdev_lvol_rename_lvstore", 
00:04:18.746 "bdev_lvol_create_lvstore", 00:04:18.746 "bdev_raid_set_options", 00:04:18.746 "bdev_raid_remove_base_bdev", 00:04:18.746 "bdev_raid_add_base_bdev", 00:04:18.746 "bdev_raid_delete", 00:04:18.746 "bdev_raid_create", 00:04:18.746 "bdev_raid_get_bdevs", 00:04:18.746 "bdev_error_inject_error", 00:04:18.746 "bdev_error_delete", 00:04:18.746 "bdev_error_create", 00:04:18.746 "bdev_split_delete", 00:04:18.746 "bdev_split_create", 00:04:18.746 "bdev_delay_delete", 00:04:18.746 "bdev_delay_create", 00:04:18.746 "bdev_delay_update_latency", 00:04:18.746 "bdev_zone_block_delete", 00:04:18.746 "bdev_zone_block_create", 00:04:18.746 "blobfs_create", 00:04:18.746 "blobfs_detect", 00:04:18.746 "blobfs_set_cache_size", 00:04:18.746 "bdev_aio_delete", 00:04:18.746 "bdev_aio_rescan", 00:04:18.746 "bdev_aio_create", 00:04:18.746 "bdev_ftl_set_property", 00:04:18.746 "bdev_ftl_get_properties", 00:04:18.746 "bdev_ftl_get_stats", 00:04:18.746 "bdev_ftl_unmap", 00:04:18.746 "bdev_ftl_unload", 00:04:18.746 "bdev_ftl_delete", 00:04:18.746 "bdev_ftl_load", 00:04:18.746 "bdev_ftl_create", 00:04:18.746 "bdev_virtio_attach_controller", 00:04:18.746 "bdev_virtio_scsi_get_devices", 00:04:18.746 "bdev_virtio_detach_controller", 00:04:18.746 "bdev_virtio_blk_set_hotplug", 00:04:18.746 "bdev_iscsi_delete", 00:04:18.746 "bdev_iscsi_create", 00:04:18.746 "bdev_iscsi_set_options", 00:04:18.746 "accel_error_inject_error", 00:04:18.746 "ioat_scan_accel_module", 00:04:18.746 "dsa_scan_accel_module", 00:04:18.746 "iaa_scan_accel_module", 00:04:18.746 "vfu_virtio_create_fs_endpoint", 00:04:18.746 "vfu_virtio_create_scsi_endpoint", 00:04:18.746 "vfu_virtio_scsi_remove_target", 00:04:18.746 "vfu_virtio_scsi_add_target", 00:04:18.746 "vfu_virtio_create_blk_endpoint", 00:04:18.746 "vfu_virtio_delete_endpoint", 00:04:18.746 "keyring_file_remove_key", 00:04:18.746 "keyring_file_add_key", 00:04:18.746 "keyring_linux_set_options", 00:04:18.746 "fsdev_aio_delete", 00:04:18.746 "fsdev_aio_create", 00:04:18.746 "iscsi_get_histogram", 00:04:18.746 "iscsi_enable_histogram", 00:04:18.746 "iscsi_set_options", 00:04:18.746 "iscsi_get_auth_groups", 00:04:18.746 "iscsi_auth_group_remove_secret", 00:04:18.746 "iscsi_auth_group_add_secret", 00:04:18.746 "iscsi_delete_auth_group", 00:04:18.746 "iscsi_create_auth_group", 00:04:18.746 "iscsi_set_discovery_auth", 00:04:18.746 "iscsi_get_options", 00:04:18.746 "iscsi_target_node_request_logout", 00:04:18.746 "iscsi_target_node_set_redirect", 00:04:18.746 "iscsi_target_node_set_auth", 00:04:18.746 "iscsi_target_node_add_lun", 00:04:18.746 "iscsi_get_stats", 00:04:18.746 "iscsi_get_connections", 00:04:18.746 "iscsi_portal_group_set_auth", 00:04:18.746 "iscsi_start_portal_group", 00:04:18.746 "iscsi_delete_portal_group", 00:04:18.746 "iscsi_create_portal_group", 00:04:18.746 "iscsi_get_portal_groups", 00:04:18.746 "iscsi_delete_target_node", 00:04:18.746 "iscsi_target_node_remove_pg_ig_maps", 00:04:18.746 "iscsi_target_node_add_pg_ig_maps", 00:04:18.746 "iscsi_create_target_node", 00:04:18.746 "iscsi_get_target_nodes", 00:04:18.746 "iscsi_delete_initiator_group", 00:04:18.746 "iscsi_initiator_group_remove_initiators", 00:04:18.746 "iscsi_initiator_group_add_initiators", 00:04:18.746 "iscsi_create_initiator_group", 00:04:18.746 "iscsi_get_initiator_groups", 00:04:18.746 "nvmf_set_crdt", 00:04:18.746 "nvmf_set_config", 00:04:18.746 "nvmf_set_max_subsystems", 00:04:18.746 "nvmf_stop_mdns_prr", 00:04:18.746 "nvmf_publish_mdns_prr", 00:04:18.746 "nvmf_subsystem_get_listeners", 00:04:18.746 
"nvmf_subsystem_get_qpairs", 00:04:18.746 "nvmf_subsystem_get_controllers", 00:04:18.746 "nvmf_get_stats", 00:04:18.746 "nvmf_get_transports", 00:04:18.746 "nvmf_create_transport", 00:04:18.746 "nvmf_get_targets", 00:04:18.746 "nvmf_delete_target", 00:04:18.746 "nvmf_create_target", 00:04:18.746 "nvmf_subsystem_allow_any_host", 00:04:18.746 "nvmf_subsystem_set_keys", 00:04:18.746 "nvmf_subsystem_remove_host", 00:04:18.746 "nvmf_subsystem_add_host", 00:04:18.746 "nvmf_ns_remove_host", 00:04:18.746 "nvmf_ns_add_host", 00:04:18.746 "nvmf_subsystem_remove_ns", 00:04:18.746 "nvmf_subsystem_set_ns_ana_group", 00:04:18.746 "nvmf_subsystem_add_ns", 00:04:18.746 "nvmf_subsystem_listener_set_ana_state", 00:04:18.746 "nvmf_discovery_get_referrals", 00:04:18.746 "nvmf_discovery_remove_referral", 00:04:18.746 "nvmf_discovery_add_referral", 00:04:18.746 "nvmf_subsystem_remove_listener", 00:04:18.746 "nvmf_subsystem_add_listener", 00:04:18.746 "nvmf_delete_subsystem", 00:04:18.746 "nvmf_create_subsystem", 00:04:18.746 "nvmf_get_subsystems", 00:04:18.746 "env_dpdk_get_mem_stats", 00:04:18.746 "nbd_get_disks", 00:04:18.746 "nbd_stop_disk", 00:04:18.746 "nbd_start_disk", 00:04:18.746 "ublk_recover_disk", 00:04:18.746 "ublk_get_disks", 00:04:18.746 "ublk_stop_disk", 00:04:18.746 "ublk_start_disk", 00:04:18.746 "ublk_destroy_target", 00:04:18.746 "ublk_create_target", 00:04:18.746 "virtio_blk_create_transport", 00:04:18.746 "virtio_blk_get_transports", 00:04:18.746 "vhost_controller_set_coalescing", 00:04:18.746 "vhost_get_controllers", 00:04:18.746 "vhost_delete_controller", 00:04:18.746 "vhost_create_blk_controller", 00:04:18.746 "vhost_scsi_controller_remove_target", 00:04:18.746 "vhost_scsi_controller_add_target", 00:04:18.746 "vhost_start_scsi_controller", 00:04:18.746 "vhost_create_scsi_controller", 00:04:18.746 "thread_set_cpumask", 00:04:18.746 "scheduler_set_options", 00:04:18.746 "framework_get_governor", 00:04:18.746 "framework_get_scheduler", 00:04:18.746 "framework_set_scheduler", 00:04:18.746 "framework_get_reactors", 00:04:18.746 "thread_get_io_channels", 00:04:18.746 "thread_get_pollers", 00:04:18.746 "thread_get_stats", 00:04:18.746 "framework_monitor_context_switch", 00:04:18.746 "spdk_kill_instance", 00:04:18.746 "log_enable_timestamps", 00:04:18.746 "log_get_flags", 00:04:18.746 "log_clear_flag", 00:04:18.746 "log_set_flag", 00:04:18.746 "log_get_level", 00:04:18.746 "log_set_level", 00:04:18.746 "log_get_print_level", 00:04:18.746 "log_set_print_level", 00:04:18.746 "framework_enable_cpumask_locks", 00:04:18.746 "framework_disable_cpumask_locks", 00:04:18.746 "framework_wait_init", 00:04:18.746 "framework_start_init", 00:04:18.746 "scsi_get_devices", 00:04:18.746 "bdev_get_histogram", 00:04:18.746 "bdev_enable_histogram", 00:04:18.746 "bdev_set_qos_limit", 00:04:18.746 "bdev_set_qd_sampling_period", 00:04:18.746 "bdev_get_bdevs", 00:04:18.746 "bdev_reset_iostat", 00:04:18.746 "bdev_get_iostat", 00:04:18.746 "bdev_examine", 00:04:18.746 "bdev_wait_for_examine", 00:04:18.746 "bdev_set_options", 00:04:18.746 "accel_get_stats", 00:04:18.746 "accel_set_options", 00:04:18.746 "accel_set_driver", 00:04:18.746 "accel_crypto_key_destroy", 00:04:18.746 "accel_crypto_keys_get", 00:04:18.746 "accel_crypto_key_create", 00:04:18.746 "accel_assign_opc", 00:04:18.746 "accel_get_module_info", 00:04:18.746 "accel_get_opc_assignments", 00:04:18.746 "vmd_rescan", 00:04:18.746 "vmd_remove_device", 00:04:18.746 "vmd_enable", 00:04:18.746 "sock_get_default_impl", 00:04:18.746 "sock_set_default_impl", 
00:04:18.746 "sock_impl_set_options", 00:04:18.746 "sock_impl_get_options", 00:04:18.746 "iobuf_get_stats", 00:04:18.746 "iobuf_set_options", 00:04:18.746 "keyring_get_keys", 00:04:18.746 "vfu_tgt_set_base_path", 00:04:18.746 "framework_get_pci_devices", 00:04:18.746 "framework_get_config", 00:04:18.746 "framework_get_subsystems", 00:04:18.746 "fsdev_set_opts", 00:04:18.746 "fsdev_get_opts", 00:04:18.746 "trace_get_info", 00:04:18.746 "trace_get_tpoint_group_mask", 00:04:18.746 "trace_disable_tpoint_group", 00:04:18.746 "trace_enable_tpoint_group", 00:04:18.746 "trace_clear_tpoint_mask", 00:04:18.746 "trace_set_tpoint_mask", 00:04:18.746 "notify_get_notifications", 00:04:18.746 "notify_get_types", 00:04:18.746 "spdk_get_version", 00:04:18.746 "rpc_get_methods" 00:04:18.746 ] 00:04:18.746 11:41:21 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:18.746 11:41:21 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:18.746 11:41:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:18.746 11:41:21 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:18.746 11:41:21 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1690812 00:04:18.747 11:41:21 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 1690812 ']' 00:04:18.747 11:41:21 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 1690812 00:04:18.747 11:41:21 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:04:18.747 11:41:21 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:18.747 11:41:21 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1690812 00:04:18.747 11:41:21 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:18.747 11:41:21 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:18.747 11:41:21 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1690812' 00:04:18.747 killing process with pid 1690812 00:04:18.747 11:41:21 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 1690812 00:04:18.747 11:41:21 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 1690812 00:04:19.007 00:04:19.007 real 0m1.515s 00:04:19.007 user 0m2.735s 00:04:19.007 sys 0m0.482s 00:04:19.007 11:41:21 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:19.007 11:41:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:19.007 ************************************ 00:04:19.007 END TEST spdkcli_tcp 00:04:19.007 ************************************ 00:04:19.007 11:41:21 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:19.007 11:41:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:19.007 11:41:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:19.007 11:41:21 -- common/autotest_common.sh@10 -- # set +x 00:04:19.007 ************************************ 00:04:19.007 START TEST dpdk_mem_utility 00:04:19.007 ************************************ 00:04:19.007 11:41:21 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:19.268 * Looking for test storage... 
00:04:19.268 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:19.268 11:41:21 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:19.268 11:41:21 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:04:19.268 11:41:21 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:19.268 11:41:21 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:19.268 11:41:21 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:19.268 11:41:21 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:19.268 11:41:21 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:19.268 11:41:21 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:19.268 11:41:21 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:19.268 11:41:21 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:19.268 11:41:21 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:19.268 11:41:21 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:19.268 11:41:21 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:19.268 11:41:21 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:19.268 11:41:21 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:19.268 11:41:21 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:19.268 11:41:21 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:19.268 11:41:21 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:19.268 11:41:21 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:19.268 11:41:21 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:19.268 11:41:21 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:19.268 11:41:21 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:19.268 11:41:21 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:19.268 11:41:21 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:19.268 11:41:21 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:19.268 11:41:21 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:19.268 11:41:21 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:19.268 11:41:21 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:19.268 11:41:21 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:19.268 11:41:21 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:19.268 11:41:21 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:19.268 11:41:21 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:19.269 11:41:21 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:19.269 11:41:21 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:19.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.269 --rc genhtml_branch_coverage=1 00:04:19.269 --rc genhtml_function_coverage=1 00:04:19.269 --rc genhtml_legend=1 00:04:19.269 --rc geninfo_all_blocks=1 00:04:19.269 --rc geninfo_unexecuted_blocks=1 00:04:19.269 00:04:19.269 ' 00:04:19.269 11:41:21 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:19.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.269 --rc 
genhtml_branch_coverage=1 00:04:19.269 --rc genhtml_function_coverage=1 00:04:19.269 --rc genhtml_legend=1 00:04:19.269 --rc geninfo_all_blocks=1 00:04:19.269 --rc geninfo_unexecuted_blocks=1 00:04:19.269 00:04:19.269 ' 00:04:19.269 11:41:21 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:19.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.269 --rc genhtml_branch_coverage=1 00:04:19.269 --rc genhtml_function_coverage=1 00:04:19.269 --rc genhtml_legend=1 00:04:19.269 --rc geninfo_all_blocks=1 00:04:19.269 --rc geninfo_unexecuted_blocks=1 00:04:19.269 00:04:19.269 ' 00:04:19.269 11:41:21 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:19.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.269 --rc genhtml_branch_coverage=1 00:04:19.269 --rc genhtml_function_coverage=1 00:04:19.269 --rc genhtml_legend=1 00:04:19.269 --rc geninfo_all_blocks=1 00:04:19.269 --rc geninfo_unexecuted_blocks=1 00:04:19.269 00:04:19.269 ' 00:04:19.269 11:41:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:19.269 11:41:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1691164 00:04:19.269 11:41:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1691164 00:04:19.269 11:41:21 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 1691164 ']' 00:04:19.269 11:41:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:19.269 11:41:21 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:19.269 11:41:21 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:19.269 11:41:21 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:19.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:19.269 11:41:21 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:19.269 11:41:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:19.269 [2024-10-11 11:41:21.886073] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:04:19.269 [2024-10-11 11:41:21.886145] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1691164 ] 00:04:19.269 [2024-10-11 11:41:21.968123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:19.529 [2024-10-11 11:41:22.005720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.099 11:41:22 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:20.099 11:41:22 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:04:20.099 11:41:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:20.099 11:41:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:20.099 11:41:22 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:20.099 11:41:22 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:20.099 { 00:04:20.099 "filename": "/tmp/spdk_mem_dump.txt" 00:04:20.099 } 00:04:20.099 11:41:22 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:20.099 11:41:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:20.099 DPDK memory size 810.000000 MiB in 1 heap(s) 00:04:20.099 1 heaps totaling size 810.000000 MiB 00:04:20.099 size: 810.000000 MiB heap id: 0 00:04:20.099 end heaps---------- 00:04:20.099 9 mempools totaling size 595.772034 MiB 00:04:20.099 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:20.099 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:20.099 size: 92.545471 MiB name: bdev_io_1691164 00:04:20.099 size: 50.003479 MiB name: msgpool_1691164 00:04:20.099 size: 36.509338 MiB name: fsdev_io_1691164 00:04:20.099 size: 21.763794 MiB name: PDU_Pool 00:04:20.099 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:20.099 size: 4.133484 MiB name: evtpool_1691164 00:04:20.099 size: 0.026123 MiB name: Session_Pool 00:04:20.099 end mempools------- 00:04:20.099 6 memzones totaling size 4.142822 MiB 00:04:20.099 size: 1.000366 MiB name: RG_ring_0_1691164 00:04:20.099 size: 1.000366 MiB name: RG_ring_1_1691164 00:04:20.099 size: 1.000366 MiB name: RG_ring_4_1691164 00:04:20.099 size: 1.000366 MiB name: RG_ring_5_1691164 00:04:20.099 size: 0.125366 MiB name: RG_ring_2_1691164 00:04:20.099 size: 0.015991 MiB name: RG_ring_3_1691164 00:04:20.099 end memzones------- 00:04:20.099 11:41:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:20.099 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:20.099 list of free elements. 
size: 10.862488 MiB 00:04:20.099 element at address: 0x200018a00000 with size: 0.999878 MiB 00:04:20.099 element at address: 0x200018c00000 with size: 0.999878 MiB 00:04:20.099 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:20.099 element at address: 0x200031800000 with size: 0.994446 MiB 00:04:20.099 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:20.099 element at address: 0x200012c00000 with size: 0.954285 MiB 00:04:20.099 element at address: 0x200018e00000 with size: 0.936584 MiB 00:04:20.099 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:20.099 element at address: 0x20001a600000 with size: 0.582886 MiB 00:04:20.099 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:20.099 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:20.099 element at address: 0x200019000000 with size: 0.485657 MiB 00:04:20.099 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:20.099 element at address: 0x200027a00000 with size: 0.410034 MiB 00:04:20.100 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:20.100 list of standard malloc elements. size: 199.218628 MiB 00:04:20.100 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:20.100 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:20.100 element at address: 0x200018afff80 with size: 1.000122 MiB 00:04:20.100 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:04:20.100 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:20.100 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:20.100 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:04:20.100 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:20.100 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:04:20.100 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:20.100 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:20.100 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:20.100 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:20.100 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:20.100 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:20.100 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:20.100 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:20.100 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:20.100 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:20.100 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:20.100 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:20.100 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:20.100 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:20.100 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:20.100 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:20.100 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:20.100 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:20.100 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:20.100 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:20.100 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:20.100 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:20.100 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:20.100 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:04:20.100 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:04:20.100 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:04:20.100 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:04:20.100 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:04:20.100 element at address: 0x20001a695380 with size: 0.000183 MiB 00:04:20.100 element at address: 0x20001a695440 with size: 0.000183 MiB 00:04:20.100 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:04:20.100 element at address: 0x200027a69040 with size: 0.000183 MiB 00:04:20.100 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:04:20.100 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:04:20.100 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:04:20.100 list of memzone associated elements. size: 599.918884 MiB 00:04:20.100 element at address: 0x20001a695500 with size: 211.416748 MiB 00:04:20.100 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:20.100 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:04:20.100 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:20.100 element at address: 0x200012df4780 with size: 92.045044 MiB 00:04:20.100 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_1691164_0 00:04:20.100 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:20.100 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1691164_0 00:04:20.100 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:20.100 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1691164_0 00:04:20.100 element at address: 0x2000191be940 with size: 20.255554 MiB 00:04:20.100 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:20.100 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:04:20.100 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:20.100 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:20.100 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1691164_0 00:04:20.100 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:20.100 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1691164 00:04:20.100 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:20.100 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1691164 00:04:20.100 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:20.100 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:20.100 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:04:20.100 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:20.100 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:20.100 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:20.100 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:20.100 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:20.100 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:20.100 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1691164 00:04:20.100 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:20.100 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1691164 00:04:20.100 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:04:20.100 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1691164 00:04:20.100 element at address: 
0x2000318fe940 with size: 1.000488 MiB 00:04:20.100 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1691164 00:04:20.100 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:20.100 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1691164 00:04:20.100 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:20.100 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1691164 00:04:20.100 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:20.100 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:20.100 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:20.100 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:20.100 element at address: 0x20001907c540 with size: 0.250488 MiB 00:04:20.100 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:20.100 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:20.100 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1691164 00:04:20.100 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:20.100 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1691164 00:04:20.100 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:20.100 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:20.100 element at address: 0x200027a69100 with size: 0.023743 MiB 00:04:20.100 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:20.100 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:20.100 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1691164 00:04:20.100 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:04:20.100 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:20.100 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:20.100 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1691164 00:04:20.100 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:20.100 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1691164 00:04:20.100 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:20.100 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1691164 00:04:20.100 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:04:20.100 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:20.100 11:41:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:20.100 11:41:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1691164 00:04:20.100 11:41:22 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 1691164 ']' 00:04:20.100 11:41:22 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 1691164 00:04:20.100 11:41:22 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:04:20.100 11:41:22 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:20.100 11:41:22 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1691164 00:04:20.360 11:41:22 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:20.360 11:41:22 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:20.360 11:41:22 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1691164' 00:04:20.360 killing process with pid 1691164 00:04:20.360 11:41:22 
dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 1691164 00:04:20.360 11:41:22 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 1691164 00:04:20.360 00:04:20.360 real 0m1.381s 00:04:20.360 user 0m1.435s 00:04:20.360 sys 0m0.413s 00:04:20.360 11:41:23 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:20.360 11:41:23 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:20.360 ************************************ 00:04:20.360 END TEST dpdk_mem_utility 00:04:20.360 ************************************ 00:04:20.360 11:41:23 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:20.360 11:41:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:20.360 11:41:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:20.360 11:41:23 -- common/autotest_common.sh@10 -- # set +x 00:04:20.622 ************************************ 00:04:20.622 START TEST event 00:04:20.622 ************************************ 00:04:20.622 11:41:23 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:20.622 * Looking for test storage... 00:04:20.622 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:20.622 11:41:23 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:20.622 11:41:23 event -- common/autotest_common.sh@1691 -- # lcov --version 00:04:20.622 11:41:23 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:20.622 11:41:23 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:20.622 11:41:23 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:20.622 11:41:23 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:20.622 11:41:23 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:20.622 11:41:23 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:20.622 11:41:23 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:20.622 11:41:23 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:20.622 11:41:23 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:20.622 11:41:23 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:20.622 11:41:23 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:20.622 11:41:23 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:20.622 11:41:23 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:20.622 11:41:23 event -- scripts/common.sh@344 -- # case "$op" in 00:04:20.622 11:41:23 event -- scripts/common.sh@345 -- # : 1 00:04:20.622 11:41:23 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:20.622 11:41:23 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:20.622 11:41:23 event -- scripts/common.sh@365 -- # decimal 1 00:04:20.622 11:41:23 event -- scripts/common.sh@353 -- # local d=1 00:04:20.622 11:41:23 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:20.622 11:41:23 event -- scripts/common.sh@355 -- # echo 1 00:04:20.622 11:41:23 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:20.622 11:41:23 event -- scripts/common.sh@366 -- # decimal 2 00:04:20.622 11:41:23 event -- scripts/common.sh@353 -- # local d=2 00:04:20.622 11:41:23 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:20.622 11:41:23 event -- scripts/common.sh@355 -- # echo 2 00:04:20.622 11:41:23 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:20.622 11:41:23 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:20.622 11:41:23 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:20.622 11:41:23 event -- scripts/common.sh@368 -- # return 0 00:04:20.622 11:41:23 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:20.622 11:41:23 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:20.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.622 --rc genhtml_branch_coverage=1 00:04:20.622 --rc genhtml_function_coverage=1 00:04:20.622 --rc genhtml_legend=1 00:04:20.622 --rc geninfo_all_blocks=1 00:04:20.622 --rc geninfo_unexecuted_blocks=1 00:04:20.622 00:04:20.622 ' 00:04:20.622 11:41:23 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:20.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.622 --rc genhtml_branch_coverage=1 00:04:20.622 --rc genhtml_function_coverage=1 00:04:20.622 --rc genhtml_legend=1 00:04:20.622 --rc geninfo_all_blocks=1 00:04:20.622 --rc geninfo_unexecuted_blocks=1 00:04:20.622 00:04:20.622 ' 00:04:20.622 11:41:23 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:20.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.622 --rc genhtml_branch_coverage=1 00:04:20.622 --rc genhtml_function_coverage=1 00:04:20.622 --rc genhtml_legend=1 00:04:20.622 --rc geninfo_all_blocks=1 00:04:20.622 --rc geninfo_unexecuted_blocks=1 00:04:20.622 00:04:20.622 ' 00:04:20.622 11:41:23 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:20.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.622 --rc genhtml_branch_coverage=1 00:04:20.622 --rc genhtml_function_coverage=1 00:04:20.622 --rc genhtml_legend=1 00:04:20.622 --rc geninfo_all_blocks=1 00:04:20.622 --rc geninfo_unexecuted_blocks=1 00:04:20.622 00:04:20.622 ' 00:04:20.622 11:41:23 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:20.622 11:41:23 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:20.622 11:41:23 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:20.622 11:41:23 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:04:20.622 11:41:23 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:20.622 11:41:23 event -- common/autotest_common.sh@10 -- # set +x 00:04:20.882 ************************************ 00:04:20.882 START TEST event_perf 00:04:20.882 ************************************ 00:04:20.882 11:41:23 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:04:20.882 Running I/O for 1 seconds...[2024-10-11 11:41:23.358208] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:04:20.882 [2024-10-11 11:41:23.358313] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1691479 ] 00:04:20.882 [2024-10-11 11:41:23.444796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:20.882 [2024-10-11 11:41:23.490809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:20.882 [2024-10-11 11:41:23.490966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:20.882 [2024-10-11 11:41:23.491121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:20.882 [2024-10-11 11:41:23.491121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.824 Running I/O for 1 seconds... 00:04:21.824 lcore 0: 180426 00:04:21.824 lcore 1: 180428 00:04:21.824 lcore 2: 180427 00:04:21.824 lcore 3: 180425 00:04:21.824 done. 00:04:21.824 00:04:21.824 real 0m1.183s 00:04:21.824 user 0m4.097s 00:04:21.824 sys 0m0.084s 00:04:21.824 11:41:24 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:21.824 11:41:24 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:21.824 ************************************ 00:04:21.824 END TEST event_perf 00:04:21.824 ************************************ 00:04:22.085 11:41:24 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:22.085 11:41:24 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:04:22.085 11:41:24 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:22.085 11:41:24 event -- common/autotest_common.sh@10 -- # set +x 00:04:22.085 ************************************ 00:04:22.085 START TEST event_reactor 00:04:22.085 ************************************ 00:04:22.085 11:41:24 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:22.085 [2024-10-11 11:41:24.617147] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:04:22.085 [2024-10-11 11:41:24.617249] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1691701 ] 00:04:22.085 [2024-10-11 11:41:24.697085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.085 [2024-10-11 11:41:24.729539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.470 test_start 00:04:23.470 oneshot 00:04:23.470 tick 100 00:04:23.470 tick 100 00:04:23.470 tick 250 00:04:23.470 tick 100 00:04:23.470 tick 100 00:04:23.470 tick 250 00:04:23.470 tick 100 00:04:23.470 tick 500 00:04:23.470 tick 100 00:04:23.470 tick 100 00:04:23.470 tick 250 00:04:23.470 tick 100 00:04:23.470 tick 100 00:04:23.470 test_end 00:04:23.470 00:04:23.470 real 0m1.158s 00:04:23.470 user 0m1.080s 00:04:23.470 sys 0m0.075s 00:04:23.470 11:41:25 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:23.470 11:41:25 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:23.470 ************************************ 00:04:23.470 END TEST event_reactor 00:04:23.470 ************************************ 00:04:23.470 11:41:25 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:23.470 11:41:25 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:04:23.470 11:41:25 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:23.470 11:41:25 event -- common/autotest_common.sh@10 -- # set +x 00:04:23.470 ************************************ 00:04:23.470 START TEST event_reactor_perf 00:04:23.470 ************************************ 00:04:23.470 11:41:25 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:23.470 [2024-10-11 11:41:25.853501] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:04:23.470 [2024-10-11 11:41:25.853606] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1692049 ] 00:04:23.470 [2024-10-11 11:41:25.932007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.470 [2024-10-11 11:41:25.968737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.409 test_start 00:04:24.409 test_end 00:04:24.409 Performance: 541798 events per second 00:04:24.410 00:04:24.410 real 0m1.162s 00:04:24.410 user 0m1.079s 00:04:24.410 sys 0m0.079s 00:04:24.410 11:41:26 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:24.410 11:41:26 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:24.410 ************************************ 00:04:24.410 END TEST event_reactor_perf 00:04:24.410 ************************************ 00:04:24.410 11:41:27 event -- event/event.sh@49 -- # uname -s 00:04:24.410 11:41:27 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:24.410 11:41:27 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:24.410 11:41:27 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:24.410 11:41:27 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:24.410 11:41:27 event -- common/autotest_common.sh@10 -- # set +x 00:04:24.410 ************************************ 00:04:24.410 START TEST event_scheduler 00:04:24.410 ************************************ 00:04:24.410 11:41:27 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:24.671 * Looking for test storage... 
00:04:24.671 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:24.671 11:41:27 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:24.671 11:41:27 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:04:24.671 11:41:27 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:24.671 11:41:27 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:24.671 11:41:27 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:24.671 11:41:27 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:24.671 11:41:27 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:24.671 11:41:27 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:24.671 11:41:27 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:24.671 11:41:27 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:24.671 11:41:27 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:24.671 11:41:27 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:24.671 11:41:27 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:24.671 11:41:27 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:24.671 11:41:27 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:24.671 11:41:27 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:24.671 11:41:27 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:24.671 11:41:27 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:24.671 11:41:27 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:24.671 11:41:27 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:24.671 11:41:27 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:24.671 11:41:27 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:24.671 11:41:27 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:24.671 11:41:27 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:24.671 11:41:27 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:24.671 11:41:27 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:24.671 11:41:27 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:24.671 11:41:27 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:24.671 11:41:27 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:24.671 11:41:27 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:24.671 11:41:27 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:24.671 11:41:27 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:24.671 11:41:27 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:24.671 11:41:27 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:24.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.671 --rc genhtml_branch_coverage=1 00:04:24.671 --rc genhtml_function_coverage=1 00:04:24.671 --rc genhtml_legend=1 00:04:24.671 --rc geninfo_all_blocks=1 00:04:24.671 --rc geninfo_unexecuted_blocks=1 00:04:24.671 00:04:24.671 ' 00:04:24.671 11:41:27 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:24.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.671 --rc genhtml_branch_coverage=1 00:04:24.671 --rc genhtml_function_coverage=1 00:04:24.671 --rc genhtml_legend=1 00:04:24.671 --rc geninfo_all_blocks=1 00:04:24.671 --rc geninfo_unexecuted_blocks=1 00:04:24.672 00:04:24.672 ' 00:04:24.672 11:41:27 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:24.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.672 --rc genhtml_branch_coverage=1 00:04:24.672 --rc genhtml_function_coverage=1 00:04:24.672 --rc genhtml_legend=1 00:04:24.672 --rc geninfo_all_blocks=1 00:04:24.672 --rc geninfo_unexecuted_blocks=1 00:04:24.672 00:04:24.672 ' 00:04:24.672 11:41:27 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:24.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.672 --rc genhtml_branch_coverage=1 00:04:24.672 --rc genhtml_function_coverage=1 00:04:24.672 --rc genhtml_legend=1 00:04:24.672 --rc geninfo_all_blocks=1 00:04:24.672 --rc geninfo_unexecuted_blocks=1 00:04:24.672 00:04:24.672 ' 00:04:24.672 11:41:27 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:24.672 11:41:27 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1692439 00:04:24.672 11:41:27 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:24.672 11:41:27 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1692439 00:04:24.672 11:41:27 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc 
-f 00:04:24.672 11:41:27 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 1692439 ']' 00:04:24.672 11:41:27 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:24.672 11:41:27 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:24.672 11:41:27 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:24.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:24.672 11:41:27 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:24.672 11:41:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:24.672 [2024-10-11 11:41:27.332831] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:04:24.672 [2024-10-11 11:41:27.332896] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1692439 ] 00:04:24.931 [2024-10-11 11:41:27.415118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:24.931 [2024-10-11 11:41:27.471376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.931 [2024-10-11 11:41:27.471541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:24.931 [2024-10-11 11:41:27.471704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:24.931 [2024-10-11 11:41:27.471704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:25.503 11:41:28 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:25.503 11:41:28 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:04:25.503 11:41:28 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:25.503 11:41:28 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.503 11:41:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:25.503 [2024-10-11 11:41:28.158161] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:25.503 [2024-10-11 11:41:28.158181] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:25.503 [2024-10-11 11:41:28.158191] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:25.503 [2024-10-11 11:41:28.158197] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:25.503 [2024-10-11 11:41:28.158203] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:25.503 11:41:28 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.503 11:41:28 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:25.503 11:41:28 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.503 11:41:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:25.764 [2024-10-11 11:41:28.224663] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
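At this point the harness has switched the running scheduler app to the dynamic scheduler and completed framework initialization over JSON-RPC; the dpdk_governor error above only means the governor could not be initialized for this core mask, so the dynamic scheduler falls back to its default thresholds (load limit 20, core limit 80, core busy 95). The same two calls can be reproduced by hand with scripts/rpc.py; a minimal sketch, assuming the app was started with --wait-for-rpc and is listening on the default /var/tmp/spdk.sock as it is here:

#!/usr/bin/env bash
# Select the dynamic scheduler and finish framework init over JSON-RPC,
# mirroring the rpc_cmd calls in the trace above. The workspace path and
# socket are taken from this log; adjust them for another setup.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock framework_set_scheduler dynamic
$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init
# Optionally verify which scheduler is now active.
$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock framework_get_scheduler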
00:04:25.764 11:41:28 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.764 11:41:28 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:25.764 11:41:28 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:25.764 11:41:28 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:25.764 11:41:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:25.764 ************************************ 00:04:25.764 START TEST scheduler_create_thread 00:04:25.764 ************************************ 00:04:25.764 11:41:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:04:25.764 11:41:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:25.764 11:41:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.764 11:41:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.764 2 00:04:25.764 11:41:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.764 11:41:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:25.764 11:41:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.764 11:41:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.764 3 00:04:25.764 11:41:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.764 11:41:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:25.764 11:41:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.764 11:41:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.764 4 00:04:25.764 11:41:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.764 11:41:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:25.764 11:41:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.764 11:41:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.764 5 00:04:25.764 11:41:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.764 11:41:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:25.764 11:41:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.764 11:41:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.764 6 00:04:25.764 11:41:28 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.764 11:41:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:25.764 11:41:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.764 11:41:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.764 7 00:04:25.764 11:41:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.764 11:41:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:25.764 11:41:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.764 11:41:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.764 8 00:04:25.764 11:41:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.764 11:41:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:25.764 11:41:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.764 11:41:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.764 9 00:04:25.764 11:41:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.764 11:41:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:25.764 11:41:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.764 11:41:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:26.335 10 00:04:26.335 11:41:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:26.335 11:41:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:26.335 11:41:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:26.335 11:41:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:27.717 11:41:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.717 11:41:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:27.717 11:41:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:27.717 11:41:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.717 11:41:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:28.287 11:41:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.287 11:41:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:28.287 11:41:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.287 11:41:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.228 11:41:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.228 11:41:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:29.228 11:41:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:29.228 11:41:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.228 11:41:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.798 11:41:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.798 00:04:29.798 real 0m4.226s 00:04:29.798 user 0m0.021s 00:04:29.798 sys 0m0.011s 00:04:29.798 11:41:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:29.798 11:41:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.798 ************************************ 00:04:29.798 END TEST scheduler_create_thread 00:04:29.798 ************************************ 00:04:30.059 11:41:32 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:30.059 11:41:32 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1692439 00:04:30.059 11:41:32 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 1692439 ']' 00:04:30.059 11:41:32 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 1692439 00:04:30.059 11:41:32 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:04:30.059 11:41:32 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:30.059 11:41:32 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1692439 00:04:30.059 11:41:32 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:04:30.059 11:41:32 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:04:30.059 11:41:32 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1692439' 00:04:30.059 killing process with pid 1692439 00:04:30.059 11:41:32 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 1692439 00:04:30.059 11:41:32 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 1692439 00:04:30.320 [2024-10-11 11:41:32.770429] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
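The scheduler_create_thread subtest above drives the scheduler app's RPC plugin directly: it creates pinned active and idle threads, re-weights one with scheduler_thread_set_active (thread_id 11 set to 50), and deletes another with scheduler_thread_delete (thread_id 12), capturing the id that each create call prints. A minimal standalone sketch of the same lifecycle, assuming the plugin module under test/event/scheduler is importable via PYTHONPATH and the app is listening on the default /var/tmp/spdk.sock, which is what rpc_cmd arranges in this harness:

#!/usr/bin/env bash
# Create, re-weight and delete a pinned scheduler thread through the
# scheduler RPC plugin, mirroring the rpc_cmd --plugin calls traced above.
# Paths and socket are assumptions matching this CI workspace.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
export PYTHONPATH=$SPDK/test/event/scheduler:$PYTHONPATH
RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin"
# scheduler_thread_create prints the new thread id (cf. thread_id=11/12 above).
tid=$($RPC scheduler_thread_create -n active_pinned -m 0x1 -a 100)
$RPC scheduler_thread_set_active "$tid" 50   # drop the thread to 50% busy
$RPC scheduler_thread_delete "$tid"          # remove the thread again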
00:04:30.320 00:04:30.320 real 0m5.848s 00:04:30.320 user 0m12.956s 00:04:30.320 sys 0m0.424s 00:04:30.320 11:41:32 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:30.320 11:41:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:30.320 ************************************ 00:04:30.320 END TEST event_scheduler 00:04:30.320 ************************************ 00:04:30.320 11:41:32 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:30.320 11:41:32 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:30.320 11:41:32 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:30.320 11:41:32 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:30.320 11:41:32 event -- common/autotest_common.sh@10 -- # set +x 00:04:30.320 ************************************ 00:04:30.320 START TEST app_repeat 00:04:30.320 ************************************ 00:04:30.320 11:41:33 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:04:30.320 11:41:33 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.320 11:41:33 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.320 11:41:33 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:30.320 11:41:33 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:30.320 11:41:33 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:30.320 11:41:33 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:30.320 11:41:33 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:30.320 11:41:33 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1693508 00:04:30.320 11:41:33 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:30.320 11:41:33 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1693508' 00:04:30.320 Process app_repeat pid: 1693508 00:04:30.320 11:41:33 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:30.320 11:41:33 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:30.320 11:41:33 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:30.320 spdk_app_start Round 0 00:04:30.320 11:41:33 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1693508 /var/tmp/spdk-nbd.sock 00:04:30.320 11:41:33 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1693508 ']' 00:04:30.320 11:41:33 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:30.320 11:41:33 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:30.320 11:41:33 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:30.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:30.320 11:41:33 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:30.320 11:41:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:30.581 [2024-10-11 11:41:33.048808] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:04:30.581 [2024-10-11 11:41:33.048880] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1693508 ] 00:04:30.581 [2024-10-11 11:41:33.131026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:30.581 [2024-10-11 11:41:33.167093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:30.581 [2024-10-11 11:41:33.167118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.581 11:41:33 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:30.581 11:41:33 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:30.581 11:41:33 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:30.842 Malloc0 00:04:30.842 11:41:33 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:31.102 Malloc1 00:04:31.102 11:41:33 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:31.102 11:41:33 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:31.102 11:41:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:31.102 11:41:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:31.102 11:41:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:31.102 11:41:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:31.102 11:41:33 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:31.102 11:41:33 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:31.102 11:41:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:31.102 11:41:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:31.102 11:41:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:31.102 11:41:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:31.102 11:41:33 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:31.102 11:41:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:31.102 11:41:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:31.102 11:41:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:31.102 /dev/nbd0 00:04:31.102 11:41:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:31.363 11:41:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:31.363 11:41:33 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:31.363 11:41:33 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:31.363 11:41:33 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:31.363 11:41:33 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:31.363 11:41:33 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 
/proc/partitions 00:04:31.363 11:41:33 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:31.363 11:41:33 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:31.363 11:41:33 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:31.363 11:41:33 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:31.363 1+0 records in 00:04:31.363 1+0 records out 00:04:31.363 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276429 s, 14.8 MB/s 00:04:31.363 11:41:33 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:31.363 11:41:33 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:31.363 11:41:33 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:31.363 11:41:33 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:31.363 11:41:33 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:31.363 11:41:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:31.363 11:41:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:31.363 11:41:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:31.363 /dev/nbd1 00:04:31.363 11:41:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:31.363 11:41:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:31.363 11:41:34 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:31.363 11:41:34 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:31.363 11:41:34 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:31.363 11:41:34 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:31.363 11:41:34 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:31.363 11:41:34 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:31.363 11:41:34 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:31.363 11:41:34 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:31.363 11:41:34 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:31.363 1+0 records in 00:04:31.363 1+0 records out 00:04:31.363 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000291867 s, 14.0 MB/s 00:04:31.363 11:41:34 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:31.363 11:41:34 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:31.363 11:41:34 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:31.363 11:41:34 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:31.363 11:41:34 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:31.363 11:41:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:31.363 11:41:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:31.363 
11:41:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:31.363 11:41:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:31.363 11:41:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:31.624 11:41:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:31.624 { 00:04:31.624 "nbd_device": "/dev/nbd0", 00:04:31.624 "bdev_name": "Malloc0" 00:04:31.624 }, 00:04:31.624 { 00:04:31.624 "nbd_device": "/dev/nbd1", 00:04:31.624 "bdev_name": "Malloc1" 00:04:31.624 } 00:04:31.624 ]' 00:04:31.624 11:41:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:31.624 { 00:04:31.624 "nbd_device": "/dev/nbd0", 00:04:31.624 "bdev_name": "Malloc0" 00:04:31.624 }, 00:04:31.624 { 00:04:31.624 "nbd_device": "/dev/nbd1", 00:04:31.624 "bdev_name": "Malloc1" 00:04:31.624 } 00:04:31.624 ]' 00:04:31.624 11:41:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:31.624 11:41:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:31.624 /dev/nbd1' 00:04:31.624 11:41:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:31.624 11:41:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:31.624 /dev/nbd1' 00:04:31.624 11:41:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:31.624 11:41:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:31.624 11:41:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:31.624 11:41:34 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:31.624 11:41:34 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:31.624 11:41:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:31.624 11:41:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:31.624 11:41:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:31.624 11:41:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:31.624 11:41:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:31.624 11:41:34 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:31.624 256+0 records in 00:04:31.624 256+0 records out 00:04:31.624 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00298736 s, 351 MB/s 00:04:31.624 11:41:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:31.624 11:41:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:31.624 256+0 records in 00:04:31.624 256+0 records out 00:04:31.624 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119692 s, 87.6 MB/s 00:04:31.624 11:41:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:31.624 11:41:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:31.884 256+0 records in 00:04:31.884 256+0 records out 00:04:31.884 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0131722 s, 79.6 MB/s 00:04:31.884 11:41:34 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:31.884 11:41:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:31.884 11:41:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:31.884 11:41:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:31.884 11:41:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:31.884 11:41:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:31.884 11:41:34 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:31.884 11:41:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:31.885 11:41:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:31.885 11:41:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:31.885 11:41:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:31.885 11:41:34 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:31.885 11:41:34 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:31.885 11:41:34 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:31.885 11:41:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:31.885 11:41:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:31.885 11:41:34 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:31.885 11:41:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:31.885 11:41:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:31.885 11:41:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:31.885 11:41:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:31.885 11:41:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:31.885 11:41:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:31.885 11:41:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:31.885 11:41:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:31.885 11:41:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:31.885 11:41:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:31.885 11:41:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:31.885 11:41:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:32.145 11:41:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:32.145 11:41:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:32.145 11:41:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:32.145 11:41:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:32.145 11:41:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:04:32.145 11:41:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:32.145 11:41:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:32.145 11:41:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:32.145 11:41:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:32.145 11:41:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:32.145 11:41:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:32.405 11:41:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:32.405 11:41:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:32.405 11:41:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:32.405 11:41:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:32.405 11:41:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:32.405 11:41:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:32.405 11:41:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:32.405 11:41:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:32.405 11:41:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:32.405 11:41:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:32.405 11:41:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:32.405 11:41:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:32.405 11:41:34 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:32.665 11:41:35 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:32.665 [2024-10-11 11:41:35.246845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:32.665 [2024-10-11 11:41:35.277542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.665 [2024-10-11 11:41:35.277542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:32.665 [2024-10-11 11:41:35.306602] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:32.665 [2024-10-11 11:41:35.306634] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:35.961 11:41:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:35.961 11:41:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:35.961 spdk_app_start Round 1 00:04:35.961 11:41:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1693508 /var/tmp/spdk-nbd.sock 00:04:35.961 11:41:38 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1693508 ']' 00:04:35.961 11:41:38 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:35.961 11:41:38 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:35.961 11:41:38 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:35.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:35.961 11:41:38 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:35.961 11:41:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:35.961 11:41:38 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:35.961 11:41:38 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:35.961 11:41:38 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:35.961 Malloc0 00:04:35.961 11:41:38 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:36.221 Malloc1 00:04:36.221 11:41:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:36.221 11:41:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:36.221 11:41:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:36.221 11:41:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:36.221 11:41:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:36.221 11:41:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:36.221 11:41:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:36.221 11:41:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:36.221 11:41:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:36.221 11:41:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:36.221 11:41:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:36.221 11:41:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:36.221 11:41:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:36.221 11:41:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:36.221 11:41:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:36.221 11:41:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:36.221 /dev/nbd0 00:04:36.482 11:41:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:36.482 11:41:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:36.482 11:41:38 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:36.482 11:41:38 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:36.482 11:41:38 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:36.482 11:41:38 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:36.482 11:41:38 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:36.482 11:41:38 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:36.482 11:41:38 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:36.482 11:41:38 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:36.482 11:41:38 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:36.482 1+0 records in 00:04:36.482 1+0 records out 00:04:36.482 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270328 s, 15.2 MB/s 00:04:36.482 11:41:38 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:36.482 11:41:38 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:36.482 11:41:38 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:36.482 11:41:38 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:36.482 11:41:38 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:36.482 11:41:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:36.482 11:41:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:36.482 11:41:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:36.482 /dev/nbd1 00:04:36.482 11:41:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:36.482 11:41:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:36.482 11:41:39 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:36.482 11:41:39 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:36.482 11:41:39 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:36.482 11:41:39 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:36.482 11:41:39 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:36.482 11:41:39 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:36.482 11:41:39 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:36.482 11:41:39 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:36.482 11:41:39 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:36.482 1+0 records in 00:04:36.482 1+0 records out 00:04:36.482 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00030702 s, 13.3 MB/s 00:04:36.482 11:41:39 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:36.482 11:41:39 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:36.482 11:41:39 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:36.482 11:41:39 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:36.482 11:41:39 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:36.482 11:41:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:36.482 11:41:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:36.743 11:41:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:36.743 11:41:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:36.743 11:41:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:36.743 11:41:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:36.743 { 00:04:36.743 "nbd_device": "/dev/nbd0", 00:04:36.743 "bdev_name": "Malloc0" 00:04:36.743 }, 00:04:36.743 { 00:04:36.743 "nbd_device": "/dev/nbd1", 00:04:36.743 "bdev_name": "Malloc1" 00:04:36.743 } 00:04:36.743 ]' 00:04:36.743 11:41:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:36.743 { 00:04:36.743 "nbd_device": "/dev/nbd0", 00:04:36.743 "bdev_name": "Malloc0" 00:04:36.743 }, 00:04:36.743 { 00:04:36.743 "nbd_device": "/dev/nbd1", 00:04:36.743 "bdev_name": "Malloc1" 00:04:36.743 } 00:04:36.743 ]' 00:04:36.743 11:41:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:36.743 11:41:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:36.743 /dev/nbd1' 00:04:36.743 11:41:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:36.743 /dev/nbd1' 00:04:36.743 11:41:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:36.743 11:41:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:36.743 11:41:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:36.743 11:41:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:36.743 11:41:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:36.743 11:41:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:36.743 11:41:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:36.743 11:41:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:36.743 11:41:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:36.743 11:41:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:36.743 11:41:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:36.743 11:41:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:36.743 256+0 records in 00:04:36.743 256+0 records out 00:04:36.743 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122574 s, 85.5 MB/s 00:04:36.743 11:41:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:36.743 11:41:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:37.004 256+0 records in 00:04:37.004 256+0 records out 00:04:37.004 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120503 s, 87.0 MB/s 00:04:37.004 11:41:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:37.004 11:41:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:37.004 256+0 records in 00:04:37.004 256+0 records out 00:04:37.004 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.013079 s, 80.2 MB/s 00:04:37.004 11:41:39 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:37.004 11:41:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:37.004 11:41:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:37.004 11:41:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:37.004 11:41:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:37.004 11:41:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:37.004 11:41:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:37.004 11:41:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:37.004 11:41:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:37.004 11:41:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:37.004 11:41:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:37.004 11:41:39 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:37.004 11:41:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:37.004 11:41:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:37.004 11:41:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:37.004 11:41:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:37.004 11:41:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:37.004 11:41:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:37.004 11:41:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:37.004 11:41:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:37.004 11:41:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:37.004 11:41:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:37.004 11:41:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:37.004 11:41:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:37.004 11:41:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:37.004 11:41:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:37.004 11:41:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:37.004 11:41:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:37.004 11:41:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:37.265 11:41:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:37.265 11:41:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:37.265 11:41:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:37.265 11:41:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:37.265 11:41:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:37.265 11:41:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:37.265 11:41:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:37.265 11:41:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:37.265 11:41:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:37.265 11:41:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:37.265 11:41:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:37.526 11:41:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:37.526 11:41:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:37.526 11:41:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:37.526 11:41:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:37.526 11:41:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:37.526 11:41:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:37.526 11:41:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:37.526 11:41:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:37.526 11:41:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:37.526 11:41:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:37.526 11:41:40 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:37.526 11:41:40 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:37.526 11:41:40 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:37.786 11:41:40 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:37.786 [2024-10-11 11:41:40.383814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:37.786 [2024-10-11 11:41:40.414863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.786 [2024-10-11 11:41:40.414863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:37.786 [2024-10-11 11:41:40.444427] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:37.786 [2024-10-11 11:41:40.444459] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:41.152 11:41:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:41.152 11:41:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:41.152 spdk_app_start Round 2 00:04:41.152 11:41:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1693508 /var/tmp/spdk-nbd.sock 00:04:41.152 11:41:43 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1693508 ']' 00:04:41.152 11:41:43 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:41.152 11:41:43 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:41.152 11:41:43 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:41.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:41.152 11:41:43 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:41.152 11:41:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:41.152 11:41:43 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:41.152 11:41:43 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:41.152 11:41:43 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:41.152 Malloc0 00:04:41.152 11:41:43 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:41.152 Malloc1 00:04:41.413 11:41:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:41.413 11:41:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.413 11:41:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:41.413 11:41:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:41.413 11:41:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.413 11:41:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:41.413 11:41:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:41.413 11:41:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.413 11:41:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:41.413 11:41:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:41.413 11:41:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.413 11:41:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:41.413 11:41:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:41.413 11:41:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:41.413 11:41:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:41.413 11:41:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:41.413 /dev/nbd0 00:04:41.413 11:41:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:41.413 11:41:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:41.413 11:41:44 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:41.413 11:41:44 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:41.413 11:41:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:41.413 11:41:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:41.413 11:41:44 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:41.413 11:41:44 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:41.413 11:41:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:41.413 11:41:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:41.413 11:41:44 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:41.413 1+0 records in 00:04:41.413 1+0 records out 00:04:41.413 4096 bytes (4.1 kB, 4.0 KiB) copied, 8.1082e-05 s, 50.5 MB/s 00:04:41.413 11:41:44 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:41.413 11:41:44 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:41.413 11:41:44 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:41.413 11:41:44 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:41.413 11:41:44 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:41.413 11:41:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:41.413 11:41:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:41.413 11:41:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:41.674 /dev/nbd1 00:04:41.674 11:41:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:41.674 11:41:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:41.674 11:41:44 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:41.674 11:41:44 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:41.674 11:41:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:41.674 11:41:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:41.674 11:41:44 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:41.674 11:41:44 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:41.674 11:41:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:41.674 11:41:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:41.674 11:41:44 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:41.674 1+0 records in 00:04:41.674 1+0 records out 00:04:41.674 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000294063 s, 13.9 MB/s 00:04:41.674 11:41:44 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:41.674 11:41:44 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:41.674 11:41:44 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:41.674 11:41:44 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:41.674 11:41:44 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:41.674 11:41:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:41.674 11:41:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:41.674 11:41:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:41.674 11:41:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.674 11:41:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:41.935 11:41:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:41.935 { 00:04:41.935 "nbd_device": "/dev/nbd0", 00:04:41.935 "bdev_name": "Malloc0" 00:04:41.935 }, 00:04:41.935 { 00:04:41.935 "nbd_device": "/dev/nbd1", 00:04:41.935 "bdev_name": "Malloc1" 00:04:41.935 } 00:04:41.935 ]' 00:04:41.935 11:41:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:41.935 11:41:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:41.935 { 00:04:41.935 "nbd_device": "/dev/nbd0", 00:04:41.935 "bdev_name": "Malloc0" 00:04:41.935 }, 00:04:41.935 { 00:04:41.935 "nbd_device": "/dev/nbd1", 00:04:41.935 "bdev_name": "Malloc1" 00:04:41.935 } 00:04:41.935 ]' 00:04:41.935 11:41:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:41.935 /dev/nbd1' 00:04:41.935 11:41:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:41.935 /dev/nbd1' 00:04:41.935 11:41:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:41.935 11:41:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:41.935 11:41:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:41.935 11:41:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:41.935 11:41:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:41.935 11:41:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:41.935 11:41:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.935 11:41:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:41.935 11:41:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:41.935 11:41:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:41.935 11:41:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:41.935 11:41:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:41.935 256+0 records in 00:04:41.935 256+0 records out 00:04:41.935 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122399 s, 85.7 MB/s 00:04:41.935 11:41:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:41.935 11:41:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:41.935 256+0 records in 00:04:41.935 256+0 records out 00:04:41.935 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119125 s, 88.0 MB/s 00:04:41.935 11:41:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:41.935 11:41:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:41.935 256+0 records in 00:04:41.935 256+0 records out 00:04:41.935 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012736 s, 82.3 MB/s 00:04:41.935 11:41:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:41.936 11:41:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.936 11:41:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:41.936 11:41:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:41.936 11:41:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:41.936 11:41:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:41.936 11:41:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:41.936 11:41:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:41.936 11:41:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:41.936 11:41:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:41.936 11:41:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:41.936 11:41:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:41.936 11:41:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:41.936 11:41:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.936 11:41:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.936 11:41:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:41.936 11:41:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:41.936 11:41:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:41.936 11:41:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:42.197 11:41:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:42.197 11:41:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:42.197 11:41:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:42.197 11:41:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:42.197 11:41:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:42.197 11:41:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:42.197 11:41:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:42.197 11:41:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:42.197 11:41:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:42.197 11:41:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:42.458 11:41:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:42.458 11:41:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:42.458 11:41:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:42.458 11:41:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:42.458 11:41:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:42.458 11:41:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:42.458 11:41:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:42.458 11:41:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:42.458 11:41:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:42.458 11:41:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:42.458 11:41:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:42.719 11:41:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:42.719 11:41:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:42.719 11:41:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:42.719 11:41:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:42.719 11:41:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:42.719 11:41:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:42.719 11:41:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:42.719 11:41:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:42.719 11:41:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:42.719 11:41:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:42.719 11:41:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:42.719 11:41:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:42.719 11:41:45 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:42.719 11:41:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:42.980 [2024-10-11 11:41:45.515233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:42.980 [2024-10-11 11:41:45.545764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.980 [2024-10-11 11:41:45.545764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:42.980 [2024-10-11 11:41:45.574868] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:42.980 [2024-10-11 11:41:45.574898] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:46.283 11:41:48 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1693508 /var/tmp/spdk-nbd.sock 00:04:46.283 11:41:48 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1693508 ']' 00:04:46.283 11:41:48 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:46.283 11:41:48 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:46.283 11:41:48 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:46.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:46.283 11:41:48 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:46.283 11:41:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:46.283 11:41:48 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:46.283 11:41:48 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:46.283 11:41:48 event.app_repeat -- event/event.sh@39 -- # killprocess 1693508 00:04:46.283 11:41:48 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 1693508 ']' 00:04:46.283 11:41:48 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 1693508 00:04:46.283 11:41:48 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:04:46.283 11:41:48 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:46.283 11:41:48 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1693508 00:04:46.283 11:41:48 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:46.283 11:41:48 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:46.283 11:41:48 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1693508' 00:04:46.283 killing process with pid 1693508 00:04:46.283 11:41:48 event.app_repeat -- common/autotest_common.sh@969 -- # kill 1693508 00:04:46.283 11:41:48 event.app_repeat -- common/autotest_common.sh@974 -- # wait 1693508 00:04:46.283 spdk_app_start is called in Round 0. 00:04:46.283 Shutdown signal received, stop current app iteration 00:04:46.283 Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 reinitialization... 00:04:46.283 spdk_app_start is called in Round 1. 00:04:46.283 Shutdown signal received, stop current app iteration 00:04:46.283 Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 reinitialization... 00:04:46.283 spdk_app_start is called in Round 2. 00:04:46.283 Shutdown signal received, stop current app iteration 00:04:46.283 Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 reinitialization... 00:04:46.283 spdk_app_start is called in Round 3. 
00:04:46.283 Shutdown signal received, stop current app iteration 00:04:46.283 11:41:48 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:46.283 11:41:48 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:46.283 00:04:46.283 real 0m15.713s 00:04:46.283 user 0m34.486s 00:04:46.283 sys 0m2.276s 00:04:46.283 11:41:48 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:46.283 11:41:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:46.283 ************************************ 00:04:46.283 END TEST app_repeat 00:04:46.283 ************************************ 00:04:46.283 11:41:48 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:46.283 11:41:48 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:46.283 11:41:48 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:46.283 11:41:48 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:46.283 11:41:48 event -- common/autotest_common.sh@10 -- # set +x 00:04:46.283 ************************************ 00:04:46.283 START TEST cpu_locks 00:04:46.283 ************************************ 00:04:46.283 11:41:48 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:46.283 * Looking for test storage... 00:04:46.283 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:46.283 11:41:48 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:46.283 11:41:48 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:04:46.283 11:41:48 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:46.283 11:41:48 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:46.283 11:41:48 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:46.283 11:41:48 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:46.283 11:41:48 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:46.283 11:41:48 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:46.283 11:41:48 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:46.283 11:41:48 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:46.544 11:41:48 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:46.544 11:41:48 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:46.544 11:41:48 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:46.544 11:41:48 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:46.544 11:41:48 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:46.544 11:41:48 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:46.544 11:41:48 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:46.544 11:41:48 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:46.544 11:41:48 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:46.544 11:41:48 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:46.544 11:41:48 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:46.544 11:41:48 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:46.544 11:41:48 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:46.544 11:41:48 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:46.544 11:41:48 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:46.544 11:41:48 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:46.544 11:41:48 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:46.544 11:41:48 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:46.544 11:41:49 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:46.544 11:41:49 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:46.544 11:41:49 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:46.544 11:41:49 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:46.544 11:41:49 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:46.544 11:41:49 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:46.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.544 --rc genhtml_branch_coverage=1 00:04:46.544 --rc genhtml_function_coverage=1 00:04:46.544 --rc genhtml_legend=1 00:04:46.544 --rc geninfo_all_blocks=1 00:04:46.544 --rc geninfo_unexecuted_blocks=1 00:04:46.544 00:04:46.544 ' 00:04:46.544 11:41:49 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:46.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.544 --rc genhtml_branch_coverage=1 00:04:46.544 --rc genhtml_function_coverage=1 00:04:46.544 --rc genhtml_legend=1 00:04:46.544 --rc geninfo_all_blocks=1 00:04:46.544 --rc geninfo_unexecuted_blocks=1 00:04:46.544 00:04:46.544 ' 00:04:46.544 11:41:49 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:46.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.544 --rc genhtml_branch_coverage=1 00:04:46.544 --rc genhtml_function_coverage=1 00:04:46.544 --rc genhtml_legend=1 00:04:46.544 --rc geninfo_all_blocks=1 00:04:46.544 --rc geninfo_unexecuted_blocks=1 00:04:46.544 00:04:46.544 ' 00:04:46.544 11:41:49 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:46.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.544 --rc genhtml_branch_coverage=1 00:04:46.544 --rc genhtml_function_coverage=1 00:04:46.544 --rc genhtml_legend=1 00:04:46.544 --rc geninfo_all_blocks=1 00:04:46.544 --rc geninfo_unexecuted_blocks=1 00:04:46.544 00:04:46.544 ' 00:04:46.544 11:41:49 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:46.544 11:41:49 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:46.544 11:41:49 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:46.544 11:41:49 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:46.544 11:41:49 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:46.544 11:41:49 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:46.544 11:41:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:46.544 ************************************ 
00:04:46.544 START TEST default_locks 00:04:46.544 ************************************ 00:04:46.544 11:41:49 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:04:46.544 11:41:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1697096 00:04:46.544 11:41:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1697096 00:04:46.544 11:41:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:46.544 11:41:49 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1697096 ']' 00:04:46.544 11:41:49 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.544 11:41:49 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:46.544 11:41:49 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.544 11:41:49 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:46.544 11:41:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:46.544 [2024-10-11 11:41:49.105757] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:04:46.544 [2024-10-11 11:41:49.105821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1697096 ] 00:04:46.544 [2024-10-11 11:41:49.186724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.544 [2024-10-11 11:41:49.222690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.484 11:41:49 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:47.484 11:41:49 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:04:47.484 11:41:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1697096 00:04:47.484 11:41:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1697096 00:04:47.484 11:41:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:47.744 lslocks: write error 00:04:47.744 11:41:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1697096 00:04:47.744 11:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 1697096 ']' 00:04:47.744 11:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 1697096 00:04:47.744 11:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:04:47.744 11:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:47.744 11:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1697096 00:04:47.744 11:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:47.744 11:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:47.744 11:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with 
pid 1697096' 00:04:47.744 killing process with pid 1697096 00:04:47.744 11:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 1697096 00:04:47.744 11:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 1697096 00:04:48.004 11:41:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1697096 00:04:48.004 11:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:04:48.004 11:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1697096 00:04:48.004 11:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:48.004 11:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:48.004 11:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:48.004 11:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:48.005 11:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 1697096 00:04:48.005 11:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1697096 ']' 00:04:48.005 11:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.005 11:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:48.005 11:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:48.005 11:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:48.005 11:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:48.005 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1697096) - No such process 00:04:48.005 ERROR: process (pid: 1697096) is no longer running 00:04:48.005 11:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:48.005 11:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:04:48.005 11:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:04:48.005 11:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:48.005 11:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:48.005 11:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:48.005 11:41:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:48.005 11:41:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:48.005 11:41:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:48.005 11:41:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:48.005 00:04:48.005 real 0m1.504s 00:04:48.005 user 0m1.629s 00:04:48.005 sys 0m0.522s 00:04:48.005 11:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:48.005 11:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:48.005 ************************************ 00:04:48.005 END TEST default_locks 00:04:48.005 ************************************ 00:04:48.005 11:41:50 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:48.005 11:41:50 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:48.005 11:41:50 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:48.005 11:41:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:48.005 ************************************ 00:04:48.005 START TEST default_locks_via_rpc 00:04:48.005 ************************************ 00:04:48.005 11:41:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:04:48.005 11:41:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1697439 00:04:48.005 11:41:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1697439 00:04:48.005 11:41:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:48.005 11:41:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1697439 ']' 00:04:48.005 11:41:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.005 11:41:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:48.005 11:41:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:48.005 11:41:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:48.005 11:41:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.005 [2024-10-11 11:41:50.695391] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:04:48.005 [2024-10-11 11:41:50.695451] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1697439 ] 00:04:48.265 [2024-10-11 11:41:50.772405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.265 [2024-10-11 11:41:50.804312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.835 11:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:48.835 11:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:48.835 11:41:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:48.835 11:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.835 11:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.835 11:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.835 11:41:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:48.835 11:41:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:48.835 11:41:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:48.835 11:41:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:48.835 11:41:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:48.835 11:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.835 11:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.835 11:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.835 11:41:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1697439 00:04:48.835 11:41:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1697439 00:04:48.835 11:41:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:49.407 11:41:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1697439 00:04:49.407 11:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 1697439 ']' 00:04:49.407 11:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 1697439 00:04:49.407 11:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:04:49.407 11:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:49.407 11:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1697439 00:04:49.407 11:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:49.407 
11:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:49.407 11:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1697439' 00:04:49.407 killing process with pid 1697439 00:04:49.407 11:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 1697439 00:04:49.407 11:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 1697439 00:04:49.407 00:04:49.407 real 0m1.464s 00:04:49.407 user 0m1.599s 00:04:49.407 sys 0m0.491s 00:04:49.407 11:41:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:49.407 11:41:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.407 ************************************ 00:04:49.407 END TEST default_locks_via_rpc 00:04:49.407 ************************************ 00:04:49.668 11:41:52 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:49.668 11:41:52 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:49.668 11:41:52 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:49.668 11:41:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:49.668 ************************************ 00:04:49.668 START TEST non_locking_app_on_locked_coremask 00:04:49.668 ************************************ 00:04:49.668 11:41:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:04:49.668 11:41:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1697726 00:04:49.668 11:41:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1697726 /var/tmp/spdk.sock 00:04:49.668 11:41:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:49.668 11:41:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1697726 ']' 00:04:49.668 11:41:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.668 11:41:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:49.668 11:41:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.668 11:41:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:49.668 11:41:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:49.668 [2024-10-11 11:41:52.221432] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:04:49.668 [2024-10-11 11:41:52.221471] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1697726 ] 00:04:49.668 [2024-10-11 11:41:52.289904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.668 [2024-10-11 11:41:52.320124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.609 11:41:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:50.609 11:41:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:04:50.609 11:41:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1697844 00:04:50.609 11:41:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1697844 /var/tmp/spdk2.sock 00:04:50.609 11:41:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:50.609 11:41:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1697844 ']' 00:04:50.609 11:41:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:50.609 11:41:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:50.609 11:41:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:50.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:50.609 11:41:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:50.609 11:41:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:50.609 [2024-10-11 11:41:53.059404] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:04:50.609 [2024-10-11 11:41:53.059456] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1697844 ] 00:04:50.609 [2024-10-11 11:41:53.130510] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:50.609 [2024-10-11 11:41:53.130531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.609 [2024-10-11 11:41:53.193000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.180 11:41:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:51.180 11:41:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:04:51.180 11:41:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1697726 00:04:51.180 11:41:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:51.180 11:41:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1697726 00:04:51.752 lslocks: write error 00:04:51.752 11:41:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1697726 00:04:51.752 11:41:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1697726 ']' 00:04:51.752 11:41:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1697726 00:04:51.752 11:41:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:04:51.752 11:41:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:51.752 11:41:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1697726 00:04:52.013 11:41:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:52.013 11:41:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:52.013 11:41:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1697726' 00:04:52.013 killing process with pid 1697726 00:04:52.013 11:41:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1697726 00:04:52.013 11:41:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1697726 00:04:52.275 11:41:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1697844 00:04:52.275 11:41:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1697844 ']' 00:04:52.275 11:41:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1697844 00:04:52.275 11:41:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:04:52.275 11:41:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:52.275 11:41:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1697844 00:04:52.275 11:41:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:52.275 11:41:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:52.275 11:41:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1697844' 00:04:52.275 
killing process with pid 1697844 00:04:52.275 11:41:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1697844 00:04:52.275 11:41:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1697844 00:04:52.536 00:04:52.536 real 0m2.941s 00:04:52.536 user 0m3.280s 00:04:52.536 sys 0m0.923s 00:04:52.536 11:41:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:52.536 11:41:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:52.536 ************************************ 00:04:52.536 END TEST non_locking_app_on_locked_coremask 00:04:52.536 ************************************ 00:04:52.536 11:41:55 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:52.536 11:41:55 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:52.536 11:41:55 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:52.536 11:41:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:52.536 ************************************ 00:04:52.536 START TEST locking_app_on_unlocked_coremask 00:04:52.536 ************************************ 00:04:52.536 11:41:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:04:52.536 11:41:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1698259 00:04:52.536 11:41:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1698259 /var/tmp/spdk.sock 00:04:52.536 11:41:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:52.536 11:41:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1698259 ']' 00:04:52.536 11:41:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.536 11:41:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:52.536 11:41:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:52.536 11:41:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:52.536 11:41:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:52.536 [2024-10-11 11:41:55.239953] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:04:52.536 [2024-10-11 11:41:55.240006] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1698259 ] 00:04:52.798 [2024-10-11 11:41:55.318864] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:52.798 [2024-10-11 11:41:55.318893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.798 [2024-10-11 11:41:55.356815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.369 11:41:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:53.369 11:41:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:04:53.369 11:41:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1698550 00:04:53.369 11:41:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1698550 /var/tmp/spdk2.sock 00:04:53.369 11:41:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:53.369 11:41:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1698550 ']' 00:04:53.369 11:41:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:53.369 11:41:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:53.369 11:41:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:53.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:53.369 11:41:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:53.369 11:41:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:53.629 [2024-10-11 11:41:56.077736] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:04:53.629 [2024-10-11 11:41:56.077787] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1698550 ] 00:04:53.629 [2024-10-11 11:41:56.150172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.629 [2024-10-11 11:41:56.212503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.200 11:41:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:54.200 11:41:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:04:54.200 11:41:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1698550 00:04:54.200 11:41:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:54.200 11:41:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1698550 00:04:54.772 lslocks: write error 00:04:54.772 11:41:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1698259 00:04:54.772 11:41:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1698259 ']' 00:04:54.772 11:41:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1698259 00:04:54.772 11:41:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:04:54.772 11:41:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:54.772 11:41:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1698259 00:04:54.772 11:41:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:54.772 11:41:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:54.772 11:41:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1698259' 00:04:54.772 killing process with pid 1698259 00:04:54.772 11:41:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1698259 00:04:54.772 11:41:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1698259 00:04:55.033 11:41:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1698550 00:04:55.033 11:41:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1698550 ']' 00:04:55.033 11:41:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1698550 00:04:55.033 11:41:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:04:55.033 11:41:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:55.033 11:41:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1698550 00:04:55.033 11:41:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:55.033 11:41:57 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:55.033 11:41:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1698550' 00:04:55.033 killing process with pid 1698550 00:04:55.033 11:41:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1698550 00:04:55.033 11:41:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1698550 00:04:55.294 00:04:55.294 real 0m2.721s 00:04:55.294 user 0m3.051s 00:04:55.294 sys 0m0.824s 00:04:55.294 11:41:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:55.294 11:41:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:55.294 ************************************ 00:04:55.294 END TEST locking_app_on_unlocked_coremask 00:04:55.294 ************************************ 00:04:55.294 11:41:57 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:55.294 11:41:57 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:55.294 11:41:57 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:55.294 11:41:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:55.294 ************************************ 00:04:55.294 START TEST locking_app_on_locked_coremask 00:04:55.294 ************************************ 00:04:55.294 11:41:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:04:55.294 11:41:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1698921 00:04:55.294 11:41:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1698921 /var/tmp/spdk.sock 00:04:55.294 11:41:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:55.294 11:41:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1698921 ']' 00:04:55.294 11:41:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.294 11:41:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:55.294 11:41:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.294 11:41:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:55.294 11:41:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:55.555 [2024-10-11 11:41:58.037961] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:04:55.555 [2024-10-11 11:41:58.038011] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1698921 ] 00:04:55.555 [2024-10-11 11:41:58.114542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.555 [2024-10-11 11:41:58.145203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.127 11:41:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:56.127 11:41:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:04:56.387 11:41:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1699105 00:04:56.387 11:41:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1699105 /var/tmp/spdk2.sock 00:04:56.387 11:41:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:04:56.387 11:41:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:56.387 11:41:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1699105 /var/tmp/spdk2.sock 00:04:56.387 11:41:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:56.387 11:41:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:56.387 11:41:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:56.387 11:41:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:56.387 11:41:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1699105 /var/tmp/spdk2.sock 00:04:56.387 11:41:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1699105 ']' 00:04:56.387 11:41:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:56.387 11:41:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:56.387 11:41:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:56.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:56.387 11:41:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:56.387 11:41:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:56.387 [2024-10-11 11:41:58.889079] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:04:56.387 [2024-10-11 11:41:58.889137] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1699105 ] 00:04:56.387 [2024-10-11 11:41:58.965559] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1698921 has claimed it. 00:04:56.387 [2024-10-11 11:41:58.965597] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:56.999 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1699105) - No such process 00:04:56.999 ERROR: process (pid: 1699105) is no longer running 00:04:56.999 11:41:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:56.999 11:41:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:04:56.999 11:41:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:04:56.999 11:41:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:56.999 11:41:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:56.999 11:41:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:56.999 11:41:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1698921 00:04:56.999 11:41:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1698921 00:04:56.999 11:41:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:57.260 lslocks: write error 00:04:57.260 11:41:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1698921 00:04:57.260 11:41:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1698921 ']' 00:04:57.260 11:41:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1698921 00:04:57.260 11:41:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:04:57.260 11:41:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:57.260 11:41:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1698921 00:04:57.260 11:41:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:57.260 11:41:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:57.260 11:41:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1698921' 00:04:57.260 killing process with pid 1698921 00:04:57.260 11:41:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1698921 00:04:57.260 11:41:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1698921 00:04:57.520 00:04:57.520 real 0m2.170s 00:04:57.520 user 0m2.460s 00:04:57.520 sys 0m0.612s 00:04:57.520 11:42:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:04:57.520 11:42:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:57.520 ************************************ 00:04:57.520 END TEST locking_app_on_locked_coremask 00:04:57.520 ************************************ 00:04:57.520 11:42:00 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:57.520 11:42:00 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:57.520 11:42:00 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:57.520 11:42:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:57.781 ************************************ 00:04:57.781 START TEST locking_overlapped_coremask 00:04:57.781 ************************************ 00:04:57.781 11:42:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:04:57.781 11:42:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1699343 00:04:57.781 11:42:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1699343 /var/tmp/spdk.sock 00:04:57.781 11:42:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:57.781 11:42:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1699343 ']' 00:04:57.781 11:42:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.781 11:42:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:57.781 11:42:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.781 11:42:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:57.781 11:42:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:57.781 [2024-10-11 11:42:00.285729] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:04:57.781 [2024-10-11 11:42:00.285786] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1699343 ] 00:04:57.781 [2024-10-11 11:42:00.367031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:57.781 [2024-10-11 11:42:00.411191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:57.781 [2024-10-11 11:42:00.411494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.781 [2024-10-11 11:42:00.411494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:58.543 11:42:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:58.543 11:42:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:04:58.543 11:42:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1699694 00:04:58.543 11:42:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1699694 /var/tmp/spdk2.sock 00:04:58.543 11:42:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:04:58.543 11:42:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:58.543 11:42:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1699694 /var/tmp/spdk2.sock 00:04:58.543 11:42:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:58.543 11:42:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:58.543 11:42:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:58.543 11:42:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:58.543 11:42:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1699694 /var/tmp/spdk2.sock 00:04:58.543 11:42:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1699694 ']' 00:04:58.543 11:42:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:58.543 11:42:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:58.543 11:42:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:58.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:58.543 11:42:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:58.543 11:42:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:58.543 [2024-10-11 11:42:01.145220] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:04:58.543 [2024-10-11 11:42:01.145277] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1699694 ] 00:04:58.543 [2024-10-11 11:42:01.238958] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1699343 has claimed it. 00:04:58.543 [2024-10-11 11:42:01.239006] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:59.114 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1699694) - No such process 00:04:59.114 ERROR: process (pid: 1699694) is no longer running 00:04:59.114 11:42:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:59.114 11:42:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:04:59.114 11:42:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:04:59.114 11:42:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:59.114 11:42:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:59.114 11:42:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:59.114 11:42:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:59.114 11:42:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:59.114 11:42:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:59.114 11:42:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:59.114 11:42:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1699343 00:04:59.114 11:42:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 1699343 ']' 00:04:59.114 11:42:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 1699343 00:04:59.114 11:42:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:04:59.114 11:42:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:59.114 11:42:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1699343 00:04:59.374 11:42:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:59.374 11:42:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:59.374 11:42:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1699343' 00:04:59.374 killing process with pid 1699343 00:04:59.374 11:42:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 1699343 00:04:59.374 11:42:01 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 1699343 00:04:59.374 00:04:59.374 real 0m1.789s 00:04:59.374 user 0m5.142s 00:04:59.374 sys 0m0.426s 00:04:59.374 11:42:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:59.374 11:42:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:59.374 ************************************ 00:04:59.374 END TEST locking_overlapped_coremask 00:04:59.374 ************************************ 00:04:59.374 11:42:02 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:59.374 11:42:02 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:59.374 11:42:02 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:59.374 11:42:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:59.634 ************************************ 00:04:59.634 START TEST locking_overlapped_coremask_via_rpc 00:04:59.634 ************************************ 00:04:59.634 11:42:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:04:59.634 11:42:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1699895 00:04:59.634 11:42:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1699895 /var/tmp/spdk.sock 00:04:59.634 11:42:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:59.634 11:42:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1699895 ']' 00:04:59.634 11:42:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.634 11:42:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:59.634 11:42:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.634 11:42:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:59.634 11:42:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.634 [2024-10-11 11:42:02.146511] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:04:59.635 [2024-10-11 11:42:02.146563] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1699895 ] 00:04:59.635 [2024-10-11 11:42:02.223361] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:59.635 [2024-10-11 11:42:02.223386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:59.635 [2024-10-11 11:42:02.257425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.635 [2024-10-11 11:42:02.257569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.635 [2024-10-11 11:42:02.257570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:00.575 11:42:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:00.575 11:42:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:00.575 11:42:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1700116 00:05:00.575 11:42:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1700116 /var/tmp/spdk2.sock 00:05:00.575 11:42:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1700116 ']' 00:05:00.575 11:42:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:00.575 11:42:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:00.575 11:42:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:00.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:00.575 11:42:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:00.575 11:42:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.575 11:42:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:00.575 [2024-10-11 11:42:02.990676] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:05:00.575 [2024-10-11 11:42:02.990733] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1700116 ] 00:05:00.575 [2024-10-11 11:42:03.084614] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:00.575 [2024-10-11 11:42:03.084643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:00.575 [2024-10-11 11:42:03.158472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:00.575 [2024-10-11 11:42:03.162185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:00.575 [2024-10-11 11:42:03.162187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:01.146 11:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:01.146 11:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:01.146 11:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:01.146 11:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.146 11:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.146 11:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.146 11:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:01.146 11:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:01.146 11:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:01.146 11:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:01.146 11:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:01.146 11:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:01.146 11:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:01.146 11:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:01.146 11:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.146 11:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.146 [2024-10-11 11:42:03.775146] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1699895 has claimed it. 
00:05:01.146 request: 00:05:01.146 { 00:05:01.146 "method": "framework_enable_cpumask_locks", 00:05:01.146 "req_id": 1 00:05:01.146 } 00:05:01.146 Got JSON-RPC error response 00:05:01.146 response: 00:05:01.146 { 00:05:01.146 "code": -32603, 00:05:01.146 "message": "Failed to claim CPU core: 2" 00:05:01.146 } 00:05:01.146 11:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:01.146 11:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:01.146 11:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:01.146 11:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:01.146 11:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:01.146 11:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1699895 /var/tmp/spdk.sock 00:05:01.146 11:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1699895 ']' 00:05:01.146 11:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.146 11:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:01.146 11:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.146 11:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:01.146 11:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.407 11:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:01.407 11:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:01.407 11:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1700116 /var/tmp/spdk2.sock 00:05:01.407 11:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1700116 ']' 00:05:01.407 11:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:01.407 11:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:01.407 11:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:01.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
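The -32603 "Failed to claim CPU core: 2" response above is the expected outcome of this test: the first spdk_tgt was started with -m 0x7 (cores 0-2) and the second with -m 0x1c (cores 2-4), both with --disable-cpumask-locks, so the two masks overlap on core 2. Enabling the locks over RPC on the first target claims /var/tmp/spdk_cpu_lock_000..002; the same RPC against the second target's socket then fails because core 2 is already locked. A minimal sketch of that sequence, assuming SPDK's scripts/rpc.py with its -s socket option (binary and socket paths as echoed in this run):

  # launch two targets with overlapping core masks, per-core lock files disabled at startup
  ./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
  ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
  # (the test waits for each RPC socket to come up before issuing commands)

  # first target claims /var/tmp/spdk_cpu_lock_000..002 for cores 0-2
  ./scripts/rpc.py framework_enable_cpumask_locks

  # second target cannot lock core 2 -> JSON-RPC error -32603 "Failed to claim CPU core: 2"
  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks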
00:05:01.407 11:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:01.407 11:42:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.669 11:42:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:01.669 11:42:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:01.669 11:42:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:01.669 11:42:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:01.669 11:42:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:01.669 11:42:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:01.669 00:05:01.669 real 0m2.061s 00:05:01.669 user 0m0.824s 00:05:01.669 sys 0m0.167s 00:05:01.669 11:42:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:01.669 11:42:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.669 ************************************ 00:05:01.669 END TEST locking_overlapped_coremask_via_rpc 00:05:01.669 ************************************ 00:05:01.669 11:42:04 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:01.669 11:42:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1699895 ]] 00:05:01.669 11:42:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1699895 00:05:01.669 11:42:04 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1699895 ']' 00:05:01.669 11:42:04 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1699895 00:05:01.669 11:42:04 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:01.669 11:42:04 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:01.669 11:42:04 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1699895 00:05:01.669 11:42:04 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:01.669 11:42:04 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:01.669 11:42:04 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1699895' 00:05:01.669 killing process with pid 1699895 00:05:01.669 11:42:04 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1699895 00:05:01.669 11:42:04 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1699895 00:05:01.929 11:42:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1700116 ]] 00:05:01.929 11:42:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1700116 00:05:01.929 11:42:04 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1700116 ']' 00:05:01.929 11:42:04 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1700116 00:05:01.929 11:42:04 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:01.929 11:42:04 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:05:01.929 11:42:04 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1700116 00:05:01.930 11:42:04 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:01.930 11:42:04 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:01.930 11:42:04 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1700116' 00:05:01.930 killing process with pid 1700116 00:05:01.930 11:42:04 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1700116 00:05:01.930 11:42:04 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1700116 00:05:02.190 11:42:04 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:02.190 11:42:04 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:02.190 11:42:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1699895 ]] 00:05:02.190 11:42:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1699895 00:05:02.190 11:42:04 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1699895 ']' 00:05:02.190 11:42:04 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1699895 00:05:02.190 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1699895) - No such process 00:05:02.190 11:42:04 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1699895 is not found' 00:05:02.190 Process with pid 1699895 is not found 00:05:02.190 11:42:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1700116 ]] 00:05:02.190 11:42:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1700116 00:05:02.190 11:42:04 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1700116 ']' 00:05:02.190 11:42:04 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1700116 00:05:02.190 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1700116) - No such process 00:05:02.190 11:42:04 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1700116 is not found' 00:05:02.190 Process with pid 1700116 is not found 00:05:02.190 11:42:04 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:02.190 00:05:02.190 real 0m15.905s 00:05:02.190 user 0m27.930s 00:05:02.190 sys 0m4.939s 00:05:02.190 11:42:04 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:02.190 11:42:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:02.190 ************************************ 00:05:02.190 END TEST cpu_locks 00:05:02.190 ************************************ 00:05:02.190 00:05:02.190 real 0m41.660s 00:05:02.190 user 1m21.912s 00:05:02.190 sys 0m8.315s 00:05:02.190 11:42:04 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:02.190 11:42:04 event -- common/autotest_common.sh@10 -- # set +x 00:05:02.190 ************************************ 00:05:02.190 END TEST event 00:05:02.190 ************************************ 00:05:02.190 11:42:04 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:02.190 11:42:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:02.190 11:42:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:02.190 11:42:04 -- common/autotest_common.sh@10 -- # set +x 00:05:02.190 ************************************ 00:05:02.190 START TEST thread 00:05:02.190 ************************************ 00:05:02.190 11:42:04 thread -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:02.450 * Looking for test storage... 00:05:02.450 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:02.450 11:42:04 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:02.450 11:42:04 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:05:02.450 11:42:04 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:02.450 11:42:05 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:02.450 11:42:05 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:02.450 11:42:05 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:02.451 11:42:05 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:02.451 11:42:05 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.451 11:42:05 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:02.451 11:42:05 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:02.451 11:42:05 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:02.451 11:42:05 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:02.451 11:42:05 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:02.451 11:42:05 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:02.451 11:42:05 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:02.451 11:42:05 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:02.451 11:42:05 thread -- scripts/common.sh@345 -- # : 1 00:05:02.451 11:42:05 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:02.451 11:42:05 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:02.451 11:42:05 thread -- scripts/common.sh@365 -- # decimal 1 00:05:02.451 11:42:05 thread -- scripts/common.sh@353 -- # local d=1 00:05:02.451 11:42:05 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.451 11:42:05 thread -- scripts/common.sh@355 -- # echo 1 00:05:02.451 11:42:05 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:02.451 11:42:05 thread -- scripts/common.sh@366 -- # decimal 2 00:05:02.451 11:42:05 thread -- scripts/common.sh@353 -- # local d=2 00:05:02.451 11:42:05 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.451 11:42:05 thread -- scripts/common.sh@355 -- # echo 2 00:05:02.451 11:42:05 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:02.451 11:42:05 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:02.451 11:42:05 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:02.451 11:42:05 thread -- scripts/common.sh@368 -- # return 0 00:05:02.451 11:42:05 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.451 11:42:05 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:02.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.451 --rc genhtml_branch_coverage=1 00:05:02.451 --rc genhtml_function_coverage=1 00:05:02.451 --rc genhtml_legend=1 00:05:02.451 --rc geninfo_all_blocks=1 00:05:02.451 --rc geninfo_unexecuted_blocks=1 00:05:02.451 00:05:02.451 ' 00:05:02.451 11:42:05 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:02.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.451 --rc genhtml_branch_coverage=1 00:05:02.451 --rc genhtml_function_coverage=1 00:05:02.451 --rc genhtml_legend=1 00:05:02.451 --rc geninfo_all_blocks=1 00:05:02.451 --rc geninfo_unexecuted_blocks=1 00:05:02.451 
00:05:02.451 ' 00:05:02.451 11:42:05 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:02.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.451 --rc genhtml_branch_coverage=1 00:05:02.451 --rc genhtml_function_coverage=1 00:05:02.451 --rc genhtml_legend=1 00:05:02.451 --rc geninfo_all_blocks=1 00:05:02.451 --rc geninfo_unexecuted_blocks=1 00:05:02.451 00:05:02.451 ' 00:05:02.451 11:42:05 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:02.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.451 --rc genhtml_branch_coverage=1 00:05:02.451 --rc genhtml_function_coverage=1 00:05:02.451 --rc genhtml_legend=1 00:05:02.451 --rc geninfo_all_blocks=1 00:05:02.451 --rc geninfo_unexecuted_blocks=1 00:05:02.451 00:05:02.451 ' 00:05:02.451 11:42:05 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:02.451 11:42:05 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:02.451 11:42:05 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:02.451 11:42:05 thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.451 ************************************ 00:05:02.451 START TEST thread_poller_perf 00:05:02.451 ************************************ 00:05:02.451 11:42:05 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:02.451 [2024-10-11 11:42:05.082907] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:05:02.451 [2024-10-11 11:42:05.083011] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1700565 ] 00:05:02.711 [2024-10-11 11:42:05.165896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.711 [2024-10-11 11:42:05.206438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.711 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:03.652 [2024-10-11T09:42:06.355Z] ====================================== 00:05:03.652 [2024-10-11T09:42:06.355Z] busy:2408318146 (cyc) 00:05:03.652 [2024-10-11T09:42:06.355Z] total_run_count: 419000 00:05:03.652 [2024-10-11T09:42:06.355Z] tsc_hz: 2400000000 (cyc) 00:05:03.652 [2024-10-11T09:42:06.355Z] ====================================== 00:05:03.652 [2024-10-11T09:42:06.355Z] poller_cost: 5747 (cyc), 2394 (nsec) 00:05:03.652 00:05:03.652 real 0m1.179s 00:05:03.652 user 0m1.092s 00:05:03.652 sys 0m0.082s 00:05:03.652 11:42:06 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:03.652 11:42:06 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:03.652 ************************************ 00:05:03.652 END TEST thread_poller_perf 00:05:03.652 ************************************ 00:05:03.652 11:42:06 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:03.652 11:42:06 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:03.652 11:42:06 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:03.652 11:42:06 thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.652 ************************************ 00:05:03.652 START TEST thread_poller_perf 00:05:03.652 ************************************ 00:05:03.652 11:42:06 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:03.652 [2024-10-11 11:42:06.338215] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:05:03.652 [2024-10-11 11:42:06.338312] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1700920 ] 00:05:03.913 [2024-10-11 11:42:06.421339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.913 [2024-10-11 11:42:06.460507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.913 Running 1000 pollers for 1 seconds with 0 microseconds period. 
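The poller_cost line follows from the other numbers in the table: busy cycles divided by total_run_count, then converted to nanoseconds at the reported tsc_hz. poller_perf was invoked here as test/thread/poller_perf/poller_perf -b 1000 -l <period_us> -t 1, i.e. 1000 registered pollers run for 1 second. Working it through for the 1 microsecond-period run above:

  poller_cost = busy / total_run_count = 2408318146 cyc / 419000 ≈ 5747 cyc
  5747 cyc / 2.4 GHz (tsc_hz 2400000000) ≈ 2394 nsec

The 0 microsecond-period run below works out the same way: 2401411268 / 5562000 ≈ 431 cyc ≈ 179 nsec.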
00:05:04.853 [2024-10-11T09:42:07.556Z] ====================================== 00:05:04.853 [2024-10-11T09:42:07.556Z] busy:2401411268 (cyc) 00:05:04.853 [2024-10-11T09:42:07.556Z] total_run_count: 5562000 00:05:04.853 [2024-10-11T09:42:07.556Z] tsc_hz: 2400000000 (cyc) 00:05:04.854 [2024-10-11T09:42:07.557Z] ====================================== 00:05:04.854 [2024-10-11T09:42:07.557Z] poller_cost: 431 (cyc), 179 (nsec) 00:05:04.854 00:05:04.854 real 0m1.170s 00:05:04.854 user 0m1.081s 00:05:04.854 sys 0m0.084s 00:05:04.854 11:42:07 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:04.854 11:42:07 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:04.854 ************************************ 00:05:04.854 END TEST thread_poller_perf 00:05:04.854 ************************************ 00:05:04.854 11:42:07 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:04.854 00:05:04.854 real 0m2.703s 00:05:04.854 user 0m2.349s 00:05:04.854 sys 0m0.367s 00:05:04.854 11:42:07 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:04.854 11:42:07 thread -- common/autotest_common.sh@10 -- # set +x 00:05:04.854 ************************************ 00:05:04.854 END TEST thread 00:05:04.854 ************************************ 00:05:05.115 11:42:07 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:05.115 11:42:07 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:05.115 11:42:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:05.115 11:42:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:05.115 11:42:07 -- common/autotest_common.sh@10 -- # set +x 00:05:05.115 ************************************ 00:05:05.115 START TEST app_cmdline 00:05:05.115 ************************************ 00:05:05.115 11:42:07 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:05.115 * Looking for test storage... 
00:05:05.115 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:05.115 11:42:07 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:05.115 11:42:07 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:05:05.115 11:42:07 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:05.115 11:42:07 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:05.115 11:42:07 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.115 11:42:07 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.115 11:42:07 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.115 11:42:07 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.115 11:42:07 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.115 11:42:07 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.115 11:42:07 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:05.115 11:42:07 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.115 11:42:07 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:05.115 11:42:07 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.115 11:42:07 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:05.115 11:42:07 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:05.115 11:42:07 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:05.115 11:42:07 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.115 11:42:07 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:05.115 11:42:07 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:05.115 11:42:07 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:05.115 11:42:07 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.115 11:42:07 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:05.115 11:42:07 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.115 11:42:07 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:05.115 11:42:07 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:05.115 11:42:07 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.115 11:42:07 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:05.115 11:42:07 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.115 11:42:07 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.115 11:42:07 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.115 11:42:07 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:05.115 11:42:07 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.115 11:42:07 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:05.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.115 --rc genhtml_branch_coverage=1 00:05:05.115 --rc genhtml_function_coverage=1 00:05:05.115 --rc genhtml_legend=1 00:05:05.115 --rc geninfo_all_blocks=1 00:05:05.115 --rc geninfo_unexecuted_blocks=1 00:05:05.115 00:05:05.115 ' 00:05:05.115 11:42:07 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:05.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.115 --rc genhtml_branch_coverage=1 00:05:05.115 --rc genhtml_function_coverage=1 00:05:05.115 --rc genhtml_legend=1 00:05:05.115 --rc geninfo_all_blocks=1 00:05:05.115 --rc geninfo_unexecuted_blocks=1 
00:05:05.115 00:05:05.115 ' 00:05:05.115 11:42:07 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:05.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.115 --rc genhtml_branch_coverage=1 00:05:05.115 --rc genhtml_function_coverage=1 00:05:05.115 --rc genhtml_legend=1 00:05:05.115 --rc geninfo_all_blocks=1 00:05:05.115 --rc geninfo_unexecuted_blocks=1 00:05:05.115 00:05:05.115 ' 00:05:05.115 11:42:07 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:05.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.115 --rc genhtml_branch_coverage=1 00:05:05.115 --rc genhtml_function_coverage=1 00:05:05.115 --rc genhtml_legend=1 00:05:05.115 --rc geninfo_all_blocks=1 00:05:05.115 --rc geninfo_unexecuted_blocks=1 00:05:05.115 00:05:05.115 ' 00:05:05.115 11:42:07 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:05.115 11:42:07 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1701319 00:05:05.115 11:42:07 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1701319 00:05:05.115 11:42:07 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 1701319 ']' 00:05:05.115 11:42:07 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:05.115 11:42:07 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.115 11:42:07 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:05.115 11:42:07 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.115 11:42:07 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:05.115 11:42:07 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:05.376 [2024-10-11 11:42:07.863350] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:05:05.376 [2024-10-11 11:42:07.863419] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1701319 ] 00:05:05.376 [2024-10-11 11:42:07.943973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.376 [2024-10-11 11:42:07.985074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.316 11:42:08 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:06.316 11:42:08 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:05:06.316 11:42:08 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:06.316 { 00:05:06.316 "version": "SPDK v25.01-pre git sha1 5031f0f3b", 00:05:06.316 "fields": { 00:05:06.316 "major": 25, 00:05:06.316 "minor": 1, 00:05:06.316 "patch": 0, 00:05:06.316 "suffix": "-pre", 00:05:06.316 "commit": "5031f0f3b" 00:05:06.316 } 00:05:06.316 } 00:05:06.316 11:42:08 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:06.316 11:42:08 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:06.316 11:42:08 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:06.316 11:42:08 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:06.316 11:42:08 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:06.316 11:42:08 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:06.316 11:42:08 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.316 11:42:08 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:06.316 11:42:08 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:06.316 11:42:08 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.316 11:42:08 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:06.316 11:42:08 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:06.316 11:42:08 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:06.316 11:42:08 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:05:06.316 11:42:08 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:06.316 11:42:08 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:06.316 11:42:08 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:06.316 11:42:08 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:06.316 11:42:08 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:06.316 11:42:08 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:06.316 11:42:08 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:06.316 11:42:08 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:06.316 11:42:08 app_cmdline -- common/autotest_common.sh@644 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:06.316 11:42:08 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:06.577 request: 00:05:06.577 { 00:05:06.577 "method": "env_dpdk_get_mem_stats", 00:05:06.577 "req_id": 1 00:05:06.577 } 00:05:06.577 Got JSON-RPC error response 00:05:06.577 response: 00:05:06.577 { 00:05:06.577 "code": -32601, 00:05:06.577 "message": "Method not found" 00:05:06.577 } 00:05:06.577 11:42:09 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:05:06.577 11:42:09 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:06.577 11:42:09 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:06.577 11:42:09 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:06.577 11:42:09 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1701319 00:05:06.577 11:42:09 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 1701319 ']' 00:05:06.577 11:42:09 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 1701319 00:05:06.577 11:42:09 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:05:06.577 11:42:09 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:06.577 11:42:09 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1701319 00:05:06.577 11:42:09 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:06.577 11:42:09 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:06.577 11:42:09 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1701319' 00:05:06.577 killing process with pid 1701319 00:05:06.577 11:42:09 app_cmdline -- common/autotest_common.sh@969 -- # kill 1701319 00:05:06.577 11:42:09 app_cmdline -- common/autotest_common.sh@974 -- # wait 1701319 00:05:06.838 00:05:06.838 real 0m1.688s 00:05:06.838 user 0m2.017s 00:05:06.838 sys 0m0.461s 00:05:06.838 11:42:09 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:06.838 11:42:09 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:06.838 ************************************ 00:05:06.838 END TEST app_cmdline 00:05:06.838 ************************************ 00:05:06.838 11:42:09 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:06.838 11:42:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:06.838 11:42:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:06.838 11:42:09 -- common/autotest_common.sh@10 -- # set +x 00:05:06.838 ************************************ 00:05:06.838 START TEST version 00:05:06.838 ************************************ 00:05:06.838 11:42:09 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:06.838 * Looking for test storage... 
00:05:06.838 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:06.838 11:42:09 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:06.838 11:42:09 version -- common/autotest_common.sh@1691 -- # lcov --version 00:05:06.838 11:42:09 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:07.099 11:42:09 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:07.099 11:42:09 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:07.099 11:42:09 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:07.099 11:42:09 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:07.099 11:42:09 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:07.099 11:42:09 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:07.099 11:42:09 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:07.099 11:42:09 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:07.099 11:42:09 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:07.099 11:42:09 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:07.099 11:42:09 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:07.099 11:42:09 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:07.099 11:42:09 version -- scripts/common.sh@344 -- # case "$op" in 00:05:07.099 11:42:09 version -- scripts/common.sh@345 -- # : 1 00:05:07.099 11:42:09 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:07.099 11:42:09 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:07.099 11:42:09 version -- scripts/common.sh@365 -- # decimal 1 00:05:07.099 11:42:09 version -- scripts/common.sh@353 -- # local d=1 00:05:07.099 11:42:09 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:07.099 11:42:09 version -- scripts/common.sh@355 -- # echo 1 00:05:07.099 11:42:09 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:07.099 11:42:09 version -- scripts/common.sh@366 -- # decimal 2 00:05:07.099 11:42:09 version -- scripts/common.sh@353 -- # local d=2 00:05:07.099 11:42:09 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:07.099 11:42:09 version -- scripts/common.sh@355 -- # echo 2 00:05:07.099 11:42:09 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:07.099 11:42:09 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:07.099 11:42:09 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:07.099 11:42:09 version -- scripts/common.sh@368 -- # return 0 00:05:07.099 11:42:09 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:07.099 11:42:09 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:07.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.099 --rc genhtml_branch_coverage=1 00:05:07.099 --rc genhtml_function_coverage=1 00:05:07.099 --rc genhtml_legend=1 00:05:07.099 --rc geninfo_all_blocks=1 00:05:07.099 --rc geninfo_unexecuted_blocks=1 00:05:07.099 00:05:07.099 ' 00:05:07.099 11:42:09 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:07.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.099 --rc genhtml_branch_coverage=1 00:05:07.099 --rc genhtml_function_coverage=1 00:05:07.099 --rc genhtml_legend=1 00:05:07.099 --rc geninfo_all_blocks=1 00:05:07.099 --rc geninfo_unexecuted_blocks=1 00:05:07.099 00:05:07.099 ' 00:05:07.099 11:42:09 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:07.100 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.100 --rc genhtml_branch_coverage=1 00:05:07.100 --rc genhtml_function_coverage=1 00:05:07.100 --rc genhtml_legend=1 00:05:07.100 --rc geninfo_all_blocks=1 00:05:07.100 --rc geninfo_unexecuted_blocks=1 00:05:07.100 00:05:07.100 ' 00:05:07.100 11:42:09 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:07.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.100 --rc genhtml_branch_coverage=1 00:05:07.100 --rc genhtml_function_coverage=1 00:05:07.100 --rc genhtml_legend=1 00:05:07.100 --rc geninfo_all_blocks=1 00:05:07.100 --rc geninfo_unexecuted_blocks=1 00:05:07.100 00:05:07.100 ' 00:05:07.100 11:42:09 version -- app/version.sh@17 -- # get_header_version major 00:05:07.100 11:42:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:07.100 11:42:09 version -- app/version.sh@14 -- # cut -f2 00:05:07.100 11:42:09 version -- app/version.sh@14 -- # tr -d '"' 00:05:07.100 11:42:09 version -- app/version.sh@17 -- # major=25 00:05:07.100 11:42:09 version -- app/version.sh@18 -- # get_header_version minor 00:05:07.100 11:42:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:07.100 11:42:09 version -- app/version.sh@14 -- # cut -f2 00:05:07.100 11:42:09 version -- app/version.sh@14 -- # tr -d '"' 00:05:07.100 11:42:09 version -- app/version.sh@18 -- # minor=1 00:05:07.100 11:42:09 version -- app/version.sh@19 -- # get_header_version patch 00:05:07.100 11:42:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:07.100 11:42:09 version -- app/version.sh@14 -- # cut -f2 00:05:07.100 11:42:09 version -- app/version.sh@14 -- # tr -d '"' 00:05:07.100 11:42:09 version -- app/version.sh@19 -- # patch=0 00:05:07.100 11:42:09 version -- app/version.sh@20 -- # get_header_version suffix 00:05:07.100 11:42:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:07.100 11:42:09 version -- app/version.sh@14 -- # cut -f2 00:05:07.100 11:42:09 version -- app/version.sh@14 -- # tr -d '"' 00:05:07.100 11:42:09 version -- app/version.sh@20 -- # suffix=-pre 00:05:07.100 11:42:09 version -- app/version.sh@22 -- # version=25.1 00:05:07.100 11:42:09 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:07.100 11:42:09 version -- app/version.sh@28 -- # version=25.1rc0 00:05:07.100 11:42:09 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:07.100 11:42:09 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:07.100 11:42:09 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:07.100 11:42:09 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:07.100 00:05:07.100 real 0m0.279s 00:05:07.100 user 0m0.172s 00:05:07.100 sys 0m0.154s 00:05:07.100 11:42:09 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:07.100 
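The major/minor/patch/suffix values above are read straight out of include/spdk/version.h; each get_header_version call is a grep/cut/tr pipeline over the corresponding #define line. A rough sketch of that pipeline, using the paths echoed in this run (field 2 of the #define line, quotes stripped):

  ver_h=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
  major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$ver_h" | cut -f2 | tr -d '"')    # 25 here
  minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$ver_h" | cut -f2 | tr -d '"')    # 1
  suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$ver_h" | cut -f2 | tr -d '"')  # -pre

version.sh then assembles 25.1 (the 0 patch level is dropped), turns the -pre suffix into 25.1rc0, and checks that string against python3 -c 'import spdk; print(spdk.__version__)', which also reports 25.1rc0 in this run.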
11:42:09 version -- common/autotest_common.sh@10 -- # set +x 00:05:07.100 ************************************ 00:05:07.100 END TEST version 00:05:07.100 ************************************ 00:05:07.100 11:42:09 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:07.100 11:42:09 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:07.100 11:42:09 -- spdk/autotest.sh@194 -- # uname -s 00:05:07.100 11:42:09 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:07.100 11:42:09 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:07.100 11:42:09 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:07.100 11:42:09 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:07.100 11:42:09 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:05:07.100 11:42:09 -- spdk/autotest.sh@256 -- # timing_exit lib 00:05:07.100 11:42:09 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:07.100 11:42:09 -- common/autotest_common.sh@10 -- # set +x 00:05:07.100 11:42:09 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:05:07.100 11:42:09 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:05:07.100 11:42:09 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:05:07.100 11:42:09 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:05:07.100 11:42:09 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:05:07.100 11:42:09 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:05:07.100 11:42:09 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:07.100 11:42:09 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:07.100 11:42:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:07.100 11:42:09 -- common/autotest_common.sh@10 -- # set +x 00:05:07.100 ************************************ 00:05:07.100 START TEST nvmf_tcp 00:05:07.100 ************************************ 00:05:07.100 11:42:09 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:07.361 * Looking for test storage... 
00:05:07.361 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:07.361 11:42:09 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:07.361 11:42:09 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:05:07.361 11:42:09 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:07.361 11:42:09 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:07.361 11:42:09 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:07.361 11:42:09 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:07.361 11:42:09 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:07.361 11:42:09 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:07.361 11:42:09 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:07.361 11:42:09 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:07.361 11:42:09 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:07.361 11:42:09 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:07.361 11:42:09 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:07.361 11:42:09 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:07.361 11:42:09 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:07.361 11:42:09 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:07.361 11:42:09 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:07.361 11:42:09 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:07.361 11:42:09 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:07.361 11:42:09 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:07.361 11:42:09 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:07.361 11:42:09 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:07.361 11:42:09 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:07.361 11:42:09 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:07.361 11:42:09 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:07.361 11:42:09 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:07.361 11:42:09 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:07.361 11:42:09 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:07.361 11:42:09 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:07.361 11:42:09 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:07.361 11:42:09 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:07.361 11:42:09 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:07.361 11:42:09 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:07.361 11:42:09 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:07.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.361 --rc genhtml_branch_coverage=1 00:05:07.361 --rc genhtml_function_coverage=1 00:05:07.361 --rc genhtml_legend=1 00:05:07.361 --rc geninfo_all_blocks=1 00:05:07.361 --rc geninfo_unexecuted_blocks=1 00:05:07.361 00:05:07.361 ' 00:05:07.361 11:42:09 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:07.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.361 --rc genhtml_branch_coverage=1 00:05:07.361 --rc genhtml_function_coverage=1 00:05:07.361 --rc genhtml_legend=1 00:05:07.361 --rc geninfo_all_blocks=1 00:05:07.361 --rc geninfo_unexecuted_blocks=1 00:05:07.361 00:05:07.361 ' 00:05:07.361 11:42:09 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:05:07.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.361 --rc genhtml_branch_coverage=1 00:05:07.361 --rc genhtml_function_coverage=1 00:05:07.361 --rc genhtml_legend=1 00:05:07.361 --rc geninfo_all_blocks=1 00:05:07.361 --rc geninfo_unexecuted_blocks=1 00:05:07.361 00:05:07.361 ' 00:05:07.361 11:42:09 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:07.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.361 --rc genhtml_branch_coverage=1 00:05:07.361 --rc genhtml_function_coverage=1 00:05:07.361 --rc genhtml_legend=1 00:05:07.361 --rc geninfo_all_blocks=1 00:05:07.361 --rc geninfo_unexecuted_blocks=1 00:05:07.361 00:05:07.361 ' 00:05:07.361 11:42:09 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:07.361 11:42:09 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:07.361 11:42:09 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:07.361 11:42:09 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:07.361 11:42:09 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:07.361 11:42:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:07.361 ************************************ 00:05:07.361 START TEST nvmf_target_core 00:05:07.361 ************************************ 00:05:07.361 11:42:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:07.623 * Looking for test storage... 00:05:07.623 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:07.623 11:42:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:07.623 11:42:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:05:07.623 11:42:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:07.623 11:42:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:07.623 11:42:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:07.623 11:42:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:07.623 11:42:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:07.623 11:42:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:07.623 11:42:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:07.623 11:42:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:07.623 11:42:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:07.623 11:42:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:07.623 11:42:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:07.623 11:42:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:07.623 11:42:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:07.623 11:42:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:07.623 11:42:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:07.623 11:42:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:07.623 11:42:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:07.623 11:42:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:07.623 11:42:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:07.623 11:42:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:07.623 11:42:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:07.623 11:42:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:07.623 11:42:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:07.623 11:42:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:07.623 11:42:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:07.623 11:42:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:07.623 11:42:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:07.623 11:42:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:07.623 11:42:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:07.623 11:42:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:07.623 11:42:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:07.623 11:42:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:07.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.623 --rc genhtml_branch_coverage=1 00:05:07.623 --rc genhtml_function_coverage=1 00:05:07.623 --rc genhtml_legend=1 00:05:07.623 --rc geninfo_all_blocks=1 00:05:07.623 --rc geninfo_unexecuted_blocks=1 00:05:07.623 00:05:07.623 ' 00:05:07.623 11:42:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:07.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.623 --rc genhtml_branch_coverage=1 00:05:07.623 --rc genhtml_function_coverage=1 00:05:07.623 --rc genhtml_legend=1 00:05:07.623 --rc geninfo_all_blocks=1 00:05:07.623 --rc geninfo_unexecuted_blocks=1 00:05:07.623 00:05:07.623 ' 00:05:07.623 11:42:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:07.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.623 --rc genhtml_branch_coverage=1 00:05:07.623 --rc genhtml_function_coverage=1 00:05:07.623 --rc genhtml_legend=1 00:05:07.623 --rc geninfo_all_blocks=1 00:05:07.623 --rc geninfo_unexecuted_blocks=1 00:05:07.624 00:05:07.624 ' 00:05:07.624 11:42:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:07.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.624 --rc genhtml_branch_coverage=1 00:05:07.624 --rc genhtml_function_coverage=1 00:05:07.624 --rc genhtml_legend=1 00:05:07.624 --rc geninfo_all_blocks=1 00:05:07.624 --rc geninfo_unexecuted_blocks=1 00:05:07.624 00:05:07.624 ' 00:05:07.624 11:42:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:07.624 11:42:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:07.624 11:42:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:07.624 11:42:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:07.624 11:42:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:07.624 11:42:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:07.624 11:42:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:07.624 11:42:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:07.624 11:42:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:07.624 11:42:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:07.624 11:42:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:07.624 11:42:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:07.624 11:42:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:07.624 11:42:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:07.624 11:42:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:07.624 11:42:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:07.624 11:42:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:07.624 11:42:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:07.624 11:42:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:07.624 11:42:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:07.624 11:42:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:07.624 11:42:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:07.624 11:42:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:07.624 11:42:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:07.624 11:42:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:07.624 11:42:10 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.624 11:42:10 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.624 11:42:10 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.624 11:42:10 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:07.624 11:42:10 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.624 11:42:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:07.624 11:42:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:07.624 11:42:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:07.624 11:42:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:07.624 11:42:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:07.624 11:42:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:07.624 11:42:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:07.624 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:07.624 11:42:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:07.624 11:42:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:07.624 11:42:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:07.624 11:42:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:07.624 11:42:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:07.624 11:42:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:07.624 11:42:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:07.624 11:42:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:07.624 11:42:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:07.624 11:42:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:07.624 
************************************ 00:05:07.624 START TEST nvmf_abort 00:05:07.624 ************************************ 00:05:07.624 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:07.886 * Looking for test storage... 00:05:07.886 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:07.886 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:07.886 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:05:07.886 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:07.886 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:07.886 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:07.886 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:07.886 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:07.886 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:07.886 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:07.886 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:07.886 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:07.886 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:07.886 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:07.886 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:07.886 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:07.886 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:07.886 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:07.886 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:07.886 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:07.886 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:07.886 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:07.886 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:07.886 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:07.886 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:07.886 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:07.886 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:07.886 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:07.886 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:07.886 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:07.886 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:07.886 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:07.886 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:07.886 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:07.886 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:07.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.886 --rc genhtml_branch_coverage=1 00:05:07.886 --rc genhtml_function_coverage=1 00:05:07.886 --rc genhtml_legend=1 00:05:07.886 --rc geninfo_all_blocks=1 00:05:07.886 --rc geninfo_unexecuted_blocks=1 00:05:07.886 00:05:07.886 ' 00:05:07.886 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:07.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.886 --rc genhtml_branch_coverage=1 00:05:07.886 --rc genhtml_function_coverage=1 00:05:07.886 --rc genhtml_legend=1 00:05:07.886 --rc geninfo_all_blocks=1 00:05:07.886 --rc geninfo_unexecuted_blocks=1 00:05:07.886 00:05:07.886 ' 00:05:07.886 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:07.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.886 --rc genhtml_branch_coverage=1 00:05:07.886 --rc genhtml_function_coverage=1 00:05:07.886 --rc genhtml_legend=1 00:05:07.886 --rc geninfo_all_blocks=1 00:05:07.886 --rc geninfo_unexecuted_blocks=1 00:05:07.886 00:05:07.886 ' 00:05:07.886 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:07.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.886 --rc genhtml_branch_coverage=1 00:05:07.886 --rc genhtml_function_coverage=1 00:05:07.886 --rc genhtml_legend=1 00:05:07.886 --rc geninfo_all_blocks=1 00:05:07.886 --rc geninfo_unexecuted_blocks=1 00:05:07.886 00:05:07.886 ' 00:05:07.886 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:07.887 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:07.887 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:05:07.887 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:07.887 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:07.887 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:07.887 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:07.887 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:07.887 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:07.887 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:07.887 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:07.887 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:07.887 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:07.887 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:07.887 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:07.887 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:07.887 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:07.887 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:07.887 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:07.887 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:07.887 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:07.887 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:07.887 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:07.887 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.887 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.887 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.887 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:07.887 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.887 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:07.887 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:07.887 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:07.887 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:07.887 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:07.887 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:07.887 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:07.887 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:07.887 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:07.887 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:07.887 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:07.887 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:07.887 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:07.887 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
00:05:07.887 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:05:07.887 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:07.887 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:05:07.887 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:05:07.887 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:05:07.887 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:07.887 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:07.887 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:07.887 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:05:07.887 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:05:07.887 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:07.887 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:16.028 11:42:17 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:05:16.028 Found 0000:31:00.0 (0x8086 - 0x159b) 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:05:16.028 Found 0000:31:00.1 (0x8086 - 0x159b) 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:16.028 11:42:17 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:05:16.028 Found net devices under 0000:31:00.0: cvl_0_0 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:05:16.028 Found net devices under 0000:31:00.1: cvl_0_1 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:16.028 11:42:17 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:16.028 11:42:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:16.028 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:16.028 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:16.028 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:16.028 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:16.028 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:16.028 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:16.028 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:16.028 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:16.028 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:16.028 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms 00:05:16.028 00:05:16.028 --- 10.0.0.2 ping statistics --- 00:05:16.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:16.028 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms 00:05:16.028 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:16.028 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:16.028 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:05:16.028 00:05:16.028 --- 10.0.0.1 ping statistics --- 00:05:16.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:16.028 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:05:16.028 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:16.028 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:05:16.028 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:05:16.028 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:16.029 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:05:16.029 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:05:16.029 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:16.029 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:05:16.029 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:05:16.029 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:16.029 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:05:16.029 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:16.029 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.029 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # nvmfpid=1706316 00:05:16.029 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 1706316 00:05:16.029 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:16.029 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 1706316 ']' 00:05:16.029 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.029 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:16.029 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.029 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:16.029 11:42:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.029 [2024-10-11 11:42:18.310329] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:05:16.029 [2024-10-11 11:42:18.310394] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:16.029 [2024-10-11 11:42:18.402429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:16.029 [2024-10-11 11:42:18.458186] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:16.029 [2024-10-11 11:42:18.458241] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:16.029 [2024-10-11 11:42:18.458250] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:16.029 [2024-10-11 11:42:18.458258] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:16.029 [2024-10-11 11:42:18.458264] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:16.029 [2024-10-11 11:42:18.460389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:16.029 [2024-10-11 11:42:18.460548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.029 [2024-10-11 11:42:18.460548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:16.601 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:16.601 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:05:16.601 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:05:16.601 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:16.601 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.601 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:16.601 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:16.601 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:16.601 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.601 [2024-10-11 11:42:19.191245] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:16.601 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:16.601 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:16.601 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:16.601 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.601 Malloc0 00:05:16.601 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:16.601 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:16.601 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:16.601 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.601 Delay0 
00:05:16.601 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:16.601 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:16.601 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:16.601 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.601 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:16.601 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:16.601 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:16.601 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.601 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:16.601 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:16.601 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:16.601 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.601 [2024-10-11 11:42:19.289417] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:16.601 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:16.601 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:16.601 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:16.601 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.863 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:16.863 11:42:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:16.863 [2024-10-11 11:42:19.420746] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:19.409 Initializing NVMe Controllers 00:05:19.409 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:19.409 controller IO queue size 128 less than required 00:05:19.409 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:19.409 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:19.409 Initialization complete. Launching workers. 
00:05:19.409 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28407 00:05:19.409 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28468, failed to submit 62 00:05:19.409 success 28411, unsuccessful 57, failed 0 00:05:19.409 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:19.409 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.409 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:19.409 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.409 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:19.409 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:19.409 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:05:19.409 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:19.409 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:19.409 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:19.409 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:19.409 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:19.409 rmmod nvme_tcp 00:05:19.409 rmmod nvme_fabrics 00:05:19.409 rmmod nvme_keyring 00:05:19.409 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:19.409 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:19.409 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:19.409 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 1706316 ']' 00:05:19.409 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 1706316 00:05:19.409 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 1706316 ']' 00:05:19.409 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 1706316 00:05:19.409 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:05:19.409 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:19.409 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1706316 00:05:19.409 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:05:19.409 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:05:19.409 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1706316' 00:05:19.409 killing process with pid 1706316 00:05:19.409 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 1706316 00:05:19.409 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 1706316 00:05:19.409 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:05:19.409 11:42:21 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:05:19.409 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:05:19.409 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:19.409 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:05:19.409 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:05:19.409 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:05:19.409 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:19.409 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:19.409 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:19.409 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:19.409 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:21.321 11:42:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:21.321 00:05:21.321 real 0m13.657s 00:05:21.321 user 0m14.406s 00:05:21.321 sys 0m6.713s 00:05:21.321 11:42:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:21.321 11:42:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:21.321 ************************************ 00:05:21.321 END TEST nvmf_abort 00:05:21.321 ************************************ 00:05:21.321 11:42:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:21.321 11:42:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:21.321 11:42:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:21.321 11:42:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:21.583 ************************************ 00:05:21.583 START TEST nvmf_ns_hotplug_stress 00:05:21.583 ************************************ 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:21.583 * Looking for test storage... 
00:05:21.583 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:21.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.583 --rc genhtml_branch_coverage=1 00:05:21.583 --rc genhtml_function_coverage=1 00:05:21.583 --rc genhtml_legend=1 00:05:21.583 --rc geninfo_all_blocks=1 00:05:21.583 --rc geninfo_unexecuted_blocks=1 00:05:21.583 00:05:21.583 ' 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:21.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.583 --rc genhtml_branch_coverage=1 00:05:21.583 --rc genhtml_function_coverage=1 00:05:21.583 --rc genhtml_legend=1 00:05:21.583 --rc geninfo_all_blocks=1 00:05:21.583 --rc geninfo_unexecuted_blocks=1 00:05:21.583 00:05:21.583 ' 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:21.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.583 --rc genhtml_branch_coverage=1 00:05:21.583 --rc genhtml_function_coverage=1 00:05:21.583 --rc genhtml_legend=1 00:05:21.583 --rc geninfo_all_blocks=1 00:05:21.583 --rc geninfo_unexecuted_blocks=1 00:05:21.583 00:05:21.583 ' 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:21.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.583 --rc genhtml_branch_coverage=1 00:05:21.583 --rc genhtml_function_coverage=1 00:05:21.583 --rc genhtml_legend=1 00:05:21.583 --rc geninfo_all_blocks=1 00:05:21.583 --rc geninfo_unexecuted_blocks=1 00:05:21.583 00:05:21.583 ' 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:21.583 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:21.584 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.584 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.584 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.584 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:21.584 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.584 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:21.584 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:21.584 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:21.584 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:21.584 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:21.584 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:21.584 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:21.584 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:21.584 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:21.584 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:21.584 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:21.584 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:21.584 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:21.584 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:05:21.584 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:21.584 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:05:21.584 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:05:21.584 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:05:21.584 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:21.584 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:21.584 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:21.584 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:05:21.584 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:05:21.584 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:21.584 11:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:05:29.731 Found 0000:31:00.0 (0x8086 - 0x159b) 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:29.731 
11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:05:29.731 Found 0000:31:00.1 (0x8086 - 0x159b) 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:05:29.731 Found net devices under 0000:31:00.0: cvl_0_0 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:05:29.731 Found net devices under 0000:31:00.1: cvl_0_1 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:29.731 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:29.731 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms 00:05:29.731 00:05:29.731 --- 10.0.0.2 ping statistics --- 00:05:29.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:29.731 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms 00:05:29.731 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:29.732 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:29.732 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:05:29.732 00:05:29.732 --- 10.0.0.1 ping statistics --- 00:05:29.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:29.732 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:05:29.732 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:29.732 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:05:29.732 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:05:29.732 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:29.732 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:05:29.732 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:05:29.732 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:29.732 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:05:29.732 11:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:05:29.732 11:42:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:29.732 11:42:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:05:29.732 11:42:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:29.732 11:42:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:29.732 11:42:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=1711412 00:05:29.732 11:42:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 1711412 00:05:29.732 11:42:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 1711412 ']' 00:05:29.732 11:42:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.732 11:42:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:29.732 11:42:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:29.732 11:42:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.732 11:42:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:29.732 11:42:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:29.732 [2024-10-11 11:42:32.073620] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:05:29.732 [2024-10-11 11:42:32.073690] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:29.732 [2024-10-11 11:42:32.165283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:29.732 [2024-10-11 11:42:32.218381] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:29.732 [2024-10-11 11:42:32.218433] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:29.732 [2024-10-11 11:42:32.218442] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:29.732 [2024-10-11 11:42:32.218449] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:29.732 [2024-10-11 11:42:32.218456] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
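For orientation, the target launch and RPC wait traced here boil down to the following shell sketch. It is a minimal reconstruction from the commands logged in this run (binary path, network namespace name, core mask and RPC socket path are taken verbatim from the trace); the real logic lives in the nvmfappstart/waitforlisten helpers in the SPDK test scripts, and the polling loop below is an assumption, not their implementation.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk

# Start the NVMe-oF target inside the target network namespace, cores 1-3 (mask 0xE),
# with the same flags recorded in this trace.
ip netns exec "$NVMF_TARGET_NAMESPACE" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!

# Poll the UNIX-domain RPC socket until the app answers, bailing out if it exited early.
while ! "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited before listening on /var/tmp/spdk.sock" >&2; exit 1; }
    sleep 0.5
done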
00:05:29.732 [2024-10-11 11:42:32.220456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:29.732 [2024-10-11 11:42:32.220614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.732 [2024-10-11 11:42:32.220615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:30.305 11:42:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:30.305 11:42:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:05:30.305 11:42:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:05:30.305 11:42:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:30.305 11:42:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:30.305 11:42:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:30.305 11:42:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:30.305 11:42:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:30.566 [2024-10-11 11:42:33.112006] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:30.566 11:42:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:30.828 11:42:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:30.828 [2024-10-11 11:42:33.515315] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:31.089 11:42:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:31.089 11:42:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:31.351 Malloc0 00:05:31.351 11:42:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:31.611 Delay0 00:05:31.611 11:42:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:31.871 11:42:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:31.871 NULL1 00:05:31.871 11:42:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:32.132 11:42:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:32.132 11:42:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1711804 00:05:32.132 11:42:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1711804 00:05:32.132 11:42:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.517 Read completed with error (sct=0, sc=11) 00:05:33.517 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.517 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.517 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.517 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.517 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.517 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.517 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.517 11:42:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:33.517 11:42:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:33.777 true 00:05:33.777 11:42:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1711804 00:05:33.777 11:42:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.718 11:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:34.718 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:34.718 11:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:34.718 11:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:34.978 true 00:05:34.978 11:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1711804 00:05:34.978 11:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.978 11:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:35.238 11:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:35.238 11:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:35.498 true 00:05:35.498 11:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1711804 00:05:35.498 11:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:35.498 11:42:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:35.498 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:35.498 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:35.759 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:35.759 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:35.759 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:35.759 11:42:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:35.759 11:42:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:36.020 true 00:05:36.020 11:42:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1711804 00:05:36.020 11:42:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.962 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:36.962 11:42:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.962 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:36.962 11:42:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:36.962 11:42:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:37.222 true 00:05:37.222 11:42:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1711804 00:05:37.222 11:42:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.222 11:42:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:37.482 11:42:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:37.482 11:42:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:37.743 true 00:05:37.744 11:42:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1711804 00:05:37.744 11:42:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.744 11:42:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.003 11:42:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:38.003 11:42:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:38.263 true 00:05:38.263 11:42:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1711804 00:05:38.263 11:42:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.263 11:42:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.523 11:42:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:38.523 11:42:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:38.784 true 00:05:38.784 11:42:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1711804 00:05:38.784 11:42:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.784 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:38.784 11:42:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.065 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:39.065 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:39.065 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:39.065 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:39.065 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:39.065 [2024-10-11 11:42:41.631951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.065 [2024-10-11 11:42:41.632020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
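For orientation, the namespace hot-plug RPCs exercised from here on (trace lines ns_hotplug_stress.sh@44 through @50) reduce to roughly the loop sketched below, issued while the spdk_nvme_perf reader started above is still alive. The per-iteration RPCs and their arguments are taken verbatim from this trace; the loop's termination condition and overall control flow are assumptions, not the script's actual implementation.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
null_size=1000
# PERF_PID is the spdk_nvme_perf process recorded at ns_hotplug_stress.sh@42 above.

while kill -0 "$PERF_PID" 2>/dev/null; do                              # keep going while the perf reader runs
    "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1       # detach nsid 1 under active I/O
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0     # re-attach the Delay0 namespace
    null_size=$((null_size + 1))
    "$rpc" bdev_null_resize NULL1 "$null_size"                         # grow NULL1: 1001, 1002, ...
done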
00:05:39.065 [2024-10-11 11:42:41.632052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:05:39.065 Message suppressed (identical ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd entries from 11:42:41.632082 through 11:42:41.639071): Read NLB 1 * block size 512 > SGL length 1
00:05:39.067 [2024-10-11 11:42:41.639099] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.067 [2024-10-11 11:42:41.639136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.067 [2024-10-11 11:42:41.639166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.067 [2024-10-11 11:42:41.639198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.067 [2024-10-11 11:42:41.639232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.067 [2024-10-11 11:42:41.639263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.067 [2024-10-11 11:42:41.639291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.067 [2024-10-11 11:42:41.639318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.067 [2024-10-11 11:42:41.639350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.067 [2024-10-11 11:42:41.639377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.067 [2024-10-11 11:42:41.639405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.067 [2024-10-11 11:42:41.639430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.067 [2024-10-11 11:42:41.639457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.067 [2024-10-11 11:42:41.639488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.067 [2024-10-11 11:42:41.639515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.067 [2024-10-11 11:42:41.639543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.067 [2024-10-11 11:42:41.639574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.067 [2024-10-11 11:42:41.639604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.067 [2024-10-11 11:42:41.639632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.067 [2024-10-11 11:42:41.639656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.067 [2024-10-11 11:42:41.639688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.067 [2024-10-11 11:42:41.639718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.067 [2024-10-11 11:42:41.639747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.067 [2024-10-11 11:42:41.639777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.067 [2024-10-11 11:42:41.639806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.639837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 
[2024-10-11 11:42:41.639867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.639899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.639930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.639962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.639992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.640019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.640047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.640079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.640107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.640143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.640174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.640235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.640264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.640297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.640330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.640362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.640390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.640423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.640453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.640486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.640518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.641098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.641130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.641161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.641190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.641219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.641246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.641273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.641303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.641330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.641357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.641386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.641418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.641448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.641475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.641500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.641529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.641558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.641587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.641614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.641640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.641667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.641696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.641726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.641753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.641781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.641808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.641841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.641869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.641901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.641928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.641958] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.641987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.642015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.642045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.642082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.642113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.642141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.642172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.642201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.642229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.642257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.642289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.642319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.642347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.642378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.642408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.642432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.642463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.642493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.642517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.642542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.642565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.642588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.642611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.642635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.642660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 
[2024-10-11 11:42:41.642684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.642709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.642739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.642769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.642800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.642831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.642858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.642887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.643027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.643058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.643087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.643117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.643145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.643179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.643210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.643238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.643269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.643299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.643326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.643357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.643388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.643418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.643446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.643474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.643502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.643848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.068 [2024-10-11 11:42:41.643880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.643910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.643937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.643965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.643999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.644029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.644066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.644093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.644123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.644152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.644190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.644218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.644252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.644279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.644310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.644338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.644366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.644395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.644422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.644450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.644476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.644505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.644533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.644561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.644592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.644622] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.644652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.644681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.644711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.644740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.644769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.644799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.644830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.644864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.644891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.644919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.644946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.644975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.645004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.645034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.645068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.645098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.645128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.645159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.645187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.645360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.645393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.645421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.645448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.645477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.645512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 
[2024-10-11 11:42:41.645550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.645586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.645618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.645654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.645689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.645713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.645746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.645779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.645815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.645849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.645879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.645906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.645933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.645963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.645994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.646024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.646055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.646096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.646127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.646163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.646194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.646222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.646254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.646282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.646317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.646344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.646373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.646400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.646428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.646459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.646487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.646519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.646566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.646597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.646632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.646662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.646694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.646725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.646760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.646789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.069 [2024-10-11 11:42:41.646820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.646847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.646878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.646908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.646937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.646971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.647006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.647037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.647068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.647095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.647126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.647153] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.647179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.647210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.647237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.647275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.647310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.647340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.647486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.647516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.647544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.647574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.647602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.647632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.647664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.647693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.647723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.647750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.647778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.647807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.647838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.647869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.647898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.647929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.647957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.648446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.648476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 
[2024-10-11 11:42:41.648503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.648531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.648560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.648591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.648625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.648654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.648682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.648711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.648743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.648773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.648802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.648830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.648860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.648888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.648922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.648946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.648970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.648993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.649016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.649045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.649078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.649108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.649138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.649166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.649197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.649224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.649251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.649279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.649308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.649337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.649367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.649394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.649426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.649453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.649484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.649512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.649538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.649568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.649601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.649630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.649663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.649690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.649718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.649746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.649777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.649806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.649840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.649868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.649902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.649931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.649959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.649987] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.650016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.650045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.650083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.650115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.650145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.650175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.070 [2024-10-11 11:42:41.650203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.650238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.650276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.650313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.650445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.650472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.650499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.650524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.650553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.650585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.650613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.650643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.650671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.650699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.650728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.650754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.650784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.650814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.650844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 
[2024-10-11 11:42:41.650867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.650896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.650925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.650955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.650987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.651015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.651045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.651074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.651103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.651134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.651163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.651191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.651221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.651250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.651279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.651314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.651345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.651374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.651404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.651433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.651461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.651488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.651515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.651543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.651570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.651598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.651626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.651655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.651689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.651718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.651748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.652090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.652120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.652149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.652180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.652207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.652237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.652265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:05:39.071 [2024-10-11 11:42:41.652295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.652325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.652354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.652384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.652416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.652445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.652474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.652503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.652532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.652558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.652586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.652613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.652644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
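The repeated error above records a read whose payload (NLB, the number of logical blocks, times the 512-byte block size) exceeds the 1-byte SGL buffer attached to the command, so the controller-bdev layer rejects the request. The following is a minimal, hypothetical C sketch of that length check, written for illustration only; the function name, signature, and error handling are assumptions, not SPDK's actual ctrlr_bdev.c code.

/*
 * Hypothetical sketch (not SPDK's actual nvmf_bdev_ctrlr_read_cmd): shows the
 * kind of length validation that emits the error above. A read command is
 * rejected when the bytes it would transfer (number of logical blocks, NLB,
 * times the block size) exceed the SGL buffer length sent with the command.
 */
#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool
read_cmd_length_is_valid(uint64_t nlb, uint32_t block_size, uint64_t sgl_length)
{
	uint64_t required = nlb * (uint64_t)block_size; /* bytes the read needs */

	if (required > sgl_length) {
		fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu32
			" > SGL length %" PRIu64 "\n", nlb, block_size, sgl_length);
		return false;
	}
	return true;
}

int
main(void)
{
	/* Mirrors the case in the log: 1 block of 512 bytes vs. a 1-byte SGL. */
	return read_cmd_length_is_valid(1, 512, 1) ? 0 : 1;
}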
length 1 00:05:39.071 [2024-10-11 11:42:41.652674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.652702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.652734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.652761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.652789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.652820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.652849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.652878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.652907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.652938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.652968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.652996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.653024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.653057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.653090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.653121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.653148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.653178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.653205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.653229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.653258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.653287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.653322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.653349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.653376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.653407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.653436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.653466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.653491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.653528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.653556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.653584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.653615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.653649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.653677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.071 [2024-10-11 11:42:41.653708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.653735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.653765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.653794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.653823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.653852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.653881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.653911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.653943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.654079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.654109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.654137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.654166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.654195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.654222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.654247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.654278] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.654311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.654338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.654366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.654394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.654420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.654449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.654477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.654504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.654528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.654830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.654866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.654895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.654925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.654960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.654992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.655020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.655046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.655081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.655114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.655146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.655178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.655202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.655225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.655249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.655273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 
[2024-10-11 11:42:41.655297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.655322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.655346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.655369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.655395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.655424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.655451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.655481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.655512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.655541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.655569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.655595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.655628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.655658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.655687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.655717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.655747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.655777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.655805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.655832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.655860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.655887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.655917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.655947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.655977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.656008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.656037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.656068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.656099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.656128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.656516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.656550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.656581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.656610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.656637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.656664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.656693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.656731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.656764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.656802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.656838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.656865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.656890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.656922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.656952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.656985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.657019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.657051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.657081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.657117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.657145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.657175] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.657202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.657230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.657257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.657286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.657313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.657342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.657372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.657403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.657430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.657460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.657494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.072 [2024-10-11 11:42:41.657523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.657552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.657583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.657619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.657648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.657676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.657712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.657743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.657775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.657803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.657831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.657860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.657890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.657922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 
[2024-10-11 11:42:41.657954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.657990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.658020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.658051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.658088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.658120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.658149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.658180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.658208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.658238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.658268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.658296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.658323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.658351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.658381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.658416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.658446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.658580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.658609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.658638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.658674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.658703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.658735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.658762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.658790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.658818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.658843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.658876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.658908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.658947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.658981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.659016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.659049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.659080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.659569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.659606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.659635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.659665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.659697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.659721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.659748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.659778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.659807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.659833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.659864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.659891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.659921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.659953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.659980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.660004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.660036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.660071] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.660100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.660132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.660160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.660187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.660216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.660244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.660270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.660293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.660317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.660349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.660378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.660409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.660440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.660468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.660498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.660528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.660559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.660589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.660620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.660647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.660676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.660708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.660732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.660760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.660788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 
[2024-10-11 11:42:41.660815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.660846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.660875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.660904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.660933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.660963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.660999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.661026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.661056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.661089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.661118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.661147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.661177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.661215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.073 [2024-10-11 11:42:41.661245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.661271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.661302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.661336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.661364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.661394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.661426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.661555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.661586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.661614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.661646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.661677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.661708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.661738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.661767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.661795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.661826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.661856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.661885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.661912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.661940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.661967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.661996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.662025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.662056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.662118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.662151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.662179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.662210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.662238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.662266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.662294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.662327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.662356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.662386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.662414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.662447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.662476] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.662511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.662543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.662577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.662608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.662645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.662676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.662704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.662733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.662762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.662789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.662816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.662850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.662879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.662906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.662937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.663256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.663284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.663312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.663340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.663368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.663403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.663435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.663471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 11:42:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:39.074 [2024-10-11 11:42:41.663502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 
11:42:41.663530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.663558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.663588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.663615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.663647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.663682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.663715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.663744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.663775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.663802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.663830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.663859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 11:42:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:39.074 [2024-10-11 11:42:41.663882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.663914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.663942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.663976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.664011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.664042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.664073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.664100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.664128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.664154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.664182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.664207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.074 [2024-10-11 11:42:41.664241] ctrlr_bdev.c: 
[... ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1, repeated continuously from 11:42:41.664241 to 11:42:41.669620 ...]
00:05:39.076 [2024-10-11 11:42:41.669646] ctrlr_bdev.c:
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.669677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.669710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.669740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.669881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.669912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.669947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.669979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.670009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.670038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.670075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.670110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.670140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.670169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.670200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.670227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.670253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.670283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.670315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.670346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.670380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.671142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.671178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.671211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.671246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.671274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 
[2024-10-11 11:42:41.671303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.671335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.671360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.671388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.671420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.671451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.671484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.671513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.671542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.671572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.671608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.671639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.671673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.671708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.671739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.671770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.671803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.671832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.671862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.671892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.671954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.671984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.672016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.672044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.672078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.672106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.672141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.672169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.672205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.672231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.672259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.672288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.672316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.672348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.076 [2024-10-11 11:42:41.672375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.672405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.672433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.672461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.672497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.672530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.672558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.672589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.672630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.672663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.672691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.672721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.672746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.672772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.672809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.672837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.672872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.672901] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.672933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.672960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.672993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.673020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.673051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.673086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.673302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.673332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.673365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.673393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.673422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.673451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.673481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.673511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.673541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.673570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.673607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.673640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.673691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.673724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.673766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.673795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.673827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.673854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.673883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 
[2024-10-11 11:42:41.673911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.673941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.673969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.674025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.674056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.674095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.674126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.674158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.674190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.674219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.674251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.674280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.674311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.674342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.674371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.674401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.674429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.674460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.674488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.674521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.674555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.674587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.674618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.674647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.674677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.674710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.674745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.674774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.674803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.674830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.674856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.674883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.674912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.674942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.674979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.675010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.675039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.675071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.675110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.675139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.675167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.675196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.675221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.675252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.675286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.675665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.675698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.675730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.675762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.675793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.675823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.675853] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.675883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.675909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.675941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.675974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.675999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.676034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.676060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.676089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.676113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.676137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.077 [2024-10-11 11:42:41.676161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.676187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.676212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.676244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.676274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.676306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.676339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.676369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.676401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.676430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.676460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.676490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.676524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.676554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.676585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 
[2024-10-11 11:42:41.676612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.676644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.676669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.676695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.676719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.676743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.676767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.676793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.676817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.676842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.676867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.676892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.676916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.676942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.676967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.676991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.677016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.677041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.677070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.677096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.677122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.677147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.677175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.677208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.677240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.677275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.677305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.677335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.677364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.677394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.677426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.677781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.677814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.677843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.677873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.677903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.677936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.677964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.677994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.678026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.678055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.678099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.678132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.678165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.678196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.678226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.678257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.678287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.678317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.678346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.678375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.678404] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.678437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.678468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.678498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.678532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.678561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.678595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.678625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.678658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.678687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.678716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.678747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.678777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.678820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.678853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.678881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.678910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.678940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.678977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.679007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.679037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.679076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.679108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.679143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.679173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.679207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 
[2024-10-11 11:42:41.679238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.078 [2024-10-11 11:42:41.679266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.679296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.679325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.679356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.679389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.679420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.679450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.679483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.679518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.679544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.679576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.679605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.679636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.679667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.679698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.679728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.679768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.680117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.680149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.680187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.680214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.680242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.680276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.680309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.680338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.680370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.680404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.680435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.680462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.680493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.680524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.680554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.680584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.680614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.680643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.680672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.680702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.680732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.680769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.680800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.680830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.680865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.680897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.680934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.680965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.680995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.681025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.681055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.681093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.681123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.681151] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.681177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.681207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.681236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.681264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.681293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.681324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.681361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.681392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.681419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.681449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.681482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.681507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.681538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.681566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.681595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.681626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.681660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.681689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.681724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.681755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.681787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.681817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.681849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.681880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 [2024-10-11 11:42:41.681912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.079 
[2024-10-11 11:42:41.681944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same "Read NLB 1 * block size 512 > SGL length 1" error line repeated continuously from 11:42:41.681976 through 11:42:41.689176; duplicate entries omitted ...]
00:05:39.081 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[... the same "Read NLB 1 * block size 512 > SGL length 1" error line repeated continuously from 11:42:41.689757 through 11:42:41.702230; duplicate entries omitted ...]
00:05:39.085 [2024-10-11 11:42:41.702265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.702295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.702323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.702357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.702391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.702419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.702450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.702482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.702512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.702542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.702572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.702607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.702637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.702665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.702694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.702734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.702764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.702790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.702823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.702855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.702887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.702918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.702950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.702981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.703010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.703042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.703082] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.703114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.703144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.703173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.703202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.703232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.703275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.703306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.703336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.703378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.703411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.703442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.703474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.703503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.703532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.703564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.703596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.703629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.703660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.703689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.703722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.703752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.703783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.703814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.703848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.703881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 
[2024-10-11 11:42:41.703911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.703944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.703975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.704555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.704588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.704614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.704646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.704678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.704708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.704752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.704782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.704812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.704841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.704881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.704912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.704941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.704970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.705002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.705029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.705066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.705098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.705129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.705157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.705193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.705221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.705251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.705280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.705319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.705348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.705374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.705406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.705438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.705469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.705500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.705533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.705562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.705594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.705625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.085 [2024-10-11 11:42:41.705656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.705686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.705719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.705750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.705781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.705813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.705843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.705888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.705918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.705947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.705978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.706009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.706043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.706080] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.706112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.706142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.706172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.706202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.706233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.706262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.706293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.706326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.706360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.706392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.706423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.706458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.706486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.706516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.706545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.706671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.706702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.706733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.706761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.706792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.706822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.706855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.706884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.706914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.706944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 
[2024-10-11 11:42:41.706974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.707007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.707035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.707060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.707095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.707128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.707398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.707431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.707457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.707488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.707518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.707555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.707586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.707613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.707642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.707679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.707707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.707737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.707766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.707795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.707825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.707856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.707891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.707920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.707951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.707978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.708010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.708042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.708077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.708104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.708136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.708165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.708199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.708227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.708284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.708317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.708346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.708381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.708410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.708444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.708474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.708503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.708536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.708567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.708600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.708632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.708662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.708695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.708725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.708754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.708789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.708817] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.708847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.708880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.708916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.708947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.708977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.709012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.709042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.709077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.709107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.709137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.086 [2024-10-11 11:42:41.709194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.709224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.709253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.709282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.709315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.709343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.709373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.709402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.709600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.709634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.709664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.709699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.709730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.709759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.709790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 
[2024-10-11 11:42:41.709829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.709862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.709895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.709925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.709956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.709982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.710016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.710051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.710089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.710118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.710148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.710182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.710214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.710245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.710274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.710315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.710341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.710372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.710404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.710436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.710468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.710504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.710534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.710568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.710598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.710625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.710655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.710692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.710725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.710761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.710791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.710822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.710850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.710881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.710912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.710943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.710979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.711007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.711054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.711089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.711118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.711152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.711181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.711215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.711245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.711275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.711309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.711340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.711372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.711400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.711431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.711476] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.711506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.711535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.711575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.711609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.711962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.711998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.712030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.712058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.712094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.712127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.712154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.712184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.712215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.712248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.712280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.712309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.712338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.712374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.712405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.712432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.712462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.712490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.712518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.087 [2024-10-11 11:42:41.712548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.088 [2024-10-11 11:42:41.712582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.088 
[2024-10-11 11:42:41.712615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.088 [2024-10-11 11:42:41.712644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.088 [2024-10-11 11:42:41.712676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.088 [2024-10-11 11:42:41.712706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.088 [2024-10-11 11:42:41.712734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.088 [2024-10-11 11:42:41.712776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.088 [2024-10-11 11:42:41.712807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.088 [2024-10-11 11:42:41.712833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.088 [2024-10-11 11:42:41.712861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.088 [2024-10-11 11:42:41.712887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.088 [2024-10-11 11:42:41.712917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.088 [2024-10-11 11:42:41.712948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.088 [2024-10-11 11:42:41.712981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.088 [2024-10-11 11:42:41.713014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.088 [2024-10-11 11:42:41.713046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.088 [2024-10-11 11:42:41.713080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.088 [2024-10-11 11:42:41.713113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.088 [2024-10-11 11:42:41.713143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.088 [2024-10-11 11:42:41.713172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.088 [2024-10-11 11:42:41.713201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.088 [2024-10-11 11:42:41.713233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.088 [2024-10-11 11:42:41.713267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.088 [2024-10-11 11:42:41.713299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.088 [2024-10-11 11:42:41.713326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.088 [2024-10-11 11:42:41.713360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.088 [2024-10-11 11:42:41.713393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.088 [2024-10-11 11:42:41.713422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.088 [2024-10-11 11:42:41.713451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.088 [2024-10-11 11:42:41.713484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.088 [2024-10-11 11:42:41.713521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.088 [2024-10-11 11:42:41.713554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.088 [2024-10-11 11:42:41.713584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.088 [2024-10-11 11:42:41.713623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.088 [2024-10-11 11:42:41.713653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.088 [2024-10-11 11:42:41.713681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.088 [2024-10-11 11:42:41.713710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.088 [2024-10-11 11:42:41.713743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.088 [2024-10-11 11:42:41.713774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.088 [2024-10-11 11:42:41.713804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.088 [2024-10-11 11:42:41.713837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.088 [2024-10-11 11:42:41.713870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.088 [2024-10-11 11:42:41.713900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.088 [2024-10-11 11:42:41.713934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.088 [2024-10-11 11:42:41.714522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.088 [2024-10-11 11:42:41.714557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.088 [2024-10-11 11:42:41.714588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.088 [2024-10-11 11:42:41.714617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.088 [2024-10-11 11:42:41.714642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.088 [2024-10-11 11:42:41.714671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.088 [2024-10-11 11:42:41.714704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.088 [2024-10-11 11:42:41.714732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.088 [2024-10-11 11:42:41.714763] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.088 
[... the same ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd SGL-length read error repeats continuously from 11:42:41.714793 through 11:42:41.734571 (00:05:39.088-00:05:39.093); repeated entries omitted ...] 
Message suppressed 999 times: [2024-10-11 11:42:41.726921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.091 
Read completed with error (sct=0, sc=15) 00:05:39.091 
[2024-10-11 11:42:41.734571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.093 [2024-10-11 11:42:41.734599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.093 [2024-10-11 11:42:41.734629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.093 [2024-10-11 11:42:41.734671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.093 [2024-10-11 11:42:41.734701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.093 [2024-10-11 11:42:41.734730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.093 [2024-10-11 11:42:41.734761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.093 [2024-10-11 11:42:41.734792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.093 [2024-10-11 11:42:41.734833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.093 [2024-10-11 11:42:41.734868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.093 [2024-10-11 11:42:41.734898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.093 [2024-10-11 11:42:41.734932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.093 [2024-10-11 11:42:41.734965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.093 [2024-10-11 11:42:41.734996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.093 [2024-10-11 11:42:41.735023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.093 [2024-10-11 11:42:41.735053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.093 [2024-10-11 11:42:41.735085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.093 [2024-10-11 11:42:41.735118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.093 [2024-10-11 11:42:41.735153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.093 [2024-10-11 11:42:41.735181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.093 [2024-10-11 11:42:41.735213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.093 [2024-10-11 11:42:41.735242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.735282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.735315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.735345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.735381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.735413] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.735442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.735469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.735494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.735523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.735561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.735591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.735622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.735650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.735692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.735724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.735756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.735784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.735814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.735846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.735878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.736488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.736519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.736555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.736584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.736614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.736643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.736672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.736722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.736753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.736783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 
[2024-10-11 11:42:41.736811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.736842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.736873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.736906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.736935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.736969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.737000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.737030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.737059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.737092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.737124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.737152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.737181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.737213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.737240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.737268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.737303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.737338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.737367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.737395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.737424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.737454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.737491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.737523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.737556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.737583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.737617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.737650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.737682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.737715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.737752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.737784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.737814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.737848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.737874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.737904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.737934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.737965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.737999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.738030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.738066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.738105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.738136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.738169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.738202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.738230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.738267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.738296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.738324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.738359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.738390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.738439] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.738469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.738499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.738666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.738699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.738730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.738765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.738795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.738826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.738872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.738903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.738933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.738962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.738992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.739029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.739061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.739096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.739139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.094 [2024-10-11 11:42:41.739171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.739201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.739234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.739267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.739298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.739337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.739364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.739397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 
[2024-10-11 11:42:41.739426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.739456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.739487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.739518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.739555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.739584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.739616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.739648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.739675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.739699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.739731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.739764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.739796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.739836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.739869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.739900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.739929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.739967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.739997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.740026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.740056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.740090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.740120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.740150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.740182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.740213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.740245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.740275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.740306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.740335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.740361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.740386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.740416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.740446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.740476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.740508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.740541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.740575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.740604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.740636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.740988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.741026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.741059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.741093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.741127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.741158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.741190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.741222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.741255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.741286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.741316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.741347] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.741381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.741412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.741443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.741478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.741507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.741541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.741572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.741604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.741649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.741679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.741713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.741741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.741772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.741803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.741836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.741875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.741905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.741936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.741968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.741995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.742022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.742059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.742095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.742123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.742155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 
[2024-10-11 11:42:41.742184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.742220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.742253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.742283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.742315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.742355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.095 [2024-10-11 11:42:41.742387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.742418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.742446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.742476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.742507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.742538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.742566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.742599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.742629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.742666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.742696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.742724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.742751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.742782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.742817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.742848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.742879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.742910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.742940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.742972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.743003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.743369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.743409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.743441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.743471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.743507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.743537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.743569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.743603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.743633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.743673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.743702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.743730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.743760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.743790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.743826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.743855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.743889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.743919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.743947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.743977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.744004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.744038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.744078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.744108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.744136] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.744165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.744192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.744229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.744258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.744288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.744314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.744350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.744382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.744412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.744443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.744474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.744505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.744536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.744566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.744595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.744628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.744655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.744700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.744731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.744763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.744794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.744824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.744875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.744903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.744936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 
[2024-10-11 11:42:41.744967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.744997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.745028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.745059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.745100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.745134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.745163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.745216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.745246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.745276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.745307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.745336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.745370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.745731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.745760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.745789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.745824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.745856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.745882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.745912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.745940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.745968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.745995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.746025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.746056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.746092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.746118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.746144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.746170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.746200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.746229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.096 [2024-10-11 11:42:41.746258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.097 [2024-10-11 11:42:41.746288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.097 [2024-10-11 11:42:41.746320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.097 [2024-10-11 11:42:41.746353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.097 [2024-10-11 11:42:41.746385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.097 [2024-10-11 11:42:41.746412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.097 [2024-10-11 11:42:41.746447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.097 [2024-10-11 11:42:41.746478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.097 [2024-10-11 11:42:41.746507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.097 [2024-10-11 11:42:41.746542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.097 [2024-10-11 11:42:41.746571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.097 [2024-10-11 11:42:41.746598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.097 [2024-10-11 11:42:41.746629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.097 [2024-10-11 11:42:41.746658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.097 [2024-10-11 11:42:41.746691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.097 [2024-10-11 11:42:41.746722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.097 [2024-10-11 11:42:41.746753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.097 [2024-10-11 11:42:41.746787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.097 [2024-10-11 11:42:41.746818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.097 [2024-10-11 11:42:41.746847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.097 [2024-10-11 11:42:41.746877] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.097 
[duplicate log output omitted: the identical "ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" message repeats continuously from 2024-10-11 11:42:41.746906 to 11:42:41.766849 (elapsed 00:05:39.097 - 00:05:39.394); the run also contains one occurrence of "Message suppressed 999 times: Read completed with error (sct=0, sc=15)" at 00:05:39.393] 
[2024-10-11 11:42:41.766883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.394 [2024-10-11 11:42:41.766917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.394 [2024-10-11 11:42:41.766951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.394 [2024-10-11 11:42:41.766977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.394 [2024-10-11 11:42:41.767010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.394 [2024-10-11 11:42:41.767039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.394 [2024-10-11 11:42:41.767072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.394 [2024-10-11 11:42:41.767104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.394 [2024-10-11 11:42:41.767133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.394 [2024-10-11 11:42:41.767166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.394 [2024-10-11 11:42:41.767198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.394 [2024-10-11 11:42:41.767228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.394 [2024-10-11 11:42:41.767910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.394 [2024-10-11 11:42:41.767944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.394 [2024-10-11 11:42:41.767974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.394 [2024-10-11 11:42:41.768009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.394 [2024-10-11 11:42:41.768042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.768077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.768110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.768140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.768171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.768206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.768237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.768278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.768305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.768335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.768366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.768398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.768428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.768460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.768492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.768521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.768554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.768584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.768616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.768650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.768684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.768716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.768747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.768779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.768809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.768839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.768870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.768904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.768935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.768967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.768998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.769031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.769067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.769096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.769127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.769165] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.769194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.769225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.769256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.769290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.769321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.769355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.769388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.769418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.769452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.769485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.769519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.769549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.769580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.769609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.769641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.769670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.769703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.769737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.769768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.769798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.769824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.769856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.769887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.770021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.770055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 
[2024-10-11 11:42:41.770095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.770131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.770162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.770193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.770226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.770256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.770286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.770323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.770355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.770385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.770416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.770448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.770480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.770522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.770554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.770584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.770620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.770651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.770683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.770714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.770745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.770790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.770818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.770849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.770882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.770915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.770948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.770978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.771010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.771042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.771076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.771108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.771155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.395 [2024-10-11 11:42:41.771184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.771214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.771269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.771300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.771330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.771377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.771406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.771436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.771469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.771499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.771531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.771561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.771595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.771627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.771658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.771684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.771715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.771748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.771775] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.771804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.771843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.771872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.771903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.771940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.771970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.772002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.772043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.772072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.772106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.772857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.772892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.772923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.772956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.772989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.773024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.773055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.773089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.773122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.773153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.773185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.773218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.773248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.773284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.773319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 
[2024-10-11 11:42:41.773350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.773387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.773418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.773449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.773477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.773507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.773543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.773573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.773605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.773634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.773666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.773696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.773725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.773757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.773789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.773820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.773851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.773882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.773912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.773945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.773976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.774003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.774029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.774065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.774101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.774132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.774161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.774190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.774221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.774250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.774283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.774314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.774347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.774382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.774417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.774453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.774488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.774519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.774548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.774581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.774611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.774644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.774674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.774706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.774740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.774772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.774806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.774839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.774984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.775037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.775074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.775107] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.775168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.775200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.775239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.775271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.775300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.775340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.775370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.775400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.775429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.775463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.396 [2024-10-11 11:42:41.775492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.775521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.775559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.775592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.775628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.775657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.775688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.775718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.775750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.775782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.775812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.775842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.775874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.775903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.775932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 
[2024-10-11 11:42:41.775967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.775998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.776030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.776060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.776097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.776126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.776159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.776189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.776220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.776251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.776280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.776311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.776344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.776373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.776405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.776435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.776465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.776497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.776530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.776563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.776594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.776625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.776654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.776684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.776714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.776743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.776776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.776808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.776838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.776871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.776903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.776933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.776961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.776994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.777025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.777641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.777678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.777711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.777741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.777777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.777809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.777839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.777868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.777899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.777928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.777965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.778000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.778032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.778070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.778105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.778143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.778174] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.778206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.778246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.778278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.778307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.778345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.778377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.778408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.778442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.778473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.778505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.778538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.778571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.778605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.778638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.778666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.778705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.778735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.778765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.778799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.778825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.778857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.778893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.778924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.778957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.778986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 
[2024-10-11 11:42:41.779024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.779054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.779088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.779119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.779150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.779181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.779214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.779244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.779275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.779310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.779340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.779373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.779403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.779437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.397 [2024-10-11 11:42:41.779469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.398 [2024-10-11 11:42:41.779506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.398 [2024-10-11 11:42:41.779539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.398 [2024-10-11 11:42:41.779572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.398 [2024-10-11 11:42:41.779605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.398 [2024-10-11 11:42:41.779638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.398 [2024-10-11 11:42:41.779670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.398 [2024-10-11 11:42:41.779814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.398 [2024-10-11 11:42:41.779845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.398 [2024-10-11 11:42:41.779885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.398 [2024-10-11 11:42:41.779916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.398 [2024-10-11 11:42:41.779950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.398 [2024-10-11 11:42:41.779980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:05:39.398 ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (same message repeated continuously from [2024-10-11 11:42:41.780010] through [2024-10-11 11:42:41.800853])
00:05:39.403 [2024-10-11 11:42:41.800885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:05:39.403 [2024-10-11 11:42:41.800925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.403 [2024-10-11 11:42:41.800959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.403 [2024-10-11 11:42:41.800991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.403 [2024-10-11 11:42:41.801022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.403 [2024-10-11 11:42:41.801054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.403 [2024-10-11 11:42:41.801094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.403 [2024-10-11 11:42:41.801120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.403 [2024-10-11 11:42:41.801150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.403 [2024-10-11 11:42:41.801180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.403 [2024-10-11 11:42:41.801211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.403 [2024-10-11 11:42:41.801241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.403 [2024-10-11 11:42:41.801273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.403 [2024-10-11 11:42:41.801306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.403 [2024-10-11 11:42:41.801338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.403 [2024-10-11 11:42:41.801370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.403 [2024-10-11 11:42:41.801404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.403 [2024-10-11 11:42:41.801437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.403 [2024-10-11 11:42:41.801467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.403 [2024-10-11 11:42:41.801499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.403 [2024-10-11 11:42:41.801535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.403 [2024-10-11 11:42:41.801567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.403 [2024-10-11 11:42:41.801596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.403 [2024-10-11 11:42:41.801626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.403 [2024-10-11 11:42:41.801661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.403 [2024-10-11 11:42:41.801691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.403 [2024-10-11 11:42:41.801723] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.403 [2024-10-11 11:42:41.801753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.403 [2024-10-11 11:42:41.801785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.403 [2024-10-11 11:42:41.801815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.403 [2024-10-11 11:42:41.801847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.403 [2024-10-11 11:42:41.801879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.403 [2024-10-11 11:42:41.801908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.403 [2024-10-11 11:42:41.801940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.403 [2024-10-11 11:42:41.801971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.403 [2024-10-11 11:42:41.802030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.403 [2024-10-11 11:42:41.802067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.403 [2024-10-11 11:42:41.802102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.403 [2024-10-11 11:42:41.802133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.403 [2024-10-11 11:42:41.802168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.403 [2024-10-11 11:42:41.802199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.403 [2024-10-11 11:42:41.802229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.403 [2024-10-11 11:42:41.802263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.403 [2024-10-11 11:42:41.802294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.403 [2024-10-11 11:42:41.802327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.403 [2024-10-11 11:42:41.802358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.403 [2024-10-11 11:42:41.802490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.403 [2024-10-11 11:42:41.802531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.403 [2024-10-11 11:42:41.802563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.403 [2024-10-11 11:42:41.802592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.403 [2024-10-11 11:42:41.802630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.403 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:05:39.403 [2024-10-11 11:42:41.802659] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.403 [2024-10-11 11:42:41.802688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.403 [2024-10-11 11:42:41.802724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.403 [2024-10-11 11:42:41.802755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.403 [2024-10-11 11:42:41.802785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.403 [2024-10-11 11:42:41.802825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.403 [2024-10-11 11:42:41.802858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.802890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.802918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.802946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.803008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.803040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.803072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.803100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.803127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.803157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.803194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.803223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.803253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.803286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.803316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.803354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.803387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.803417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.803455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.803483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 
[2024-10-11 11:42:41.803512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.803547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.803578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.803616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.803647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.803676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.803705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.803739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.803776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.803804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.803835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.803866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.803895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.803929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.803957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.803988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.804433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.804469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.804503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.804532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.804561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.804592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.804622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.804656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.804689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.804719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.804751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.804781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.804811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.804841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.804875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.804908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.804940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.804984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.805018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.805051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.805089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.805121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.805152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.805217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.805246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.805280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.805311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.805344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.805375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.805407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.805437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.805470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.805501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.805532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.805561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.805589] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.805625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.805658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.805690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.805721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.805754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.805785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.805821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.805850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.805880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.805908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.805938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.805969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.806003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.806031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.806065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.806104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.806133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.806165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.806205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.806232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.806263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.806293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.806327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.806360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.806393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 
[2024-10-11 11:42:41.806426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.806457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.806488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.806631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.806662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.806693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.806722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.404 [2024-10-11 11:42:41.806754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.806800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.806831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.806863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.806901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.806933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.806962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.807000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.807033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.807068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.807113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.807144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.807578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.807613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.807643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.807673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.807710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.807741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.807775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.807806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.807836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.807879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.807909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.807940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.807969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.808002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.808035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.808068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.808099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.808141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.808169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.808199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.808229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.808261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.808289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.808324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.808360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.808389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.808418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.808457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.808489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.808519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.808557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.808585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.808621] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.808649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.808676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.808704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.808733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.808762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.808797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.808829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.808860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.808895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.808929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.808958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.808997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.809026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.809056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.809092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.809122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.809154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.809187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.809218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.809253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.809286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.809316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.809348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.809378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.809408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 
[2024-10-11 11:42:41.809442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.809474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.809508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.809539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.809569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.809599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.809732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.809764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.809796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.809827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.809858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.809895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.809926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.809959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.809998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.810032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.810067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.810106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.810142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.810172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.810200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.810235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.810269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.810297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.810326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.810358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.810389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.810426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.810457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.810489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.810519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.810552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.810582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.810611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.810642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.810678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.405 [2024-10-11 11:42:41.810708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.810741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.810769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.810801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.810832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.810864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.810899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.810933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.810959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.810994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.811028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.811056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.811099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.811127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.811160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.811193] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.811226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.812088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.812120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.812161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.812191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.812221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.812255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.812284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.812315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.812347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.812378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.812415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.812446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.812478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.812532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.812563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.812596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.812634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.812666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.812698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.812730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.812763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.812795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.812825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.812856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 
[2024-10-11 11:42:41.812885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.812913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.812946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.812978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.813009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.813037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.813071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.813102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.813138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.813169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.813198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.813227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.813258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.813287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.813316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.813349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.813377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.813409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.813440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.813470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.813505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.813537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.813570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.813608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.813639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.406 [2024-10-11 11:42:41.813670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
00:05:39.406 [2024-10-11 11:42:41.813711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical "Read NLB 1 * block size 512 > SGL length 1" errors from ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd, logged between 2024-10-11 11:42:41.813711 and 11:42:41.823498 (wall clock 00:05:39.406-00:05:39.409), condensed for readability ...]
00:05:39.409 true
[... identical "Read NLB 1 * block size 512 > SGL length 1" errors from ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd, logged between 2024-10-11 11:42:41.823528 and 11:42:41.833368 (wall clock 00:05:39.409-00:05:39.411), condensed for readability ...]
00:05:39.411 [2024-10-11 11:42:41.833399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:05:39.411 [2024-10-11 11:42:41.833429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.411 [2024-10-11 11:42:41.833459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.411 [2024-10-11 11:42:41.833498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.411 [2024-10-11 11:42:41.833531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.411 [2024-10-11 11:42:41.833560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.411 [2024-10-11 11:42:41.833587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.411 [2024-10-11 11:42:41.833623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.411 [2024-10-11 11:42:41.833658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.411 [2024-10-11 11:42:41.833690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.411 [2024-10-11 11:42:41.833720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.411 [2024-10-11 11:42:41.833751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.411 [2024-10-11 11:42:41.833784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.411 [2024-10-11 11:42:41.833814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.411 [2024-10-11 11:42:41.833847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.411 [2024-10-11 11:42:41.833877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.833909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.834276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.834312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.834343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.834379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.834409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.834438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.834478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.834507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.834542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.834571] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.834601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.834634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.834665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.834696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.834725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.834755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.834787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.834818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.834849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.834884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.834914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.834943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.834971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.835008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.835040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.835072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.835101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.835131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.835158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.835200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.835230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.835260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.835293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.835323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.835351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 
[2024-10-11 11:42:41.835381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.835405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.835436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.835464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.835496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.835528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.835560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.835595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.835628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.835658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.835687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.835725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.835754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.835784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.835824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.835856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.835887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.835925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.835954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.836004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.836034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.836067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.836097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.836129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.836162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.836191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.836222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.836252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.836285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.836641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.836671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.836704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.836734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.836761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.836790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.836824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.836853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.836881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.836909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.836947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.836976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.837005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.837034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.837077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.837108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.837136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.837166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.837199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.837231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.837260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.837294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.837323] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.837353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.837387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.837418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.837448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.837481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.837511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.412 [2024-10-11 11:42:41.837541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.413 [2024-10-11 11:42:41.837571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.413 [2024-10-11 11:42:41.837606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.413 [2024-10-11 11:42:41.837634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.413 [2024-10-11 11:42:41.837666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.413 [2024-10-11 11:42:41.837697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.413 [2024-10-11 11:42:41.837726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.413 [2024-10-11 11:42:41.837756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.413 [2024-10-11 11:42:41.837784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.413 [2024-10-11 11:42:41.837816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.413 [2024-10-11 11:42:41.837847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.413 [2024-10-11 11:42:41.837882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.413 [2024-10-11 11:42:41.837914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.413 [2024-10-11 11:42:41.837942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.413 [2024-10-11 11:42:41.837975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.413 [2024-10-11 11:42:41.838005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.413 [2024-10-11 11:42:41.838034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.413 [2024-10-11 11:42:41.838070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.413 [2024-10-11 11:42:41.838105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.413 
[2024-10-11 11:42:41.838139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.413 [2024-10-11 11:42:41.838169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.413 [2024-10-11 11:42:41.838213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.413 [2024-10-11 11:42:41.838244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.413 [2024-10-11 11:42:41.838275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.413 [2024-10-11 11:42:41.838305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.413 [2024-10-11 11:42:41.838335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.413 [2024-10-11 11:42:41.838365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.413 [2024-10-11 11:42:41.838396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.413 [2024-10-11 11:42:41.838425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.413 [2024-10-11 11:42:41.838481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.413 [2024-10-11 11:42:41.838511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.413 [2024-10-11 11:42:41.838543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.413 [2024-10-11 11:42:41.838580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.413 [2024-10-11 11:42:41.838609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.413 [2024-10-11 11:42:41.838959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.413 [2024-10-11 11:42:41.838995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.413 [2024-10-11 11:42:41.839025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.413 [2024-10-11 11:42:41.839054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.413 [2024-10-11 11:42:41.839089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.413 [2024-10-11 11:42:41.839128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.413 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:05:39.413 [2024-10-11 11:42:41.839157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.413 [2024-10-11 11:42:41.839183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.413 [2024-10-11 11:42:41.839214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.413 [2024-10-11 11:42:41.839244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.413 [2024-10-11 
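Every one of the suppressed errors is the same rejection: the Read command asks for NLB 1 block of 512 bytes, but the SGL attached to the request only describes 1 byte, so the read is rejected in nvmf_bdev_ctrlr_read_cmd instead of being submitted to the bdev, and the completions carry sct=0, sc=15 (0x0F), which corresponds to the NVMe generic status "Data SGL Length Invalid". The C sketch below only illustrates that transfer-length check; the struct and function names are invented for illustration and are not the actual SPDK source.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical request descriptor -- field names are illustrative only. */
struct read_req {
	uint64_t nlb;        /* number of logical blocks the Read asks for */
	uint32_t block_size; /* logical block size of the namespace, e.g. 512 */
	uint64_t sgl_length; /* total bytes described by the request's SGL */
};

/* Reject a read whose transfer length exceeds what the SGL can hold --
 * the condition the log reports as "Read NLB 1 * block size 512 > SGL length 1". */
static int validate_read_length(const struct read_req *req)
{
	uint64_t xfer_len = req->nlb * req->block_size;

	if (xfer_len > req->sgl_length) {
		fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu32
			" > SGL length %" PRIu64 "\n",
			req->nlb, req->block_size, req->sgl_length);
		return -1; /* would complete with sct=0, sc=0x0F (Data SGL Length Invalid) */
	}
	return 0;
}

int main(void)
{
	/* The values seen in this log: 1 block of 512 bytes against a 1-byte SGL. */
	struct read_req req = { .nlb = 1, .block_size = 512, .sgl_length = 1 };

	return validate_read_length(&req) ? 1 : 0;
}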
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.415 [2024-10-11 11:42:41.847843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.415 [2024-10-11 11:42:41.847873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.415 [2024-10-11 11:42:41.847901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.415 [2024-10-11 11:42:41.847931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.415 [2024-10-11 11:42:41.847972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.415 [2024-10-11 11:42:41.848003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.415 [2024-10-11 11:42:41.848034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.415 [2024-10-11 11:42:41.848061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.415 [2024-10-11 11:42:41.848096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.415 [2024-10-11 11:42:41.848131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.415 [2024-10-11 11:42:41.848160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.415 [2024-10-11 11:42:41.848189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.415 [2024-10-11 11:42:41.848945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.415 [2024-10-11 11:42:41.848978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.415 [2024-10-11 11:42:41.849010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.415 [2024-10-11 11:42:41.849039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.415 [2024-10-11 11:42:41.849075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.415 [2024-10-11 11:42:41.849108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.415 11:42:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1711804 00:05:39.415 [2024-10-11 11:42:41.849138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.415 [2024-10-11 11:42:41.849172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.415 [2024-10-11 11:42:41.849204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.415 [2024-10-11 11:42:41.849233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.415 [2024-10-11 11:42:41.849263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.415 [2024-10-11 11:42:41.849294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.415 [2024-10-11 
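The repeated *ERROR* line is the target's read-length validation in ctrlr_bdev.c firing on every read the stress test submits: the requested transfer (NLB 1 * block size 512 = 512 bytes) is larger than the 1-byte SGL the host supplied, so the command is failed instead of executed. A minimal Python sketch of that rule, exactly as the log message states it (illustrative only, not SPDK source; read_fits_sgl below is a made-up helper name):

# Illustrative sketch of the check behind "Read NLB 1 * block size 512 > SGL length 1".
# read_fits_sgl() is a hypothetical helper, not an SPDK API.
def read_fits_sgl(nlb: int, block_size: int, sgl_length: int) -> bool:
    """A read can only be serviced if NLB * block_size fits in the host-supplied data buffer (SGL)."""
    return nlb * block_size <= sgl_length

# The values printed in the log: 1 block of 512 bytes against a 1-byte SGL.
print(read_fits_sgl(nlb=1, block_size=512, sgl_length=1))  # False -> the command is rejected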
00:05:39.415 [2024-10-11 11:42:41.849138 .. 11:42:41.849511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (the same message continues to repeat)
00:05:39.415 11:42:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
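ns_hotplug_stress.sh@45 hot-removes namespace 1 from the subsystem while I/O is still in flight, which is the scenario this test exercises. scripts/rpc.py is a thin JSON-RPC client, so the command above corresponds roughly to the request sketched below (illustrative only, not taken from rpc.py; it assumes the target is listening on the default RPC Unix socket at /var/tmp/spdk.sock and uses simplified framing):

# Rough sketch of the JSON-RPC request behind
# "rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1".
import json
import socket

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "nvmf_subsystem_remove_ns",
    "params": {"nqn": "nqn.2016-06.io.spdk:cnode1", "nsid": 1},
}

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
    sock.connect("/var/tmp/spdk.sock")  # assumed default SPDK RPC socket path
    sock.sendall(json.dumps(request).encode())
    print(sock.recv(65536).decode())  # JSON response from the target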
00:05:39.416 [2024-10-11 11:42:41.849545 .. 11:42:41.866376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (the same message continues to repeat)
00:05:39.420 [2024-10-11 11:42:41.866408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:05:39.420 [2024-10-11 11:42:41.866437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.420 [2024-10-11 11:42:41.866467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.420 [2024-10-11 11:42:41.866497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.420 [2024-10-11 11:42:41.866527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.420 [2024-10-11 11:42:41.866558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.420 [2024-10-11 11:42:41.866589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.420 [2024-10-11 11:42:41.866619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.420 [2024-10-11 11:42:41.866664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.420 [2024-10-11 11:42:41.866692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.420 [2024-10-11 11:42:41.866725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.420 [2024-10-11 11:42:41.866754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.420 [2024-10-11 11:42:41.866785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.420 [2024-10-11 11:42:41.866813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.420 [2024-10-11 11:42:41.866841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.420 [2024-10-11 11:42:41.866876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.420 [2024-10-11 11:42:41.866904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.420 [2024-10-11 11:42:41.866937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.420 [2024-10-11 11:42:41.866966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.420 [2024-10-11 11:42:41.867008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.420 [2024-10-11 11:42:41.867040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.420 [2024-10-11 11:42:41.867074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.420 [2024-10-11 11:42:41.867101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.420 [2024-10-11 11:42:41.867131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.420 [2024-10-11 11:42:41.867165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.420 [2024-10-11 11:42:41.867193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.420 [2024-10-11 11:42:41.867229] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.420 [2024-10-11 11:42:41.867256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.420 [2024-10-11 11:42:41.867290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.420 [2024-10-11 11:42:41.867320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.420 [2024-10-11 11:42:41.867349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.420 [2024-10-11 11:42:41.867377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.420 [2024-10-11 11:42:41.867408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.420 [2024-10-11 11:42:41.867438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.420 [2024-10-11 11:42:41.867467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.420 [2024-10-11 11:42:41.867500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.420 [2024-10-11 11:42:41.867530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.420 [2024-10-11 11:42:41.867666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.420 [2024-10-11 11:42:41.867722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.420 [2024-10-11 11:42:41.867751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.420 [2024-10-11 11:42:41.867781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.420 [2024-10-11 11:42:41.867812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.420 [2024-10-11 11:42:41.867841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.420 [2024-10-11 11:42:41.867873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.420 [2024-10-11 11:42:41.867903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.420 [2024-10-11 11:42:41.867936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.420 [2024-10-11 11:42:41.867967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.420 [2024-10-11 11:42:41.867995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.420 [2024-10-11 11:42:41.868035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.420 [2024-10-11 11:42:41.868070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.420 [2024-10-11 11:42:41.868095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.420 [2024-10-11 11:42:41.868129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.420 
[2024-10-11 11:42:41.868161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.420 [2024-10-11 11:42:41.868191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.420 [2024-10-11 11:42:41.868617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.420 [2024-10-11 11:42:41.868655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.420 [2024-10-11 11:42:41.868686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.868721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.868755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.868785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.868816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.868846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.868876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.868904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.868935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.868970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.869002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.869030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.869055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.869096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.869127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.869158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.869186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.869217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.869249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.869282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.869314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.869360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.869390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.869419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.869452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.869481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.869511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.869547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.869581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.869611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.869643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.869674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.869706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.869736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.869765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.869795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.869827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.869861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.869891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.869923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.869953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.869985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.870019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.870050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.870088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.870123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.870159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.870190] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.870221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.870252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.870284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.870314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.870348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.870377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.870405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.870434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.870464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.870501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.870531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.870562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.870591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.870806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.870839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.870869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.870904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.870935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.870965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.871000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.871033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.871069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.871110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.871140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.871171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 
[2024-10-11 11:42:41.871206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.871236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.871270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.871299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.871330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.871362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.871392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.871426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.871460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.871488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.871518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.871546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.871583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.871613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.871642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.871674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.871706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.871736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.871766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.871794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.871823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.871851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.871888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.871915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.871945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.871980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.872010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.872036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.872072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.872103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.872132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.872165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.872196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.421 [2024-10-11 11:42:41.872225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.872260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.872292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.872323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.872353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.872383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.872415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.872445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.872477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.872509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.872539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.872578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.872609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.872638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.872670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.872702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.872734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.872768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.872798] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.873172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.873211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.873243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.873272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.873308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.873337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.873366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.873406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.873436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.873465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.873503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.873536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.873567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.873597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.873628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.873660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.873694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.873724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.873749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.873780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.873814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.873844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.873874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.873913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.873946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 
[2024-10-11 11:42:41.873977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.874005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.874042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.874084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.874115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.874144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.874174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.874207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.874235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.874266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.874305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.874336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.874366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.874395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.874436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.874463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.874497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.874528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.874560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.874592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.874627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.874662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.874695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.874727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.874759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.874788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.874821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.874851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.874883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.874914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.874946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.874977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.875007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.875037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.875070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.875103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.875135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.875166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.875738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.875770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.875797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.875828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.875858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.875891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.875920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.875953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.875989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.876018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.876051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.876085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.876120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.876151] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.876187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.876217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.876247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.876281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.876311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.876338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.876368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.876406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.422 [2024-10-11 11:42:41.876436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.876462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.876495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.876525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.876554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.876592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.876624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.876656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.876686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.876717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.876747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.876778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.876808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.876842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.876874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.876907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.876935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 
[2024-10-11 11:42:41.876968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.876999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.877032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.877059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.877094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.877147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.877176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.877207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.877239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.877271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.877303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.877334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.877366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.877396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.877425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.877455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.877486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.877518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.877548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.877577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.877606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.877659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.877690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.877720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.877751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.877885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.877917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.877952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.877985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:05:39.423 [2024-10-11 11:42:41.878022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.878054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.878093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.878123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.878165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.878196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.878226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.878258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.878290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.878318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.878343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.878375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.878405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.878701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.878737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.878769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.878800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.878831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.878860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.878894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.878926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.423 [2024-10-11 11:42:41.878956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:05:39.423 [2024-10-11 11:42:41.878987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:05:39.428 (the preceding *ERROR* line from ctrlr_bdev.c:361 nvmf_bdev_ctrlr_read_cmd repeats several hundred times between 11:42:41.878987 and 11:42:41.898525, identical apart from the microsecond timestamp; the repetitions are collapsed here)
[2024-10-11 11:42:41.898558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.428 [2024-10-11 11:42:41.898588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.428 [2024-10-11 11:42:41.898619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.428 [2024-10-11 11:42:41.898654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.428 [2024-10-11 11:42:41.898687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.428 [2024-10-11 11:42:41.898723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.428 [2024-10-11 11:42:41.898754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.428 [2024-10-11 11:42:41.898784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.428 [2024-10-11 11:42:41.898818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.428 [2024-10-11 11:42:41.898849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.428 [2024-10-11 11:42:41.898879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.428 [2024-10-11 11:42:41.898913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.428 [2024-10-11 11:42:41.898942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.428 [2024-10-11 11:42:41.898986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.428 [2024-10-11 11:42:41.899016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.899044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.899081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.899113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.899146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.899178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.899207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.899232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.899264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.899611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.899637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.899668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.899697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.899727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.899757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.899792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.899826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.899856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.899887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.899923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.899958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.899989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.900020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.900052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.900086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.900116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.900141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.900172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.900204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.900235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.900264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.900313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.900344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.900373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.900408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.900437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.900498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.900527] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.900563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.900593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.900622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.900654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.900684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.900713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.900750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.900780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.900813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.900844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.900875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.900908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.900939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.900972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.901006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.901039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.901073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.901104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.901136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.901167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.901197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.901228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.901269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.901299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.901329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 
[2024-10-11 11:42:41.901364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.901397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.901429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.901459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.901489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.901531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.901560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.901591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.901621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.901650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.901998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.902028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.902055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.902094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.902121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.902154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.902198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.902227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.902258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.902288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.902316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.902348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.902383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.902413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.902444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.902485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.902515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.902547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.902587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.902617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.902649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.902682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.429 [2024-10-11 11:42:41.902713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.902752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.902784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.902814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.902854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.902884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.902915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.902948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.902980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.903011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.903042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.903080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.903110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.903139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.903168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.903200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.903231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.903261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.903292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.903353] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.903383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.903414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.903451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.903480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.903513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.903543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.903575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.903635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.903667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.903697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.903734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.903765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.903800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.903830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.903859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.903888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.903918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.903959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.903987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.904018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.904046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.904441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.904475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.904503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.904536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 
[2024-10-11 11:42:41.904567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.904596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.904628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.904658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.904688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.904716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.904748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.904780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.904815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.904849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.904878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.904910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.904939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.904968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.904999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.905031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.905061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.905096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.905130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.905159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.905185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.905215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.905250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.905278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.905316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.905347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.905378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.905407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.905437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.905485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.905515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.905547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.905578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.905609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.905637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.905667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.905697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.905726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.905758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.905790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.905829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.905860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.905891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.905922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.905956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.905986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.906017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.906049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.906088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.906118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.906148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.906179] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.906213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.906246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.906276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.906306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.906337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.906370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.906400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.906431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.906796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.430 [2024-10-11 11:42:41.906830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.906861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.906892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.906927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.906958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.906992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.907024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.907054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.907086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.907114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.907147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.907178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.907209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.907240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.907270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.907304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 
[2024-10-11 11:42:41.907334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.907363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.907398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.907430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.907458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.907487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.907518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.907578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.907608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.907641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.907680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.907709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.907741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.907772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.907801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.907830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.907858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.907887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.907923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.907953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.907986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.908020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.908051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.908092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.908122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.908151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.908185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.908214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.908245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.908276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.908310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.908341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.908370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.908402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.908432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.908464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.908497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.908526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.908558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.908590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.908618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.908648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.908680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.908709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.908747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.908780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.909147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.909181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.909212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.909242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.909283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.909315] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.909346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.909386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.431 [2024-10-11 11:42:41.909420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.432 [2024-10-11 11:42:41.909451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.432 [2024-10-11 11:42:41.909481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.432 [2024-10-11 11:42:41.909510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.432 [2024-10-11 11:42:41.909540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.432 [2024-10-11 11:42:41.909572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.432 [2024-10-11 11:42:41.909618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.432 [2024-10-11 11:42:41.909649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.432 [2024-10-11 11:42:41.909679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.432 [2024-10-11 11:42:41.909711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.432 [2024-10-11 11:42:41.909744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.432 [2024-10-11 11:42:41.909778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.432 [2024-10-11 11:42:41.909808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.432 [2024-10-11 11:42:41.909838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.432 [2024-10-11 11:42:41.909871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.432 [2024-10-11 11:42:41.909901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.432 [2024-10-11 11:42:41.909933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.432 [2024-10-11 11:42:41.909966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.432 [2024-10-11 11:42:41.909997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.432 [2024-10-11 11:42:41.910029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.432 [2024-10-11 11:42:41.910066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.432 [2024-10-11 11:42:41.910103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.432 [2024-10-11 11:42:41.910142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.432 
[2024-10-11 11:42:41.910172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.432 [2024-10-11 11:42:41.910199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.432 [2024-10-11 11:42:41.910226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.432 [2024-10-11 11:42:41.910257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.432 [2024-10-11 11:42:41.910296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.432 [2024-10-11 11:42:41.910330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.432 [2024-10-11 11:42:41.910361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.432 [2024-10-11 11:42:41.910391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.432 [2024-10-11 11:42:41.910422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.432 [2024-10-11 11:42:41.910452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.432 [2024-10-11 11:42:41.910488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.432 [2024-10-11 11:42:41.910515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.432 [2024-10-11 11:42:41.910543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.432 [2024-10-11 11:42:41.910571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.432 [2024-10-11 11:42:41.910600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.432 [2024-10-11 11:42:41.910626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.432 [2024-10-11 11:42:41.910657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.432 [2024-10-11 11:42:41.910690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.432 [2024-10-11 11:42:41.910723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.432 [2024-10-11 11:42:41.910753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.432 [2024-10-11 11:42:41.910786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.432 [2024-10-11 11:42:41.910820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.432 [2024-10-11 11:42:41.910850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.432 [2024-10-11 11:42:41.910882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.432 [2024-10-11 11:42:41.910914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.432 [2024-10-11 11:42:41.910944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.432 [2024-10-11 11:42:41.910975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
[log condensed: the same *ERROR* line was emitted for every read completion from 11:42:41.911004 through 11:42:41.920696; repetitions removed] 
00:05:39.433 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 
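For context on the error being hammered above: the check at ctrlr_bdev.c:361 (nvmf_bdev_ctrlr_read_cmd) rejects a read whose requested transfer size, NLB * block size, exceeds the buffer length described by the command's SGL, and the suppressed completions report sct=0, sc=15 (0x0f), which is the NVMe generic status "Data SGL Length Invalid". The C sketch below only illustrates that validation under simplified assumptions; struct read_req, validate_read_length and their fields are hypothetical names, not SPDK's actual structures or API.

/*
 * Minimal sketch of the length validation that produces the repeated
 * "Read NLB ... * block size ... > SGL length ..." error. Illustrative only.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SC_DATA_SGL_LENGTH_INVALID 0x0f /* NVMe generic status code 15 */

/* Hypothetical, simplified request; real SPDK structures differ. */
struct read_req {
    uint64_t nlb;        /* number of logical blocks requested */
    uint32_t block_size; /* namespace block size in bytes */
    uint32_t sgl_length; /* buffer length described by the command's SGL */
    uint8_t  sct;        /* completion: status code type */
    uint8_t  sc;         /* completion: status code */
};

/* Returns true if the read passes the length check, false if it was failed. */
static bool validate_read_length(struct read_req *req)
{
    if (req->nlb * (uint64_t)req->block_size > req->sgl_length) {
        fprintf(stderr, "*ERROR*: Read NLB %lu * block size %u > SGL length %u\n",
                (unsigned long)req->nlb, (unsigned)req->block_size,
                (unsigned)req->sgl_length);
        req->sct = 0;                         /* generic command status */
        req->sc = SC_DATA_SGL_LENGTH_INVALID; /* reported as sc=15 */
        return false;
    }
    return true;
}

int main(void)
{
    /* Mirrors the logged case: 1 block of 512 bytes against a 1-byte SGL. */
    struct read_req req = { .nlb = 1, .block_size = 512, .sgl_length = 1 };
    if (!validate_read_length(&req)) {
        printf("Read completed with error (sct=%u, sc=%u)\n", req.sct, req.sc);
    }
    return 0;
}

Compiling and running this prints the same error shape as the log, and the command completes with sct=0, sc=15, matching the suppressed completion messages.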
[log condensed: the same *ERROR* line continued from 11:42:41.920727 through 11:42:41.931274; repetitions removed] 00:05:39.437 [2024-10-11 11:42:41.931309] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.437 [2024-10-11 11:42:41.931338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.437 [2024-10-11 11:42:41.931370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.437 [2024-10-11 11:42:41.931400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.437 [2024-10-11 11:42:41.931429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.437 [2024-10-11 11:42:41.931467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.437 [2024-10-11 11:42:41.931498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.437 [2024-10-11 11:42:41.931527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.437 [2024-10-11 11:42:41.931553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.437 [2024-10-11 11:42:41.931584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.437 [2024-10-11 11:42:41.931617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.437 [2024-10-11 11:42:41.931646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.437 [2024-10-11 11:42:41.931680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.437 [2024-10-11 11:42:41.931714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.437 [2024-10-11 11:42:41.931747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.437 [2024-10-11 11:42:41.931776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.437 [2024-10-11 11:42:41.931807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.931839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.931871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.931903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.931932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.931961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.931993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.932023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.932052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.932116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 
[2024-10-11 11:42:41.932146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.932176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.932204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.932235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.932265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.932296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.932329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.932360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.932387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.932419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.932449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.932486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.932518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.932548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.932581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.932612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.932648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.932677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.932706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.932738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.932768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.932800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.932829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.932863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.932892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.932922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.932951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.932979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.933018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.933047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.933079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.933111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.933235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.933271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.933303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.933339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.933370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.933402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.933433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.933467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.933497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.933523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.933552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.933580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.933612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.933642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.933673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.933701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.933743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.934179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.934211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.934242] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.934272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.934305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.934337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.934372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.934411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.934439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.934470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.934501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.934530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.934561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.934593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.934625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.934655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.934688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.934718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.934753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.934782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.934810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.934840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.934870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.934898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.934928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.934960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.934988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.935026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 
[2024-10-11 11:42:41.935057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.935100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.935129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.935157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.935195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.935226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.935251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.935284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.935314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.935343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.935374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.935403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.935439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.935472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.935503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.935531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.935563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.935592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.438 [2024-10-11 11:42:41.935627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.935659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.935690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.935721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.935756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.935785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.935836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.935868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.935898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.935932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.935963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.935995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.936024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.936054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.936091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.936123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.936154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.936472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.936504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.936532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.936563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.936596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.936630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.936664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.936698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.936725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.936761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.936794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.936828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.936859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.936890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.936921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.936951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.936979] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.937012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.937043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.937078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.937109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.937140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.937171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.937201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.937232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.937262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.937292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.937323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.937367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.937396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.937427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.937459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.937488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.937543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.937573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.937603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.937634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.937664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.937696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.937730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.937761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.937791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 
[2024-10-11 11:42:41.937825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.937857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.937886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.937915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.937942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.937973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.938005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.938035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.938070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.938102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.938133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.938162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.938193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.938249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.938278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.938308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.938340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.938369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.938399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.938427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.938466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.938497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.938832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.938870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.938901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.938929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.938956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.938992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.939022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.939057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.939098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.939129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.939162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.939191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.939225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.939260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.939295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.939324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.939354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.939381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.939410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.939445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.939478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.939507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.939537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.939569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.939599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.439 [2024-10-11 11:42:41.939634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.440 [2024-10-11 11:42:41.939663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.440 [2024-10-11 11:42:41.939704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.440 [2024-10-11 11:42:41.939735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.440 [2024-10-11 11:42:41.939765] ctrlr_bdev.c: 
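The message repeated above comes from the NVMe-oF target's read-command validation in ctrlr_bdev.c (nvmf_bdev_ctrlr_read_cmd): the requested transfer length, NLB multiplied by the block size, is checked against the length of the payload buffer described by the command's SGL, and the read is rejected when it does not fit, which is exactly what "Read NLB 1 * block size 512 > SGL length 1" states. The entries above show this rejection path being hit repeatedly during the test. Below is a minimal, hypothetical sketch of that kind of length check, with invented names (read_cmd_fits_sgl and the standalone main), not SPDK's actual source.

#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Hypothetical sketch of the read-length check behind the repeated error:
 * a read of nlb blocks of block_size bytes is rejected when it does not
 * fit in the SGL-described payload buffer. Not SPDK's actual code.
 */
static bool
read_cmd_fits_sgl(uint64_t nlb, uint32_t block_size, uint32_t sgl_length)
{
	if (nlb * block_size > sgl_length) {
		/* Mirrors the wording seen in the log entries above. */
		fprintf(stderr,
			"Read NLB %" PRIu64 " * block size %" PRIu32
			" > SGL length %" PRIu32 "\n",
			nlb, block_size, sgl_length);
		return false;
	}
	return true;
}

int
main(void)
{
	/* The combination seen throughout this log: NLB 1, block size 512, SGL length 1. */
	bool ok = read_cmd_fits_sgl(1, 512, 1);

	printf("command %s\n", ok ? "accepted" : "rejected");
	return 0;
}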
00:05:39.440 - 00:05:39.441 [2024-10-11 11:42:41.939808 .. 11:42:41.947595] the same ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 message continues to be logged for every entry in this interval, differing only in timestamp
00:05:39.441 [2024-10-11 11:42:41.947629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:05:39.441 [2024-10-11 11:42:41.947661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.441 [2024-10-11 11:42:41.947692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.947732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.947764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.947795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.947838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.947871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.947899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.947927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.947955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.947991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.948019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.948047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.948081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.948112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.948150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.948180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.948211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.948240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.948431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.948466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.948496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.948527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.948557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.948590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.948622] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.948652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.948684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.948715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.948753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.948787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.948816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.948853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.948885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.948923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.948954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.948983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.949022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.949084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.949115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.949143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.949176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.949207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.949237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.949269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.949298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.949328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.949363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.949395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.949429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.949461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 
[2024-10-11 11:42:41.949489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.949545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.949574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.949607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.949640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.949668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.949701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.949731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.949769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.949800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.949833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.949872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.949900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.949931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.949963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.949991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.950026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.950053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.950084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.950114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.950148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.950178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.950214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.950241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.950273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.950309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.950340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.950370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.950406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.950439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.950470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.951220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:05:39.442 [2024-10-11 11:42:41.951255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.951284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.951317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.951347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.951379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.951418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.951450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.951480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.951510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.951541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.951571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.951605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.951637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.442 [2024-10-11 11:42:41.951671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.951703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.951735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.951765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.951796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.951830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:05:39.443 [2024-10-11 11:42:41.951855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.951888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.951918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.951947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.951977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.952011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.952044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.952076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.952111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.952142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.952176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.952208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.952239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.952279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.952306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.952340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.952370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.952401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.952432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.952462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.952494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.952525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.952555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.952587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.952618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.952651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.952682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.952713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.952742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.952772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.952801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.952839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.952869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.952919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.952948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.952978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.953007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.953039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.953073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.953104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.953133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.953190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.953219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.953249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.953389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.953419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.953448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.953507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.953538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.953568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.953601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.953635] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.953666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.953695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.953727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.953769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.953803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.953835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.953870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.953903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.953940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.954281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.954316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.954347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.954377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.954411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.954444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.954475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.954509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.954542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.954572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.954610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.954637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.954671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.954700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.954730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.954769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 
[2024-10-11 11:42:41.954802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.954831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.954865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.954897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.954929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.954961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.954991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.955027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.955057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.955100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.955133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.955165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.955196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.955247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.955279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.955311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.955342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.955372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.955403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.443 [2024-10-11 11:42:41.955437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.955468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.955499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.955538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.955568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.955598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.955628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.955658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.955689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.955720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.955751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.955789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.955821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.955852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.955899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.955931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.955965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.956002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.956035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.956069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.956104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.956137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.956171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.956202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.956232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.956262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.956296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.956326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.956366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.956489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.956523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.956555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.956590] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.956623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.956653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.956686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.956721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.956754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.956783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.956814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.956844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.956874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.956907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.956936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.956966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.957004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.957036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.957072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.957108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.957139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.957164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.957192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.957227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.957259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.957290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.957318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.957351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.957380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 
[2024-10-11 11:42:41.957413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.957445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.957477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.957512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.957538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.957572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.957606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.957633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.957659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.957685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.957711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.957746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.957780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.957806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.957846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.957878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.957909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.957947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.957979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.958011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.958041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.958075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.958105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.958135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.958165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.958197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.958229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.958260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.958314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.958345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.958377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.958413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.958443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.958476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.958835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.958868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.958899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.958926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.958963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.958992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.959020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.959058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.959099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.959126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.959161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.959195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.959226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.444 [2024-10-11 11:42:41.959254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.445 [2024-10-11 11:42:41.959287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.445 [2024-10-11 11:42:41.959321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.445 [2024-10-11 11:42:41.959351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.445 [2024-10-11 11:42:41.959386] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.445 [2024-10-11 11:42:41.959421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.445 [2024-10-11 11:42:41.959451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.445 [2024-10-11 11:42:41.959482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.445 [2024-10-11 11:42:41.959513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.445 [2024-10-11 11:42:41.959548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.445 [2024-10-11 11:42:41.959581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.445 [2024-10-11 11:42:41.959610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.445 [2024-10-11 11:42:41.959642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.445 [2024-10-11 11:42:41.959673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.445 [2024-10-11 11:42:41.959706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.445 [2024-10-11 11:42:41.959739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.445 [2024-10-11 11:42:41.959783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.445 [2024-10-11 11:42:41.959814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.445 [2024-10-11 11:42:41.959847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.445 [2024-10-11 11:42:41.959875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.445 [2024-10-11 11:42:41.959907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.445 [2024-10-11 11:42:41.959939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.445 [2024-10-11 11:42:41.959973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.445 [2024-10-11 11:42:41.960005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.445 [2024-10-11 11:42:41.960036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.445 [2024-10-11 11:42:41.960070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.445 [2024-10-11 11:42:41.960105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.445 [2024-10-11 11:42:41.960136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.445 [2024-10-11 11:42:41.960187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.445 [2024-10-11 11:42:41.960222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.445 
00:05:39.445 [2024-10-11 11:42:41.960254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:05:39.450 (last message repeated several hundred times, identical except for its microsecond timestamp, between 11:42:41.960254 and 11:42:41.980402)
[2024-10-11 11:42:41.980434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.450 [2024-10-11 11:42:41.980468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.450 [2024-10-11 11:42:41.980500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.450 [2024-10-11 11:42:41.980532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.450 [2024-10-11 11:42:41.980562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.450 [2024-10-11 11:42:41.980594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.450 [2024-10-11 11:42:41.980626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.450 [2024-10-11 11:42:41.980765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.450 [2024-10-11 11:42:41.980795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.450 [2024-10-11 11:42:41.980825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.450 [2024-10-11 11:42:41.980857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.450 [2024-10-11 11:42:41.980888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.450 [2024-10-11 11:42:41.980920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.450 [2024-10-11 11:42:41.980958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.450 [2024-10-11 11:42:41.980989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.450 [2024-10-11 11:42:41.981020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.450 [2024-10-11 11:42:41.981052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.450 [2024-10-11 11:42:41.981090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.450 [2024-10-11 11:42:41.981123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.450 [2024-10-11 11:42:41.981152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.450 [2024-10-11 11:42:41.981184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.450 [2024-10-11 11:42:41.981215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.450 [2024-10-11 11:42:41.981246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.450 [2024-10-11 11:42:41.981275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.450 [2024-10-11 11:42:41.981565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.450 [2024-10-11 11:42:41.981597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.450 [2024-10-11 11:42:41.981630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.450 [2024-10-11 11:42:41.981660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.450 [2024-10-11 11:42:41.981692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.450 [2024-10-11 11:42:41.981728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.450 [2024-10-11 11:42:41.981758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.450 [2024-10-11 11:42:41.981796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.450 [2024-10-11 11:42:41.981830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.450 [2024-10-11 11:42:41.981861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.450 [2024-10-11 11:42:41.981898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.450 [2024-10-11 11:42:41.981928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.450 [2024-10-11 11:42:41.981959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.450 [2024-10-11 11:42:41.981995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.450 [2024-10-11 11:42:41.982024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.450 [2024-10-11 11:42:41.982056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.450 [2024-10-11 11:42:41.982093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.450 [2024-10-11 11:42:41.982122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.450 [2024-10-11 11:42:41.982156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.450 [2024-10-11 11:42:41.982185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.450 [2024-10-11 11:42:41.982213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.982250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.982282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.982315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.982351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.982379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.982411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.982441] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.982470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.982500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.982535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.982569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.982599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.982639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.982668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.982699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.982729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.982758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.982791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.982825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.982854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.982889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.982919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.982948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.982983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.983011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.983199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.983237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.983267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.983299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.983342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.983375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.983410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 
[2024-10-11 11:42:41.983442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.983472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.983502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.983534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.983564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.983598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.983629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.983657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.983694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.983721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.983752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.983784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.983817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.983852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.983884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.983915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.983954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.983983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.984013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.984045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.984086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.984118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.984151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.984179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.984208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.984255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.984287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.984316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.984350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.984383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.984413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.984448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.984480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.984510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.984541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.984566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.984600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.984636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.984667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.984699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.984729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.984764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.984794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.984822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.984848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.984883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.984923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.984954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.984983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.985012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.985051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.985087] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.985114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.985147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.985181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.985215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.985249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.985396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.985426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.985457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.985487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.985517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.985548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.985588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.985618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.985647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.985680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.985710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.985742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.985777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.985807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.985840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.985879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.451 [2024-10-11 11:42:41.985912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.986352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.986387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.986418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 
[2024-10-11 11:42:41.986475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.986506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.986537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.986575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.986607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.986637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.986669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.986701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.986733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.986766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.986798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.986829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.986866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.986895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.986926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.986960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.986992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.987023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.987053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.987090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.987121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.987150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.987185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.987217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.987249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.987289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.987318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.987349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.987385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.987413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.987443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.987475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.987511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.987540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.987576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.987607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.987640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.987675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.987706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.987739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.987774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.987803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.987835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.987865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.987894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.987925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.987956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.987988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.988018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.988055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.988106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.988137] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.988169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.988201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.988232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.988263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.988299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.988332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.988362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.988393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.988425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.988553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.988588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.988615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.988650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.988684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.988715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.988751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.988782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.988814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.988856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.988886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.988918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.988948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.988979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.989011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:05:39.452 [2024-10-11 11:42:41.989048] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.989083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.989111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.989140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.989173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.989204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.989237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.989269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.989297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.989329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.989358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.989390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.989424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.989454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.989486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.989522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.989556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.989584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.989618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.989643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.989669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.989694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.989720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.989744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.989771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.452 [2024-10-11 11:42:41.989795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 
[2024-10-11 11:42:41.989822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.989848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.989872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.989898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.989924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.990576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.990613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.990646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.990679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.990732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.990764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.990796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.990828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.990860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.990890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.990921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.990951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.990982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.991015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.991046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.991081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.991118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.991149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.991181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.991213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.991244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.991274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.991306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.991337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.991370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.991399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.991427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.991461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.991490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.991522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.991555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.991587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.991619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.991667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.991696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.991725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.991754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.991787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.991816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.991847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.991878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.991907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.991937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.991976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.992004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.992034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.992065] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.992100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.992140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.992172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.992200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.992236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.992266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.992296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.992329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.992359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.992389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.992414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.992443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.992474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.992507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.992537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.992566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.992598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.992733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.992784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.992816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.992847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.992877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.992909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.992943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 [2024-10-11 11:42:41.992979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.453 
[2024-10-11 11:42:41.993014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:05:39.453 [... identical ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd "Read NLB 1 * block size 512 > SGL length 1" records between 11:42:41.993 and 11:42:42.005 elided ...]
00:05:39.457 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:39.457 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:39.457 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:39.745 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:39.746 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:39.746 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:39.746 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:39.746 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
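(Note on the records above: the ns_hotplug_stress target script attaches and detaches a namespace on nqn.2016-06.io.spdk:cnode1 while the initiator keeps issuing reads, so bursts of rejected reads like the ones suppressed here are the failure mode being exercised. A minimal sketch of that hotplug cycle using the same rpc.py helpers is shown below; the NSID, loop count and sleep interval are illustrative assumptions, not values taken from this run.)

    #!/usr/bin/env bash
    # Hedged sketch of a namespace hotplug loop built on SPDK's rpc.py,
    # mirroring the nvmf_subsystem_add_ns call logged above. NSID 1, the
    # iteration count and the sleep are assumptions for illustration only.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    for i in $(seq 1 10); do
        # Attach the Delay0 bdev as a namespace; reads racing the attach or
        # detach complete with errors like the suppressed ones above.
        "$RPC" nvmf_subsystem_add_ns "$NQN" Delay0
        sleep 0.1
        # Detach it again by NSID (assumed here to be 1).
        "$RPC" nvmf_subsystem_remove_ns "$NQN" 1
    done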
00:05:39.746 [2024-10-11 11:42:42.201822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:05:39.746 [... identical ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd "Read NLB 1 * block size 512 > SGL length 1" records between 11:42:42.201 and 11:42:42.208 elided ...]
00:05:39.747 [2024-10-11 11:42:42.208171] ctrlr_bdev.c:
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.747 [2024-10-11 11:42:42.208208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.747 [2024-10-11 11:42:42.208546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.747 [2024-10-11 11:42:42.208582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.747 [2024-10-11 11:42:42.208608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.747 [2024-10-11 11:42:42.208642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.747 [2024-10-11 11:42:42.208669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.747 [2024-10-11 11:42:42.208698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.747 [2024-10-11 11:42:42.208723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.747 [2024-10-11 11:42:42.208753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.747 [2024-10-11 11:42:42.208786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.747 [2024-10-11 11:42:42.208816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.747 [2024-10-11 11:42:42.208845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.747 [2024-10-11 11:42:42.208879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.747 [2024-10-11 11:42:42.208908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.747 [2024-10-11 11:42:42.208936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.208965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.208995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.209023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.209051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.209083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.209113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.209140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.209193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.209223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.209258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 
[2024-10-11 11:42:42.209286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.209329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.209359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.209398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.209429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.209457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.209485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.209515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.209543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.209570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.209601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.209630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.209661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.209693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.209726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.209754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.209792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.209823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.209853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.209881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.209913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.209942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.209971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.210000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.210035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.210068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.210100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.210126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.210156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.210187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.210220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.210248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.210278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.210317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.210354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.210392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.210426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.210458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.210485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.210514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.210641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.210679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.210705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.210734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.210763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.210790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.210820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.210853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.210884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.210916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.210945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.210975] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.211006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.211034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.211069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.211101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.211589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.211623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.211653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.211682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.211716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.211756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.211792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.211830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.211861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.211891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.211920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.211950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.211978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.212015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.212045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.212078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.212119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.212153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.212183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.212211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.212239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 
[2024-10-11 11:42:42.212268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.212299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.212330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.212367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.212398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.212436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.212463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.212490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.212520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.212551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.212580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.212605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.212635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.212664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.748 [2024-10-11 11:42:42.212692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.212724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.212752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.212782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.212810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.212844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.212876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.212907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.212936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.212972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.213004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.213034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.213066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.213097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.213128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.213158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.213189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.213218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.213244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.213274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.213308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.213344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.213367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.213399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.213430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.213460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.213490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.213520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.213550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.213677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.213717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.213755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.213781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.213812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.213845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.213883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.213909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.213939] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.213967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.213997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.214026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.214056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.214088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.214120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.214151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.214179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.214212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.214241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.214271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.214304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.214330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.214357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.214385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.214417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.214448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.214481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.214510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.214540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.214571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.214599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.214630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.214654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.214685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 
[2024-10-11 11:42:42.214719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.214747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.214778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.214809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.214837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.214867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.214894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.214923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.214965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.214992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.215042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.215076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.215108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.215429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.215473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:05:39.749 [2024-10-11 11:42:42.215505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.215537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.215569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.215597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.215631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.215669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.215702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.215732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.215763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.215792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 
11:42:42.215820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.215849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.215877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.215904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.215928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.215956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.215983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.216013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.216042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.216077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.216110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.216142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.216173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.216203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.216232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.216264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.749 [2024-10-11 11:42:42.216294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.216326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.216356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.216386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.216416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.216446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.216476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.216504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.216533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.216562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:05:39.750 [2024-10-11 11:42:42.216590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.216619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.216648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.216678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.216704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.216731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.216764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.216793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.216823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.216852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.216881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.216912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.216941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.216972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.217006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.217035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.217070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.217099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.217127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.217160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.217189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.217216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.217243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.217275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.217304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.217333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.217468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.217502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.217530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.217559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.217588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.217619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.217650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.217679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.217709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.217737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.217765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.217794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.217824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.217854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.217881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.217910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.218403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.218429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.218459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.218489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.218517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.218549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.218585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.218616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.218643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.218676] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.218702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.218730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.218759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.218787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.218813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.218841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.218880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.218909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.218938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.218965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.218993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.219023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.219056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.219089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.219116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.219147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.219175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.219206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.219239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.219268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.219296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.219323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.219352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.219382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.219411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 
[2024-10-11 11:42:42.219441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.219471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.219500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.219530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.750 [2024-10-11 11:42:42.219558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.751 [2024-10-11 11:42:42.219591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.751 [2024-10-11 11:42:42.219619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.751 [2024-10-11 11:42:42.219647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.751 [2024-10-11 11:42:42.219674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.751 [2024-10-11 11:42:42.219706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.751 [2024-10-11 11:42:42.219735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.751 [2024-10-11 11:42:42.219763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.751 [2024-10-11 11:42:42.219790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.751 [2024-10-11 11:42:42.219819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.751 [2024-10-11 11:42:42.219848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.751 [2024-10-11 11:42:42.219874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.751 [2024-10-11 11:42:42.219906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.751 [2024-10-11 11:42:42.219934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.751 [2024-10-11 11:42:42.219986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.751 [2024-10-11 11:42:42.220017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.751 [2024-10-11 11:42:42.220046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.751 [2024-10-11 11:42:42.220085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.751 [2024-10-11 11:42:42.220115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.751 [2024-10-11 11:42:42.220143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.751 [2024-10-11 11:42:42.220169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.751 [2024-10-11 11:42:42.220201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
00:05:39.751 - 00:05:39.754 [2024-10-11 11:42:42.220229 - 11:42:42.231387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (message repeated for every read attempt in this interval; duplicate entries condensed)
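The flood of identical *ERROR* lines above is the NVMe-oF target rejecting each 1-block read: the command needs NLB * block size = 1 * 512 = 512 bytes of payload, but the SGL supplied by the transport describes only 1 byte, so nvmf_bdev_ctrlr_read_cmd refuses the I/O and logs the mismatch. A minimal Python sketch of that arithmetic check, for illustration only (the real check is C code in ctrlr_bdev.c; the identifiers below are hypothetical):

    # Illustrative restatement of the length check behind the repeated log line.
    # Not SPDK source code; function and argument names are hypothetical.
    def read_cmd_length_ok(nlb: int, block_size: int, sgl_length: int) -> bool:
        required = nlb * block_size              # bytes the READ would transfer
        if required > sgl_length:                # e.g. 1 * 512 > 1 -> rejected
            print(f"Read NLB {nlb} * block size {block_size} > SGL length {sgl_length}")
            return False
        return True

    assert read_cmd_length_ok(1, 512, 512) is True
    assert read_cmd_length_ok(1, 512, 1) is False    # the case seen in this log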
00:05:39.754 [2024-10-11 11:42:42.231415 - 11:42:42.232155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (message repeated; duplicate entries condensed)
00:05:39.754 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010
00:05:39.754 [2024-10-11 11:42:42.232183 - 11:42:42.232551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (message repeated; duplicate entries condensed)
00:05:39.754 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010
00:05:39.754 [2024-10-11 11:42:42.232588 - 11:42:42.232893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (message repeated; duplicate entries condensed)
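The two shell-trace lines just above show the hot-plug stress test bumping null_size to 1010 and resizing the null bdev through the RPC interface while those reads keep failing. A small Python sketch of driving the same RPC by shelling out to scripts/rpc.py (only the rpc.py path, the method name bdev_null_resize, the bdev name NULL1, and the value 1010 come from the log; the loop and starting size are assumptions for illustration):

    # Resize the null bdev the same way the trace above does, via scripts/rpc.py.
    # The loop and starting size are hypothetical; the command mirrors the log line.
    import subprocess

    RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"  # path from the log

    def resize_null_bdev(name: str, new_size: int) -> None:
        # Equivalent to: rpc.py bdev_null_resize NULL1 1010
        subprocess.run([RPC, "bdev_null_resize", name, str(new_size)], check=True)

    null_size = 1000                     # assumed starting point
    for _ in range(10):                  # assumed number of resize steps
        null_size += 1
        resize_null_bdev("NULL1", null_size)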
00:05:39.754 - 00:05:39.756 [2024-10-11 11:42:42.232949 - 11:42:42.239561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (message repeated for every read attempt in this interval; duplicate entries condensed)
00:05:39.756 [2024-10-11 11:42:42.239606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:05:39.756 [2024-10-11 11:42:42.239634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.756 [2024-10-11 11:42:42.239666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.756 [2024-10-11 11:42:42.239698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.756 [2024-10-11 11:42:42.239727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.756 [2024-10-11 11:42:42.239756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.756 [2024-10-11 11:42:42.239782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.756 [2024-10-11 11:42:42.239813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.756 [2024-10-11 11:42:42.239845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.756 [2024-10-11 11:42:42.239880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.756 [2024-10-11 11:42:42.239910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.756 [2024-10-11 11:42:42.239940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.756 [2024-10-11 11:42:42.239971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.756 [2024-10-11 11:42:42.240000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.756 [2024-10-11 11:42:42.240037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.756 [2024-10-11 11:42:42.240072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.756 [2024-10-11 11:42:42.240108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.756 [2024-10-11 11:42:42.240138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.756 [2024-10-11 11:42:42.240172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.756 [2024-10-11 11:42:42.240202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.756 [2024-10-11 11:42:42.240234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.756 [2024-10-11 11:42:42.240264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.756 [2024-10-11 11:42:42.240814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.756 [2024-10-11 11:42:42.240846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.756 [2024-10-11 11:42:42.240873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.756 [2024-10-11 11:42:42.240904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.756 [2024-10-11 11:42:42.240933] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.756 [2024-10-11 11:42:42.240967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.756 [2024-10-11 11:42:42.240996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.756 [2024-10-11 11:42:42.241028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.756 [2024-10-11 11:42:42.241059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.756 [2024-10-11 11:42:42.241093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.756 [2024-10-11 11:42:42.241125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.756 [2024-10-11 11:42:42.241154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.756 [2024-10-11 11:42:42.241185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.756 [2024-10-11 11:42:42.241220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.756 [2024-10-11 11:42:42.241250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.756 [2024-10-11 11:42:42.241280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.756 [2024-10-11 11:42:42.241309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.756 [2024-10-11 11:42:42.241335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.756 [2024-10-11 11:42:42.241359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.756 [2024-10-11 11:42:42.241392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.756 [2024-10-11 11:42:42.241425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.756 [2024-10-11 11:42:42.241453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.756 [2024-10-11 11:42:42.241482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.756 [2024-10-11 11:42:42.241510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.756 [2024-10-11 11:42:42.241538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.756 [2024-10-11 11:42:42.241567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.756 [2024-10-11 11:42:42.241595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.756 [2024-10-11 11:42:42.241622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.756 [2024-10-11 11:42:42.241654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.756 [2024-10-11 11:42:42.241684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.756 
[2024-10-11 11:42:42.241741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.756 [2024-10-11 11:42:42.241773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.756 [2024-10-11 11:42:42.241805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.756 [2024-10-11 11:42:42.241842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.241874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.241905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.241936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.241967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.242021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.242051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.242088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.242120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.242150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.242193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.242224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.242253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.242282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.242311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.242355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.242386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.242417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.242453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.242486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.242518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.242550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.242579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.242608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.242640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.242669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.242707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.242740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.242770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.242804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.242834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.242975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.243005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.243032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.243061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.243091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.243125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.243159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.243190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.243220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.243251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.243284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.243317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.243348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.243375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.243410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.243445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.243473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.243504] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.243539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.243570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.243601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.243632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.243666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.243696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.243727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.243756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.243787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.243817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.243850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.243882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.243911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.243943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.243974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.244007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.244038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.244074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.244109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.244145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.244173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.244202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.244239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.244270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.244301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 
[2024-10-11 11:42:42.244332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.244363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.244396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.244424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.244458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.244489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.244522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.244552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.244583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.244614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.244643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.244676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.244706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.244736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.244765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.244795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.244830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.244860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.244903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.244936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.245312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.245350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.245381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.245413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.245443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.245477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.245508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.245539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.245569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.245603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.245635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.757 [2024-10-11 11:42:42.245666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.245695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.245726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.245759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.245789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.245818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.245850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.245882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.245914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.245945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.245972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.246005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.246034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.246067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.246098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.246131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.246163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.246194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.246225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.246256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.246284] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.246314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.246347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.246382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.246412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.246441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.246472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.246501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.246536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.246567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.246596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.246626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.246658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.246699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.246730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.246758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.246791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.246819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.246853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.246880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.246911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.246943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.246973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.247005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.247035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.247071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 
[2024-10-11 11:42:42.247108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.247142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.247177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.247210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.247240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.247273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.247304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.247852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.247883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.247915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.247945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.247977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.248011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.248042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.248081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.248112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.248137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.248173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.248208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.248245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.248273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.248300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.248330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.248358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.248390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.248422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.248452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.248486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.248516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.248549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.248580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.248610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.248643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.248676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.248703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.248729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.248760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.248793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.248825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.248853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.248883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.248912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.248942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.248976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.249006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.249037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.249075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.249104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.249136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.758 [2024-10-11 11:42:42.249167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.249196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.249225] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.249256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.249287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.249320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.249351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.249382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.249415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.249443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.249473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.249501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.249530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.249561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.249598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.249626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.249658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.249690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.249720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.249750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.249784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.249816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.249972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.250007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.250038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.250073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.250107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.250138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 
[2024-10-11 11:42:42.250173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.250206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.250237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.250266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.250295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.250326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.250356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.250389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.250419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.250451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.250744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.250777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.250833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.250864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.250894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.250933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.250962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.250995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.251032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.251061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.251095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.251143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.251174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.251206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.251237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.251266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.251300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.251330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.251360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.251398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.251426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.251457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.251487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.251521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.251549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.251582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.251611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.251648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.251676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.251702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.251732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.251762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.251790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.251819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.251849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.251880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.251909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.251936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.251970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.252001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.252030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.759 [2024-10-11 11:42:42.252058] ctrlr_bdev.c: 
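The repeated *ERROR* above is the NVMe-oF target's read-command length validation firing: each test command asks for NLB 1 at a 512-byte block size while the SGL it carries only describes a length of 1, so the condition NLB * block size > SGL length holds and the read completes with an error. The standalone C sketch below illustrates only that check; the names and structure are hypothetical and it is not SPDK's actual nvmf_bdev_ctrlr_read_cmd implementation.

#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical sketch of the length validation that produces the error
 * message seen in this log; not SPDK source code. */
static bool
read_cmd_length_ok(uint64_t num_blocks, uint32_t block_size, uint32_t sgl_length)
{
        /* Reject the read when the requested transfer (NLB * block size)
         * exceeds the SGL length supplied with the command. */
        if (num_blocks * block_size > sgl_length) {
                fprintf(stderr,
                        "Read NLB %" PRIu64 " * block size %" PRIu32
                        " > SGL length %" PRIu32 "\n",
                        num_blocks, block_size, sgl_length);
                return false;
        }
        return true;
}

int
main(void)
{
        /* The failing case repeated in this log: 1 block of 512 bytes
         * against an SGL length of 1, so the command is rejected. */
        return read_cmd_length_ok(1, 512, 1) ? 0 : 1;
}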
[further identical *ERROR* entries from ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd elided through 11:42:42.256045]
00:05:39.759 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:05:39.760 [2024-10-11 11:42:42.256086] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.760 [2024-10-11 11:42:42.256125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.760 [2024-10-11 11:42:42.256156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.760 [2024-10-11 11:42:42.256187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.760 [2024-10-11 11:42:42.256215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.760 [2024-10-11 11:42:42.256243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.760 [2024-10-11 11:42:42.256275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.760 [2024-10-11 11:42:42.256305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.760 [2024-10-11 11:42:42.256337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.760 [2024-10-11 11:42:42.256368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.760 [2024-10-11 11:42:42.256399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.760 [2024-10-11 11:42:42.256432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.760 [2024-10-11 11:42:42.256465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.760 [2024-10-11 11:42:42.256496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.760 [2024-10-11 11:42:42.256524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.760 [2024-10-11 11:42:42.256551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.760 [2024-10-11 11:42:42.256585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.760 [2024-10-11 11:42:42.256618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.760 [2024-10-11 11:42:42.256653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.760 [2024-10-11 11:42:42.256682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.760 [2024-10-11 11:42:42.256710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.760 [2024-10-11 11:42:42.256741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.760 [2024-10-11 11:42:42.256773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.760 [2024-10-11 11:42:42.256805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.760 [2024-10-11 11:42:42.256837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.760 [2024-10-11 11:42:42.256868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.760 
[2024-10-11 11:42:42.256922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.760 [2024-10-11 11:42:42.256953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.760 [2024-10-11 11:42:42.256985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.257021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.257050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.257086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.257125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.257157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.257189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.257219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.257252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.257281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.257310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.257338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.257370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.257401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.257432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.257467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.257500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.257528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.257561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.257592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.257729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.257763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.257791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.257821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.257852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.257883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.257914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.257945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.257977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.258018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.258048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.258081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.258116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.258149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.258178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.258208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.258239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.258270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.258303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.258333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.258369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.258400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.258438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.258468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.258500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.258530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.258563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.258608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.258638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.258669] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.258701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.258730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.258770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.258802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.258838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.258870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.258899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.258936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.258966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.259000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.259033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.259066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.259103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.259132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.259164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.259194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.259225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.259686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.259719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.259753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.259783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.259816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.259845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.259873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.259909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 
[2024-10-11 11:42:42.259940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.259970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.259999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.260033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.260068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.260104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.260133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.260166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.260196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.260227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.260259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.260289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.260319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.260350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.260379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.260409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.260440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.260469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.260499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.260537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.260570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.260601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.260631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.260661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.260693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.260724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.260762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.761 [2024-10-11 11:42:42.260794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.260827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.260857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.260888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.260920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.260949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.260982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.261014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.261043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.261076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.261108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.261140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.261171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.261202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.261232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.261265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.261296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.261326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.261358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.261389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.261422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.261452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.261489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.261519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.261548] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.261580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.261611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.261643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.261673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.261797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.261827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.261856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.261887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.261921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.261954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.261981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.262011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.262045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.262080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.262108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.262137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.262171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.262205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.262234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.262262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.262510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.262544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.262572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.262601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.262631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 
[2024-10-11 11:42:42.262663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.262694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.262728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.262758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.262788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.262818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.262847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.262883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.262915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.262942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.262969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.262999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.263032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.263067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.263094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.263121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.263152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.263183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.263211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.263241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.263269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.263312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.263342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.263373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.263408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.263437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.263482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.263514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.263542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.263572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.263605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.263635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.263665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.263694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.263732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.263764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.263792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.263819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.263856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.263885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.263913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.762 [2024-10-11 11:42:42.263942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.264583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.264618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.264648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.264680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.264706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.264741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.264775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.264805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.264859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.264891] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.264919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.264951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.264985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.265017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.265046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.265079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.265111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.265144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.265174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.265205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.265235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.265265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.265297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.265329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.265360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.265389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.265425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.265455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.265485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.265527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.265558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.265588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.265619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.265650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.265680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 
[2024-10-11 11:42:42.265710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.265739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.265773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.265806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.265837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.265873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.265902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.265937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.265966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.265997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.266028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.266058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.266096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.266127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.266161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.266193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.266222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.266251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.266279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.266310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.266340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.266370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.266401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.266432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.266486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.266517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.266545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.266576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.266605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.266732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.266761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.266792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.266823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.266853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.266887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.266914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.266946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.266977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.267012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.267042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.267074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.267103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.267138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.267168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.267197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.267617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.267645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.267675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.267707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.267737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.267770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.267803] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.267839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.267871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.267898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.267928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.267962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.267994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.268024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.268056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.268094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.268121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.268153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.268185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.268214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.268246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.268276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.763 [2024-10-11 11:42:42.268304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.764 [2024-10-11 11:42:42.268329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.764 [2024-10-11 11:42:42.268355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.764 [2024-10-11 11:42:42.268381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.764 [2024-10-11 11:42:42.268405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.764 [2024-10-11 11:42:42.268430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.764 [2024-10-11 11:42:42.268462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.764 [2024-10-11 11:42:42.268494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.764 [2024-10-11 11:42:42.268524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.764 [2024-10-11 11:42:42.268556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.764 
[2024-10-11 11:42:42.268588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same ctrlr_bdev.c:361 nvmf_bdev_ctrlr_read_cmd *ERROR* line ("Read NLB 1 * block size 512 > SGL length 1") repeats continuously, wall-clock timestamps 2024-10-11 11:42:42.268618 through 11:42:42.288522, elapsed stamps 00:05:39.764-00:05:39.769 ...]
[2024-10-11 11:42:42.288553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.769
[2024-10-11 11:42:42.288593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.769 [2024-10-11 11:42:42.288941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.769 [2024-10-11 11:42:42.288973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.769 [2024-10-11 11:42:42.289001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.769 [2024-10-11 11:42:42.289030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.769 [2024-10-11 11:42:42.289067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.769 [2024-10-11 11:42:42.289093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.769 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:05:39.769 [2024-10-11 11:42:42.289125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.769 [2024-10-11 11:42:42.289154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.769 [2024-10-11 11:42:42.289186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.769 [2024-10-11 11:42:42.289215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.769 [2024-10-11 11:42:42.289246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.769 [2024-10-11 11:42:42.289280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.769 [2024-10-11 11:42:42.289310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.769 [2024-10-11 11:42:42.289341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.769 [2024-10-11 11:42:42.289373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.769 [2024-10-11 11:42:42.289405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.769 [2024-10-11 11:42:42.289439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.769 [2024-10-11 11:42:42.289470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.769 [2024-10-11 11:42:42.289499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.769 [2024-10-11 11:42:42.289527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.769 [2024-10-11 11:42:42.289557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.769 [2024-10-11 11:42:42.289587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.769 [2024-10-11 11:42:42.289617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.769 [2024-10-11 11:42:42.289649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.769 [2024-10-11 
11:42:42.289678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.769 [2024-10-11 11:42:42.289708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.769 [2024-10-11 11:42:42.289743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.769 [2024-10-11 11:42:42.289773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.769 [2024-10-11 11:42:42.289809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.769 [2024-10-11 11:42:42.289840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.769 [2024-10-11 11:42:42.289872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.769 [2024-10-11 11:42:42.289905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.769 [2024-10-11 11:42:42.289937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.769 [2024-10-11 11:42:42.289966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.769 [2024-10-11 11:42:42.289995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.769 [2024-10-11 11:42:42.290027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.769 [2024-10-11 11:42:42.290058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.769 [2024-10-11 11:42:42.290088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.769 [2024-10-11 11:42:42.290118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.769 [2024-10-11 11:42:42.290145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.769 [2024-10-11 11:42:42.290175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.769 [2024-10-11 11:42:42.290208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.769 [2024-10-11 11:42:42.290237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.769 [2024-10-11 11:42:42.290270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.769 [2024-10-11 11:42:42.290297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.769 [2024-10-11 11:42:42.290322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.769 [2024-10-11 11:42:42.290346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.769 [2024-10-11 11:42:42.290375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.769 [2024-10-11 11:42:42.290404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.769 [2024-10-11 11:42:42.290433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:05:39.769 [2024-10-11 11:42:42.290463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.769 [2024-10-11 11:42:42.290491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.769 [2024-10-11 11:42:42.290523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.769 [2024-10-11 11:42:42.290554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.769 [2024-10-11 11:42:42.290584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.769 [2024-10-11 11:42:42.290614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.769 [2024-10-11 11:42:42.290644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.290677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.290738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.290768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.290802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.290832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.290860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.291210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.291238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.291268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.291300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.291330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.291364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.291396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.291427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.291459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.291490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.291521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.291557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.291586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.291617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.291649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.291679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.291712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.291744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.291774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.291810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.291842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.291873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.291905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.291934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.291968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.291998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.292039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.292078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.292115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.292146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.292177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.292207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.292237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.292283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.292314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.292345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.292376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.292406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.292457] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.292486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.292517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.292555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.292583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.292619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.292649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.292679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.292707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.292737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.292766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.292797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.292829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.292859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.292890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.292923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.292955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.292989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.293021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.293052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.293115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.293145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.293174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.293212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.293244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.293275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 
[2024-10-11 11:42:42.293620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.293653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.293680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.293709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.293738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.293780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.293814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.293842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.293868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.293900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.293928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.293956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.293995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.294025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.294056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.294086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.294121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.294155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.294184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.294221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.294251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.294281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.294311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.294339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.294366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.294396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.294428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.294458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.294487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.294517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.294550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.294584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.294615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.294645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.770 [2024-10-11 11:42:42.294678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.294708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.294741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.294774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.294805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.294834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.294874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.294906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.294935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.294965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.294996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.295028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.295061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.295096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.295125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.295155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.295185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.295212] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.295244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.295272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.295305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.295338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.295367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.295403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.295436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.295466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.295497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.295532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.295563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.296284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.296318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.296349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.296382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.296410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.296441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.296471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.296500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.296531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.296559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.296593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.296622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.296662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.296692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 
[2024-10-11 11:42:42.296724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.296757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.296788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.296820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.296850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.296881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.296919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.296947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.296976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.297017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.297051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.297085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.297116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.297144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.297180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.297210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.297242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.297279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.297309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.297343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.297373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.297405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.297435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.297467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.297504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.297534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.297574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.297601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.297632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.297688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.297719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.297748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.297779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.297811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.297846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.297877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.297904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.297941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.297971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.298005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.298036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.298072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.298104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.298133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.298168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.298199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.298228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.298258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.298292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.298321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.298465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.771 [2024-10-11 11:42:42.298495] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.298525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.298555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.298587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.298614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.298649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.298680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.298711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.298741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.298772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.298805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.298838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.298869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.298898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.298928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.298959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.298991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.299023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.299051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.299086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.299116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.299151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.299179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.299208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.299240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.299273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 
[2024-10-11 11:42:42.299304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.299334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.299364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.299398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.299428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.299459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.299488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.299520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.299553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.299586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.299617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.299651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.299680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.299713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.299747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.299774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.299804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.299832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.299863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.299899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.299928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.299960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.299990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.300020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.300053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.300090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.300128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.300159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.300187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.300216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.300246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.300276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.300308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.300338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.300368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.300398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.300998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.301028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.301056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.301097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.301128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.301160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.301189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.301219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.301247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.301283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.301313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.301342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.301371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.301396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.301420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.772 [2024-10-11 11:42:42.301449] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:05:39.772 - 00:05:39.777 [2024-10-11 11:42:42.301510 - 11:42:42.321099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (same error line repeated several hundred times; duplicate log entries collapsed)
00:05:39.777 [2024-10-11 11:42:42.321131] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.777 [2024-10-11 11:42:42.321164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.777 [2024-10-11 11:42:42.321196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.777 [2024-10-11 11:42:42.321229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.321259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.321288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.321323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.321352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.321378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.321407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.321444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.321612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.321644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.321677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.321708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.321736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.321775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.321801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.321830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.321859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.321891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.321920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.321954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.321986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.322015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.322047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 
[2024-10-11 11:42:42.322080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.322112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.322141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.322171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.322199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.322240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.322271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.322299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.322328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.322358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.322393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.322423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.322481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.322511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.322540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.322568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.322599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.322638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.322670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.322703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.322733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.322763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.322794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.322824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.322855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.322885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.322914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.322946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.322979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.323012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.323045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.323079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.323111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.323141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.323173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.323216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.323244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.323273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.323302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.323334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.323388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.323417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.323442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.323471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.323507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.323536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.323569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.323598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.323628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.323750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.323786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.323813] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.323841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.323873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.323905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.323943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.323973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.324004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.324038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.324082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.324696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.324729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.324760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.324791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.324821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.324853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.324884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.324916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.324946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.324982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.325013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.325045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.325084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.325115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.325148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.325180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.325213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 
[2024-10-11 11:42:42.325243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.325274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.325307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.325340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.778 [2024-10-11 11:42:42.325370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.325400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.325426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.325455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.325484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.325516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.325546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.325582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.325611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.325648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.325677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.325708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.325743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.325777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.325806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.325835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.325865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.325902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.325935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.325966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.326020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.326049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.326084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.326115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.326147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.326179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.326212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.326245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.326295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.326324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.326354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.326522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.326553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.326584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.326612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.326653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.326683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.326711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.326741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.326775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:05:39.779 [2024-10-11 11:42:42.326814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.326847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.326878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.326907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.326936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.326969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.327002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:05:39.779 [2024-10-11 11:42:42.327033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.327068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.327105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.327136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.327163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.327194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.327223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.327256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.327287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.327317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.327345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.327372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.327404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.327437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.327468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.327501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.327531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.327562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.327599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.327631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.327660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.327698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.327728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.327771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.327803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.327834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.327867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.327897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.327937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.327968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.327999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.328031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.328064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.328099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.328132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.328162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.328197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.328228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.328266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.328295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.328325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.328365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.328398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.328429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.328466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.328494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.328532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.779 [2024-10-11 11:42:42.328562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.328689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.328721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.328749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.328780] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.328817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.328848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.328878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.328907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.328944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.328974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.329005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.329241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.329274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.329304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.329339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.329366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.329402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.329434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.329465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.329499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.329534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.329566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.329597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.329624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.329659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.329690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.329722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.329752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.329782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 
[2024-10-11 11:42:42.329811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.329842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.329875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.329906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.329935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.329961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.329986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.330012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.330044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.330076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.330106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.330135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.330170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.330203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.330237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.330279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.330312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.330342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.330374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.330404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.330436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.330465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.330496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.330530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.330563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.330594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.330623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.330654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.330689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.330719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.330749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.330778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.330810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.330846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.331212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.331243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.331271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.331302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.331330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.331360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.331394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.331423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.331453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.331484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.331520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.331551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.331584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.331614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.331648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.331683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.331711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.331743] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.331770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.331806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.331837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.331868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.331897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.331927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.331961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.331994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.332026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.332058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.332088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.332118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.332152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.332185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.332216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.332253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.332285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.332318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.332350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.332382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.332411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.332443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.780 [2024-10-11 11:42:42.332474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.781 [2024-10-11 11:42:42.332509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.781 [2024-10-11 11:42:42.332538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.781 
[2024-10-11 11:42:42.332573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.781 [2024-10-11 11:42:42.332604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.781 [2024-10-11 11:42:42.332638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.781 [2024-10-11 11:42:42.332670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.781 [2024-10-11 11:42:42.332701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.781 [2024-10-11 11:42:42.332734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.781 [2024-10-11 11:42:42.332768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.781 [2024-10-11 11:42:42.332799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.781 [2024-10-11 11:42:42.332835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.781 [2024-10-11 11:42:42.332867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.781 [2024-10-11 11:42:42.332896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.781 [2024-10-11 11:42:42.332930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.781 [2024-10-11 11:42:42.332961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.781 [2024-10-11 11:42:42.332989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.781 [2024-10-11 11:42:42.333019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.781 [2024-10-11 11:42:42.333074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.781 [2024-10-11 11:42:42.333109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.781 [2024-10-11 11:42:42.333140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.781 [2024-10-11 11:42:42.333172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.781 [2024-10-11 11:42:42.333211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.781 [2024-10-11 11:42:42.333240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.781 [2024-10-11 11:42:42.333381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.781 [2024-10-11 11:42:42.333416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.781 [2024-10-11 11:42:42.333446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.781 [2024-10-11 11:42:42.333493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.781 [2024-10-11 11:42:42.333524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
00:05:39.781 [2024-10-11 11:42:42.333552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[the preceding ctrlr_bdev.c:361 nvmf_bdev_ctrlr_read_cmd *ERROR* line repeats several hundred more times, timestamps 2024-10-11 11:42:42.333582 through 11:42:42.354002, elapsed 00:05:39.781 to 00:05:39.786]
00:05:39.786 [2024-10-11 11:42:42.354039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:05:39.786 [2024-10-11 11:42:42.354070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.786 [2024-10-11 11:42:42.354102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.786 [2024-10-11 11:42:42.354135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.786 [2024-10-11 11:42:42.354169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.786 [2024-10-11 11:42:42.354200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.786 [2024-10-11 11:42:42.354231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.786 [2024-10-11 11:42:42.354260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.786 [2024-10-11 11:42:42.354297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.786 [2024-10-11 11:42:42.354326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.786 [2024-10-11 11:42:42.354357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.786 [2024-10-11 11:42:42.354388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.786 [2024-10-11 11:42:42.354430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.786 [2024-10-11 11:42:42.354460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.786 [2024-10-11 11:42:42.354485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.786 [2024-10-11 11:42:42.354517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.786 [2024-10-11 11:42:42.354550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.786 [2024-10-11 11:42:42.354589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.786 [2024-10-11 11:42:42.354620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.786 [2024-10-11 11:42:42.354649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.786 [2024-10-11 11:42:42.354678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.786 [2024-10-11 11:42:42.354713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.786 [2024-10-11 11:42:42.354742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.786 [2024-10-11 11:42:42.354770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.786 [2024-10-11 11:42:42.354800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.786 [2024-10-11 11:42:42.354825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.786 [2024-10-11 11:42:42.354858] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.786 [2024-10-11 11:42:42.354889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.786 [2024-10-11 11:42:42.354922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.786 [2024-10-11 11:42:42.354955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.786 [2024-10-11 11:42:42.354987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.786 [2024-10-11 11:42:42.355021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.786 [2024-10-11 11:42:42.355054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.786 [2024-10-11 11:42:42.355093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.786 [2024-10-11 11:42:42.355129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.786 [2024-10-11 11:42:42.355160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.355192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.355227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.355257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.355283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.355313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.355343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.355375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.355401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.355432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.355464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.355490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.355515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.355540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.355564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.355589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.355613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 
[2024-10-11 11:42:42.355638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.355669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.355700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.355730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.355763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.355900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.355934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.355967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.355999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.356030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.356474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.356505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.356538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.356572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.356609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.356641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.356674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.356705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.356735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.356769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.356800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.356832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.356862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.356893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.356926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.356962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.356994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.357025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.357056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.357087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.357118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.357149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.357180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.357219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.357250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.357279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.357310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.357343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.357372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.357405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.357434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.357470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.357503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.357535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.357566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.357597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.357628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.357666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.357697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.357728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.357778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.357808] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.357838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.357877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.357907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.357938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.787 [2024-10-11 11:42:42.357974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.358006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.358037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.358076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.358108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.358146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.358176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.358204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.358234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.358268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.358301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.358334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.358484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.358515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.358551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.358579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.358610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.358641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.358674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.358709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.358742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 
[2024-10-11 11:42:42.358786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.358817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.358845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.358878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.358911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.358944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.358984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.359017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.359045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.359085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.359136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.359193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.359227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.359258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.359316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.359348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.359381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.359424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.359455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.359485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.359517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.359549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.359580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.359615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.359647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.359677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.359710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.359740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.359771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.359806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.359836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.359863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.359894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.359922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.359958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.359987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.360019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.360055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.360091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.360129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.360159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.360187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.360217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.360250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.360282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.360313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.360345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.360375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.360408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.360435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.360464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.360497] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.360531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.360562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.360594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.360741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.360773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.360805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.360835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.360867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.361349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.361384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.361417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.361450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.361483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.361515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.361543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.361573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.361605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.361639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.361671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.788 [2024-10-11 11:42:42.361700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.361732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.361764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.361793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.361823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.361855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 
[2024-10-11 11:42:42.361890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.361922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.361961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.361992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.362026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.362059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.362096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.362136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.362168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.362199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.362233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.362264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.362296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.362327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.362361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.362390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.362420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.362449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.362480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.362510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.362541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.362586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.362616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.362647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.362681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.362713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.362743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.362782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.362813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.362849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.362880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.362910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.362947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.362980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.363010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.363036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.363065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.363096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.363125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.363154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.363188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.363229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.363268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.363300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.363330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.363371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.363398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.363535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.363566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.363598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.363626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.363671] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.363702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.363731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.363764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.363794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.363828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.363866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:05:39.789 [2024-10-11 11:42:42.364167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.364201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.364233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.364269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.364299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.364330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.364363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.364396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.364428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.364465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.364496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.364527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.364565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.364596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.364631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.364660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.364690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.364725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.789 [2024-10-11 11:42:42.364758] ctrlr_bdev.c: 
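The entry repeated above is the nvmf read-path length validation being hit on every request: each read asks for NLB 1 * block size 512 = 512 bytes of data but supplies only a 1-byte SGL, so the command is rejected before reaching the backing bdev and completes with sct=0, sc=15 (the generic NVMe "Data SGL Length Invalid" status reported in the suppressed-message line). A minimal C sketch of that check, assuming only what the log line itself states; the function and parameter names below are hypothetical and this is not the actual ctrlr_bdev.c code:

/* Sketch of the length check implied by the repeated log line above. */
#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool
read_cmd_length_is_valid(uint64_t num_blocks, uint32_t block_size, uint32_t sgl_length)
{
	if (num_blocks * block_size > sgl_length) {
		/* Requested transfer does not fit in the SGL supplied with the request. */
		fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu32 " > SGL length %" PRIu32 "\n",
			num_blocks, block_size, sgl_length);
		return false; /* the caller would complete the command with an error status */
	}
	return true;
}

int
main(void)
{
	/* The case seen in the log: 1 block of 512 bytes against a 1-byte SGL
	 * fails the check, so the error line is printed once. */
	(void)read_cmd_length_is_valid(1, 512, 1);
	return 0;
}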
00:05:39.789 [... the same *ERROR* line from ctrlr_bdev.c:361 keeps repeating, timestamps 11:42:42.364788 through 11:42:42.370423 (log clock 00:05:39.789-00:05:39.791); duplicate entries elided ...]
00:05:39.791 [2024-10-11 11:42:42.370454] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.370485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.370514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.370545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.370579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.370608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.370641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.370810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.370842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.370876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.370906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.370937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.370967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.370999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.371033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.371069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.371106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.371135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.371166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.371203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.371236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.371268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.371302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.371329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.371366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.371400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 
[2024-10-11 11:42:42.371432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.371467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.371501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.371534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.371565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.371591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.371626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.371656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.371685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.371719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.371749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.371780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.371806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.371839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.371874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.371905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.371938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.371969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.371999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.372031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.372070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.372104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.372134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.372165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.372197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.372239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.372270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.372299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.372334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.372367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.372400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.372438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.372473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.372505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.372545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.372579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.372613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.372645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.372673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.372704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.372736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.372772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.372804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.372836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.372874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.373014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.373047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.373084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.373115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.373147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.373180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.373210] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.791 [2024-10-11 11:42:42.373240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.373276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.373305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.373336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.373777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.373810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.373844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.373879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.373909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.373939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.373971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.374002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.374038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.374071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.374097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.374131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.374169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.374198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.374228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.374258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.374291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.374318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.374353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.374384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.374421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 
[2024-10-11 11:42:42.374452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.374483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.374516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.374547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.374578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.374619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.374649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.374681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.374712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.374744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.374773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.374804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.374836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.374870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.374899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.374932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.374972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.375005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.375034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.375086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.375120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.375149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.375178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.375210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.375246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.375277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.375305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.375336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.375367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.375405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.375434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.375463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.375497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.375527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.375556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.375586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.375623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.375655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.375682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.375714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.375744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.375779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.375812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.375952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.375984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.376017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.376045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.376083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.376119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.376151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.376182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.376226] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.376264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.376295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.376330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.376363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.376394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.376429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.376460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.376491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.376529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.376560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.376590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.376620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.376653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.376988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.377021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.377054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.377091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.377125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.377159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.377191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.377217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.377249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.377285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.377312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.377350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 
[2024-10-11 11:42:42.377383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.377413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.792 [2024-10-11 11:42:42.377442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.377475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.377506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.377534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.377563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.377594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.377626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.377656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.377693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.377723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.377750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.377781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.377819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.377845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.377875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.377907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.377939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.377971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.378004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.378036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.378076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.378108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.378138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.378169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.378206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.378238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.378275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.378312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.378346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.378378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.378411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.378441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.378472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.378505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.378536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.378578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.378610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.378638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.378675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.378707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.378737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.378773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.378804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.378835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.378871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.378901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.378931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.378990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.379019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.379051] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.379194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.379224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.379255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.379290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.379322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.379352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.379382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.379414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.379450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.379483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.379516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.379548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.379578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.379610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.379639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.379670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.379703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.379729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.379764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.379795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.379824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.379855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.379892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.379927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.379958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 
[2024-10-11 11:42:42.379994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.380024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.380055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.380096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.380128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.380163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.380192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.380230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.380258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.380286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.380317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.380349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.380382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.380415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.380446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.380477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.793 [2024-10-11 11:42:42.380893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.380925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.380955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.380989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.381020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.381052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.381089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.381119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.381149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.381188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.381223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.381255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.381287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.381322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.381353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.381385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.381417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.381448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.381475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.381508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.381543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.381574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.381613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.381644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.381676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.381710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.381739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.381775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.381806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.381839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.381870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.381901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.381933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.381964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.381996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.382028] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.382061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.382096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.382126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.382159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.382192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.382223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.382255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.382287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.382319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.382356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.382387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.382422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.382455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.382495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.382527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.382559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.382591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.382621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.382658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.382688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.382720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.382751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.382783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.382824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 [2024-10-11 11:42:42.382854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.794 
[2024-10-11 11:42:42.382883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
(the same *ERROR* line from ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd repeats for every read submitted in this burst, with timestamps advancing from 11:42:42.382883 onward; the duplicate lines are elided)
00:05:39.797 true
00:05:39.799 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
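For context, the flood above is the NVMe-oF target rejecting reads whose data buffer is too small: the read handler compares the requested transfer (NLB * block size) against the length described by the command's SGL and fails the request when the buffer cannot hold it, which matches the paired completion status in the suppressed message (sct=0, sc=15, i.e. Data SGL Length Invalid in NVMe terms). The sketch below is only a minimal reproduction of that check using made-up types and names (read_cmd, validate_read_length); it is not the SPDK source.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define SCT_GENERIC                0x0  /* generic command status type                */
#define SC_DATA_SGL_LENGTH_INVALID 0xf  /* matches the sc=15 seen in the completions   */

struct read_cmd {
        uint64_t nlb;        /* number of logical blocks requested                     */
        uint32_t block_size; /* namespace block size in bytes                          */
        uint32_t sgl_length; /* total data length described by the request SGL        */
};

/* Returns 0 and fills sct/sc when the command must be rejected, 1 when it may proceed. */
static int validate_read_length(const struct read_cmd *cmd, int *sct, int *sc)
{
        if (cmd->nlb * cmd->block_size > cmd->sgl_length) {
                fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu32 " > SGL length %" PRIu32 "\n",
                        cmd->nlb, cmd->block_size, cmd->sgl_length);
                *sct = SCT_GENERIC;
                *sc = SC_DATA_SGL_LENGTH_INVALID;
                return 0;
        }
        return 1;
}

int main(void)
{
        /* The values from the log: NLB 1, block size 512, SGL length 1. */
        struct read_cmd cmd = { .nlb = 1, .block_size = 512, .sgl_length = 1 };
        int sct, sc;

        if (!validate_read_length(&cmd, &sct, &sc)) {
                printf("Read completed with error (sct=%d, sc=%d)\n", sct, sc);
        }
        return 0;
}

Run as written, this prints the same error text and the (sct=0, sc=15) completion seen throughout the burst, so the repeated lines simply reflect the unit test driving many undersized reads, not a target crash.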
11:42:42.401357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.401387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.401416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.401446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.401476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.401508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.401536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.401566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.401596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.401625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.401653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.401678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.401710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.401738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.401766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.401796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.401824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.401861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.401890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.401922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.401958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.401990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.402019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.402049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.402081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.402111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:05:39.799 [2024-10-11 11:42:42.402139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.402166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.402195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.402225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.402260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.402290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.402324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.402355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.402383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.402410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.402442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.402476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.402506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.402534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.402566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.402595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.402623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.402650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.402678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.402706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.402737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.402767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.402794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.402824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.402855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.402885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.402911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.402939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.402971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.403000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.403028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.403055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.403258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.403290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.403322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.403352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.403385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.403412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.403443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.799 [2024-10-11 11:42:42.403474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.403502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.403533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.403567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.403595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.403626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.403655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.403684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.403712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.403741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.403768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.403795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.403827] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.403856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.403891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.403920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.403979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.404006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.404036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.404069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.404106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.404153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.404184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.404214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.404242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.404272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.404301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.404328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.404358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.404386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.404416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.404445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.404480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.404516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.404556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.404591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.404620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.404655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 
[2024-10-11 11:42:42.404685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.404716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.404740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.404770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.404799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.404828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.404856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.404881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.404909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.404941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.404971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.405002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.405034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.405068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.405100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.405130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.405161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.405188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.405219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.405349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.405387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.405418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.405450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.405479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.405927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.405957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.405988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.406018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.406050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.406083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.406121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.406150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.406186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.406215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.406272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.406303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.406343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.406371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.406418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.406447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.406476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.406505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.406534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.406564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.406599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.406628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.406656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.406683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.406709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.406739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.406766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.406800] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.406828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.406855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.406885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.406912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.406949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.406989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.407018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.407044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.407076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.407102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.407126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.407155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.407183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.800 [2024-10-11 11:42:42.407210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.407238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.407264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.407292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.407321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.407347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.407375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.407398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.407428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.407457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.407490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.407518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 
[2024-10-11 11:42:42.407548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.407578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.407608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.407638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.407668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.407701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.407728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.407758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.407787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.407814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.407845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.407972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.408000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.408029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.408065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.408112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.408143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.408176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.408204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.408233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.408263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.408291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.408382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.408416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.408464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.408494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.408525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.408552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.408583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.408616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.408644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.408673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.408702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.408729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.408757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.408787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.408813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.408841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.408882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.408913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.408948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.408978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.409006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.409032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.409070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.409108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.409135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.409162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.409190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.409217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.409245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.409273] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.409299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.409325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.409358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.409394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.409429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.409467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.409505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.409540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.409566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.409594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.409624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.409653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.409688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.409718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.409746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.409779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.409808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.409837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.409863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.409894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.409924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.409954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.410334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.410366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.410402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 
[2024-10-11 11:42:42.410429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.410458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.410485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.410512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.410541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.801 [2024-10-11 11:42:42.410575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.410605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.410652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.410681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.410714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.410741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.410772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.410800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.410828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.410856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.410884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.410915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.410942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.410971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.411002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.411027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.411058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.411091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.411124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.411149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.411177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.411211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.411238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.411269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.411296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.411321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.411351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.411374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.411404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.411431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.411460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.411486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.411512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.411544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.411574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.411603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.411631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.411654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.411683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.411713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.411742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.411770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.411799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.411829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.411861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.411892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.411920] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.411947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.411977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.412006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.412046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.412083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.412115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.412146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.412175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.412205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.412339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.412375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.412405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.412433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.412462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.412491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.412519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.412547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.412573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.412602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.412629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.413085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.413115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.413145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.413172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.413199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 
[2024-10-11 11:42:42.413227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.413254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.413283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.413313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.413340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.413370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.413402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.413435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.413464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.413491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.413515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.413547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.413579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.413606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.413635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.413666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.413697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.413731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.413769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.413807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.413838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.413873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.413907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.413931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.413963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.414000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.414035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.414073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.414111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.414137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.414164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.414193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.802 [2024-10-11 11:42:42.414224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.803 [2024-10-11 11:42:42.414254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.803 [2024-10-11 11:42:42.414283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.803 [2024-10-11 11:42:42.414314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.803 [2024-10-11 11:42:42.414344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.803 [2024-10-11 11:42:42.414371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.803 [2024-10-11 11:42:42.414402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.803 [2024-10-11 11:42:42.414429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.803 [2024-10-11 11:42:42.414455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.803 [2024-10-11 11:42:42.414483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.803 [2024-10-11 11:42:42.414510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.803 [2024-10-11 11:42:42.414554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.803 [2024-10-11 11:42:42.414585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.803 [2024-10-11 11:42:42.414630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.803 [2024-10-11 11:42:42.414660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.803 [2024-10-11 11:42:42.414689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.803 [2024-10-11 11:42:42.414721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.803 [2024-10-11 11:42:42.414753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.803 [2024-10-11 11:42:42.414782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:39.803 [2024-10-11 11:42:42.414813] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:05:39.803 [2024-10-11 11:42:42.414842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:05:39.803 [... the same ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd *ERROR* line repeats continuously while the ns_hotplug_stress test runs ...] 
00:05:39.804 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1711804 
00:05:39.804 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:05:39.804 [... the *ERROR* line keeps repeating until 2024-10-11 11:42:42.434099 ...] 
00:05:40.080 
[2024-10-11 11:42:42.434132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.080 [2024-10-11 11:42:42.434590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.080 [2024-10-11 11:42:42.434622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.080 [2024-10-11 11:42:42.434648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.080 [2024-10-11 11:42:42.434679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.080 [2024-10-11 11:42:42.434708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.080 [2024-10-11 11:42:42.434738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.080 [2024-10-11 11:42:42.434770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.080 [2024-10-11 11:42:42.434798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.080 [2024-10-11 11:42:42.434832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.080 [2024-10-11 11:42:42.434863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.080 [2024-10-11 11:42:42.434889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.080 [2024-10-11 11:42:42.434914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.080 [2024-10-11 11:42:42.434940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.080 [2024-10-11 11:42:42.434967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.080 [2024-10-11 11:42:42.434992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.080 [2024-10-11 11:42:42.435018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.080 [2024-10-11 11:42:42.435051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.080 [2024-10-11 11:42:42.435085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.080 [2024-10-11 11:42:42.435119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.080 [2024-10-11 11:42:42.435149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.080 [2024-10-11 11:42:42.435178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.080 [2024-10-11 11:42:42.435208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.080 [2024-10-11 11:42:42.435238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.080 [2024-10-11 11:42:42.435269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.080 [2024-10-11 11:42:42.435294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:40.080 [2024-10-11 11:42:42.435318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.080 [2024-10-11 11:42:42.435342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.080 [2024-10-11 11:42:42.435367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.080 [2024-10-11 11:42:42.435393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.080 [2024-10-11 11:42:42.435418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.080 [2024-10-11 11:42:42.435443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.080 [2024-10-11 11:42:42.435473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.080 [2024-10-11 11:42:42.435506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.080 [2024-10-11 11:42:42.435537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.080 [2024-10-11 11:42:42.435567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.435597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.435628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.435658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.435690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.435733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.435764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.435793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.435827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.435857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.435891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.435920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.435951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.435986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.436018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.436050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.436097] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.436128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.436159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.436193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.436224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.436257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.436288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.436318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.436351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.436381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.436412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.436444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.436474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.436504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.436643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.436680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.436710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.436741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.436768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.436798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.436827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.436861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.436894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.436926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.436960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.437039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 
[2024-10-11 11:42:42.437068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.437099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.437126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.437157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.437184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:05:40.081 [2024-10-11 11:42:42.437216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.437254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.437281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.437311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.437340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.437376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.437407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.437433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.437463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.437497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.437529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.437561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.437592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.437622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.437653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.437683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.437712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.437744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.437786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.437818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 
11:42:42.437848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.437908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.437937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.437967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.438001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.438034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.438071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.438101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.438134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.438168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.438199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.438228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.438263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.438292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.438323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.438360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.438394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.438429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.438459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.438490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.438521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.081 [2024-10-11 11:42:42.438553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.438585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.438617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.438647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.438698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:05:40.082 [2024-10-11 11:42:42.439067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.439097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.439124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.439151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.439184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.439217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.439251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.439283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.439322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.439349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.439379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.439410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.439440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.439483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.439520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.439549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.439583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.439612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.439640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.439676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.439707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.439739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.439770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.439799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.439828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.439863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.439895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.439924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.439958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.439986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.440016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.440048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.440083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.440120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.440146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.440179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.440214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.440246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.440274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.440321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.440354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.440383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.440415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.440446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.440494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.440527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.440558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.440596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.440629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.440660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.440691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.440724] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.440756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.440787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.440818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.440849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.440878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.440928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.440960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.440988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.441017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.441046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.441083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.441114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.441345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.441387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.441415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.441444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.441472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.441504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.441535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.441563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.441592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.441625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.441655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.442177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.442221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 
[2024-10-11 11:42:42.442250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.442280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.442309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.442338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.442370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.442405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.442433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.442468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.442499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.442538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.442569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.442599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.442630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.442661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.442692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.442720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.082 [2024-10-11 11:42:42.442753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.442783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.442814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.442844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.442874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.442912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.442941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.442975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.443005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.443034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.443068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.443101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.443134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.443163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.443193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.443223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.443253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.443284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.443323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.443354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.443382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.443410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.443443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.443475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.443506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.443546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.443579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.443611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.443641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.443670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.443709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.443737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.443766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.443802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.443827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.443860] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.443888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.443918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.443948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.443982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.444010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.444041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.444077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.444110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.444141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.444170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.444306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.444336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.444366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.444399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.444431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.444537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.444569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.444599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.444636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.444666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.444697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.444733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.444763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.444806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.444838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 
[2024-10-11 11:42:42.444869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.444899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.444928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.444959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.444988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.445038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.445073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.445103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.445133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.445164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.445198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.445228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.445260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.445292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.445326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.445357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.445389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.445421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.445452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.445481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.445512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.445545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.445572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.445610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.445640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.445665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.445699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.445734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.445763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.445806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.445837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.445864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.445894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.445921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.445957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.445990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.446021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.446051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.446078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.446113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.446150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.446184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.083 [2024-10-11 11:42:42.446214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.084 [2024-10-11 11:42:42.446249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.084 [2024-10-11 11:42:42.446283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.084 [2024-10-11 11:42:42.446314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.084 [2024-10-11 11:42:42.446343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.084 [2024-10-11 11:42:42.446369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.084 [2024-10-11 11:42:42.446837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.084 [2024-10-11 11:42:42.446868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.084 [2024-10-11 11:42:42.446899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.084 [2024-10-11 11:42:42.446929] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.084 [2024-10-11 11:42:42.446961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.084 [... the same ctrlr_bdev.c:361 nvmf_bdev_ctrlr_read_cmd *ERROR* line ("Read NLB 1 * block size 512 > SGL length 1") repeats continuously from 11:42:42.446961 through 11:42:42.466604 while the console timestamp advances from 00:05:40.084 to 00:05:40.089; repeated occurrences condensed ...] 00:05:40.089 [2024-10-11 11:42:42.466604] ctrlr_bdev.c:
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.089 [2024-10-11 11:42:42.466633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.089 [2024-10-11 11:42:42.466668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.089 [2024-10-11 11:42:42.466700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.089 [2024-10-11 11:42:42.466729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.089 [2024-10-11 11:42:42.466760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.089 [2024-10-11 11:42:42.466898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.089 [2024-10-11 11:42:42.466942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.089 [2024-10-11 11:42:42.466974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.089 [2024-10-11 11:42:42.467009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.089 [2024-10-11 11:42:42.467040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.089 [2024-10-11 11:42:42.467071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.089 [2024-10-11 11:42:42.467102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.089 [2024-10-11 11:42:42.467132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.089 [2024-10-11 11:42:42.467164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.089 [2024-10-11 11:42:42.467200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.089 [2024-10-11 11:42:42.467229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.089 [2024-10-11 11:42:42.467525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.089 [2024-10-11 11:42:42.467553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.089 [2024-10-11 11:42:42.467582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.089 [2024-10-11 11:42:42.467612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.089 [2024-10-11 11:42:42.467646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.089 [2024-10-11 11:42:42.467678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.089 [2024-10-11 11:42:42.467709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.089 [2024-10-11 11:42:42.467739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.089 [2024-10-11 11:42:42.467771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.089 
[2024-10-11 11:42:42.467805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.089 [2024-10-11 11:42:42.467836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.089 [2024-10-11 11:42:42.467863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.089 [2024-10-11 11:42:42.467894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.089 [2024-10-11 11:42:42.467920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.089 [2024-10-11 11:42:42.467947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.089 [2024-10-11 11:42:42.467980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.089 [2024-10-11 11:42:42.468008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.089 [2024-10-11 11:42:42.468036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.089 [2024-10-11 11:42:42.468061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.089 [2024-10-11 11:42:42.468103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.089 [2024-10-11 11:42:42.468144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.089 [2024-10-11 11:42:42.468179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.089 [2024-10-11 11:42:42.468206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.089 [2024-10-11 11:42:42.468230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.089 [2024-10-11 11:42:42.468255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.089 [2024-10-11 11:42:42.468279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.089 [2024-10-11 11:42:42.468304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.089 [2024-10-11 11:42:42.468329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.468354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.468387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.468420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.468452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.468481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.468513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.468546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.468574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.468604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.468635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.468664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.468695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.468726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.468769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.468800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.468830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.468861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.468890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.468927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.468955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.468985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.469028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.469057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.469091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.469124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.469157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.469190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.469222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.469253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.469287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.469316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.469378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.469410] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.469440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.469470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.469499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.469653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.469700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.469731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.469766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.469795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.469899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.469931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.469962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.470001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.470029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.470058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.470100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.470138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.470167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.470196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.470226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.470253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.470282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.470316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.470346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.470378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.470407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 
[2024-10-11 11:42:42.470440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.470474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.470506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.470535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.470570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.470602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.470633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.470663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.470696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.470726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.470757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.470790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.470821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.470851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.470883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.470914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.470946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.470975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.471007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.471050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.471084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.471115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.471150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.471181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.471219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.471253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.471284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.471318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.471349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.471379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.090 [2024-10-11 11:42:42.471410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.091 [2024-10-11 11:42:42.471441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.091 [2024-10-11 11:42:42.471475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.091 [2024-10-11 11:42:42.471505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.091 [2024-10-11 11:42:42.471538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.091 [2024-10-11 11:42:42.471568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.091 [2024-10-11 11:42:42.471599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.091 [2024-10-11 11:42:42.471633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.091 [2024-10-11 11:42:42.471665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.091 [2024-10-11 11:42:42.471694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.091 [2024-10-11 11:42:42.471729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.091 [2024-10-11 11:42:42.472091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.091 [2024-10-11 11:42:42.472130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.091 [2024-10-11 11:42:42.472161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.091 [2024-10-11 11:42:42.472186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.091 [2024-10-11 11:42:42.472218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.091 [2024-10-11 11:42:42.472248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.091 [2024-10-11 11:42:42.472281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.091 [2024-10-11 11:42:42.472312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.091 [2024-10-11 11:42:42.472343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.091 [2024-10-11 11:42:42.472371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.091 [2024-10-11 11:42:42.472416] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.091 [2024-10-11 11:42:42.472446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.091 [2024-10-11 11:42:42.472474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.091 [2024-10-11 11:42:42.472513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.091 [2024-10-11 11:42:42.472542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.091 [2024-10-11 11:42:42.472572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.091 [2024-10-11 11:42:42.472601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.091 [2024-10-11 11:42:42.472636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.091 [2024-10-11 11:42:42.472666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.091 [2024-10-11 11:42:42.472696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.091 [2024-10-11 11:42:42.472726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.091 [2024-10-11 11:42:42.472757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.091 [2024-10-11 11:42:42.472787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.091 [2024-10-11 11:42:42.472817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.091 [2024-10-11 11:42:42.472848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.091 [2024-10-11 11:42:42.472876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.091 [2024-10-11 11:42:42.472906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.091 [2024-10-11 11:42:42.472935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.091 [2024-10-11 11:42:42.472965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.091 [2024-10-11 11:42:42.472992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.091 [2024-10-11 11:42:42.473022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.091 [2024-10-11 11:42:42.473054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.091 [2024-10-11 11:42:42.473090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.091 [2024-10-11 11:42:42.473115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.091 [2024-10-11 11:42:42.473148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:40.091 [2024-10-11 11:42:42.473178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.033 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.034 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.034 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.034 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.034 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.034 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.034 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.034 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.034 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:41.034 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:41.294 true 00:05:41.294 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1711804 00:05:41.294 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.235 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:42.235 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:42.235 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:42.235 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:42.235 11:42:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:42.495 true 00:05:42.495 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1711804 00:05:42.495 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.757 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:42.757 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:42.757 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:43.018 true 00:05:43.018 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1711804 00:05:43.018 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.278 11:42:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.278 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:43.278 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:43.539 true 00:05:43.539 11:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1711804 00:05:43.539 11:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.799 11:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.799 11:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:43.799 11:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:44.060 true 00:05:44.060 11:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1711804 00:05:44.060 11:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.321 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:44.321 11:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.321 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:44.321 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:44.321 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:44.321 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:44.321 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:44.321 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:44.581 11:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:44.581 11:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:44.581 true 00:05:44.581 11:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1711804 00:05:44.581 11:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.524 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:45.524 11:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.524 11:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:45.524 11:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:45.785 true 00:05:45.785 11:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1711804 00:05:45.785 11:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.047 11:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:46.307 11:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:46.307 11:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:46.307 true 00:05:46.307 11:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1711804 00:05:46.308 11:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.568 11:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:46.829 11:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:46.829 11:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:46.829 true 00:05:46.829 11:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1711804 00:05:46.829 11:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.089 11:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.349 11:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:47.349 11:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:47.349 true 00:05:47.349 11:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1711804 00:05:47.349 11:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.609 11:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.869 11:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:47.869 11:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:47.869 true 00:05:47.869 11:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1711804 00:05:47.870 11:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.130 11:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.390 11:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:48.390 11:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:48.390 true 00:05:48.651 11:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1711804 00:05:48.651 11:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.651 11:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.911 11:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:05:48.911 11:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:05:49.172 true 00:05:49.172 11:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1711804 00:05:49.172 11:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.172 11:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.432 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:05:49.432 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:05:49.693 true 00:05:49.693 11:42:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1711804 00:05:49.693 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.693 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.954 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:05:49.954 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:05:50.214 true 00:05:50.214 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1711804 00:05:50.214 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.214 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.475 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:05:50.475 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:05:50.735 true 00:05:50.735 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1711804 00:05:50.735 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.996 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.996 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:05:50.996 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:05:51.257 true 00:05:51.257 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1711804 00:05:51.257 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.518 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.518 11:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:05:51.518 11:42:54 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:05:51.778 true 00:05:51.779 11:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1711804 00:05:51.779 11:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.039 11:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.039 11:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:05:52.039 11:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:05:52.299 true 00:05:52.299 11:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1711804 00:05:52.299 11:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.560 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.560 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:05:52.560 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:05:52.821 true 00:05:52.821 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1711804 00:05:52.821 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.082 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:53.082 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.082 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:53.082 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:53.082 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:53.082 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:53.082 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:53.082 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:53.342 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:05:53.342 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1031 00:05:53.342 true 00:05:53.342 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1711804 00:05:53.342 11:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.284 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:54.284 11:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.284 11:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:05:54.284 11:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:05:54.545 true 00:05:54.545 11:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1711804 00:05:54.545 11:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.805 11:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.066 11:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:05:55.066 11:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:05:55.066 true 00:05:55.066 11:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1711804 00:05:55.066 11:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.327 11:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.587 11:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:05:55.587 11:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:05:55.587 true 00:05:55.587 11:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1711804 00:05:55.587 11:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.848 11:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.109 11:42:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:05:56.109 11:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:05:56.109 true 00:05:56.109 11:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1711804 00:05:56.109 11:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.369 11:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.630 11:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:05:56.630 11:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:05:56.630 true 00:05:56.891 11:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1711804 00:05:56.891 11:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.891 11:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.152 11:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:05:57.152 11:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:05:57.413 true 00:05:57.413 11:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1711804 00:05:57.413 11:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.355 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:58.355 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:58.355 11:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:58.355 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:58.616 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:58.616 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:58.616 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:58.616 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:58.616 11:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:05:58.616 11:43:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:05:58.877 true 00:05:58.877 11:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1711804 00:05:58.877 11:43:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:59.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:59.819 11:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.819 11:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:05:59.819 11:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:06:00.080 true 00:06:00.080 11:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1711804 00:06:00.080 11:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.080 11:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:00.340 11:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:06:00.340 11:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:06:00.600 true 00:06:00.600 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1711804 00:06:00.600 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.861 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:00.861 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:06:00.861 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:06:01.121 true 00:06:01.121 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1711804 00:06:01.121 11:43:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.382 11:43:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:01.382 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:06:01.382 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:06:01.706 true 00:06:01.706 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1711804 00:06:01.706 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.753 Initializing NVMe Controllers 00:06:02.753 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:02.753 Controller IO queue size 128, less than required. 00:06:02.753 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:02.753 Controller IO queue size 128, less than required. 00:06:02.753 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:02.753 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:02.753 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:06:02.753 Initialization complete. Launching workers. 00:06:02.753 ======================================================== 00:06:02.753 Latency(us) 00:06:02.753 Device Information : IOPS MiB/s Average min max 00:06:02.753 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2656.40 1.30 20366.02 1352.09 1051749.22 00:06:02.753 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 12305.37 6.01 10401.60 1119.27 399664.41 00:06:02.753 ======================================================== 00:06:02.753 Total : 14961.77 7.31 12170.74 1119.27 1051749.22 00:06:02.753 00:06:02.753 11:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:03.016 11:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:06:03.016 11:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:06:03.016 true 00:06:03.016 11:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1711804 00:06:03.016 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1711804) - No such process 00:06:03.016 11:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1711804 00:06:03.016 11:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.277 11:43:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:03.538 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:06:03.538 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:06:03.538 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:03.538 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:03.538 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:03.538 null0 00:06:03.538 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:03.538 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:03.538 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:03.800 null1 00:06:03.800 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:03.800 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:03.800 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:04.060 null2 00:06:04.060 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:04.060 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:04.060 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:04.060 null3 00:06:04.321 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:04.321 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:04.321 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:04.321 null4 00:06:04.321 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:04.321 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:04.321 11:43:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:04.581 null5 00:06:04.581 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:04.581 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:04.582 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:04.842 null6 00:06:04.842 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:04.842 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:04.842 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:04.842 null7 00:06:04.842 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:04.842 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:04.842 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:04.842 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:04.842 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:04.842 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:04.842 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:04.842 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:04.842 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:04.842 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:04.842 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.842 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:04.842 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:04.842 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:04.842 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:04.842 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:04.842 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:04.842 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:04.843 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.843 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
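Editorial note: the xtrace entries around this point show the test switching into its parallel phase: eight null bdevs (null0..null7) are created and eight add_remove workers are launched in the background, each repeatedly hot-adding and hot-removing its own namespace ID on nqn.2016-06.io.spdk:cnode1 while the others do the same. A minimal bash sketch of that pattern, reconstructed only from the trace above (the $rpc and $nqn variable names and the exact loop shape are assumptions; the real ns_hotplug_stress.sh in the SPDK tree may differ in detail):

    # Sketch reconstructed from the xtrace above -- not the verbatim script.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    add_remove() {
        # Hot-add and hot-remove the same namespace ID ten times (sh@14-18 in the trace).
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"
        done
    }

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        # One 100 MiB null bdev with a 4096-byte block size per worker (sh@60).
        "$rpc" bdev_null_create "null$i" 100 4096
    done
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &   # sh@62-64: one background worker per namespace ID
        pids+=($!)
    done
    wait "${pids[@]}"                      # sh@66: the worker PIDs listed in the trace

The interleaved nvmf_subsystem_add_ns / nvmf_subsystem_remove_ns entries that follow are those eight workers racing against each other on the same subsystem.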
00:06:04.843 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:04.843 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:04.843 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:04.843 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:04.843 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:04.843 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:04.843 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.843 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:04.843 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:04.843 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:04.843 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:04.843 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:04.843 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:04.843 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.843 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:04.843 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:04.843 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:04.843 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:04.843 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:04.843 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:04.843 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:04.843 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:04.843 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:04.843 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:04.843 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.843 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:04.843 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:04.843 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:04.843 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:04.843 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:04.843 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:04.843 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:04.843 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:04.843 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.843 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:04.843 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:04.843 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
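For comparison, the single-namespace phase traced further up (where NULL1 was resized step by step from 1031 to 1043 while process 1711804 was still running) follows the shape of ns_hotplug_stress.sh lines 44-50. A hedged sketch, again reconstructed from the trace only; the loop condition, the io_pid name and the starting size are assumptions:

    # Sketch of the earlier resize loop, inferred from sh@44-50 in the trace.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    io_pid=1711804        # background I/O job started earlier in the log
    null_size=1024        # assumed initial size; the visible trace picks up at 1031
    while kill -0 "$io_pid"; do                      # sh@44: loop until the I/O job exits
        "$rpc" nvmf_subsystem_remove_ns "$nqn" 1     # sh@45
        "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0   # sh@46
        null_size=$((null_size + 1))                 # sh@49
        "$rpc" bdev_null_resize NULL1 "$null_size"   # sh@50
    done
    wait "$io_pid"    # sh@53: reached once kill -0 reports "No such process", as seen in the trace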
00:06:04.843 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:04.843 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:04.843 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:04.843 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:04.843 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1718646 1718647 1718649 1718651 1718654 1718655 1718656 1718658 00:06:04.843 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.843 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:04.843 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:04.843 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:04.843 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:04.843 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.843 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:05.104 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:05.104 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.104 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:05.104 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:05.104 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:05.104 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:05.104 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:05.104 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:05.365 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.365 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.365 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:05.365 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.365 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.365 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:05.365 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.365 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.365 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:05.365 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.365 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.365 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:05.365 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.365 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.365 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:05.365 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.365 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.365 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:05.365 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.365 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.365 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:05.365 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.365 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.365 11:43:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:05.365 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:05.365 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.626 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:05.626 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:05.626 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:05.626 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:05.626 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:05.626 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:05.626 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.626 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.626 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:05.626 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.626 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.626 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:05.626 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.626 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.626 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:05.626 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.626 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.626 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:05.626 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.626 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.626 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:05.626 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.626 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.626 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:05.626 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.626 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.626 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:05.887 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.887 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.887 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:05.887 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:05.887 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:05.887 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:05.887 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:05.887 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:05.887 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.887 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:05.887 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:05.887 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.887 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.887 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:06.147 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.147 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.147 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:06.147 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.147 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.147 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:06.147 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.147 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.147 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:06.147 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.147 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.147 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:06.147 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.147 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.147 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:06.147 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.147 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.147 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:06.147 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.147 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.147 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:06.147 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:06.147 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:06.147 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:06.147 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:06.147 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:06.147 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.408 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:06.408 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:06.408 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.408 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.408 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:06.408 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.408 11:43:08 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.408 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:06.408 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.408 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.408 11:43:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:06.408 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.408 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.408 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:06.408 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.408 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.408 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:06.408 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.408 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.408 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:06.408 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.408 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.408 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:06.408 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.408 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.408 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:06.668 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:06.668 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:06.668 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.668 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:06.668 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:06.668 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:06.669 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:06.669 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:06.669 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.669 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.669 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:06.669 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.669 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.669 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:06.669 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.669 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.669 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:06.669 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.669 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.669 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:06.929 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.929 11:43:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.929 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:06.929 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.929 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.929 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:06.929 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.929 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.929 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:06.929 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.929 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.929 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:06.929 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:06.929 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.929 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:06.929 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:06.929 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:06.929 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:06.929 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:07.190 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:07.190 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.190 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.190 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:07.190 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.190 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.190 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:07.190 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.190 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.190 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:07.190 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.190 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.190 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.190 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.190 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:07.190 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:07.190 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.190 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.190 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:07.190 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.190 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.190 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:07.190 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.190 11:43:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:07.190 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.190 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:07.190 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.450 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:07.450 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:07.450 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:07.450 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:07.450 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:07.450 11:43:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:07.450 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.450 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.450 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:07.450 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.450 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.450 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:07.450 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.450 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.450 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:07.450 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.450 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.450 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:07.450 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.450 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.450 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:07.450 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.450 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.450 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:07.711 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.711 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.711 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:07.711 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.711 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.711 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.711 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:07.711 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:07.711 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:07.711 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:07.711 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:07.711 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.711 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:07.711 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.711 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:07.711 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:07.711 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:07.711 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.711 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.711 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:07.972 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.972 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.972 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:07.972 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.972 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.972 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:07.972 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.972 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.972 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:07.972 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.972 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.972 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:07.972 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.972 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.972 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:07.972 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:07.972 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.972 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.972 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.972 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:07.972 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:08.232 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:08.232 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:08.232 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:08.232 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.232 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.232 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:08.232 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:08.232 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:08.232 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.232 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( i < 10 )) 00:06:08.232 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:08.232 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.232 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.232 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:08.232 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.232 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.232 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:08.232 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.232 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.232 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:08.232 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.232 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.233 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:08.233 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.233 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.233 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:08.233 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:08.495 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:08.495 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.495 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.495 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:06:08.495 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:08.495 11:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:08.495 11:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.495 11:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.495 11:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:08.495 11:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.495 11:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.495 11:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:08.495 11:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:08.495 11:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:08.755 11:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.755 11:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.755 11:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.755 11:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.755 11:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.755 11:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.755 11:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.755 11:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.755 11:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.755 11:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.755 11:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.755 11:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.755 11:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:08.755 11:43:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:08.755 11:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:06:08.755 11:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:08.755 11:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:08.755 11:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:08.755 11:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:08.755 11:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:08.755 rmmod nvme_tcp 00:06:08.755 rmmod nvme_fabrics 00:06:08.755 rmmod nvme_keyring 00:06:08.755 11:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:08.755 11:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:08.755 11:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:08.755 11:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 1711412 ']' 00:06:08.755 11:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 1711412 00:06:08.755 11:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 1711412 ']' 00:06:08.755 11:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 1711412 00:06:08.755 11:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:06:08.755 11:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:08.755 11:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1711412 00:06:09.015 11:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:09.016 11:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:09.016 11:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1711412' 00:06:09.016 killing process with pid 1711412 00:06:09.016 11:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 1711412 00:06:09.016 11:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 1711412 00:06:09.016 11:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:06:09.016 11:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:06:09.016 11:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:06:09.016 11:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:09.016 11:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:06:09.016 11:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:06:09.016 11:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 
-- # iptables-restore 00:06:09.016 11:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:09.016 11:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:09.016 11:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:09.016 11:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:09.016 11:43:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:11.562 00:06:11.562 real 0m49.650s 00:06:11.562 user 3m17.005s 00:06:11.562 sys 0m16.550s 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:11.562 ************************************ 00:06:11.562 END TEST nvmf_ns_hotplug_stress 00:06:11.562 ************************************ 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:11.562 ************************************ 00:06:11.562 START TEST nvmf_delete_subsystem 00:06:11.562 ************************************ 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:11.562 * Looking for test storage... 
00:06:11.562 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:11.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.562 --rc genhtml_branch_coverage=1 00:06:11.562 --rc genhtml_function_coverage=1 00:06:11.562 --rc genhtml_legend=1 00:06:11.562 --rc geninfo_all_blocks=1 00:06:11.562 --rc geninfo_unexecuted_blocks=1 00:06:11.562 00:06:11.562 ' 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:11.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.562 --rc genhtml_branch_coverage=1 00:06:11.562 --rc genhtml_function_coverage=1 00:06:11.562 --rc genhtml_legend=1 00:06:11.562 --rc geninfo_all_blocks=1 00:06:11.562 --rc geninfo_unexecuted_blocks=1 00:06:11.562 00:06:11.562 ' 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:11.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.562 --rc genhtml_branch_coverage=1 00:06:11.562 --rc genhtml_function_coverage=1 00:06:11.562 --rc genhtml_legend=1 00:06:11.562 --rc geninfo_all_blocks=1 00:06:11.562 --rc geninfo_unexecuted_blocks=1 00:06:11.562 00:06:11.562 ' 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:11.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.562 --rc genhtml_branch_coverage=1 00:06:11.562 --rc genhtml_function_coverage=1 00:06:11.562 --rc genhtml_legend=1 00:06:11.562 --rc geninfo_all_blocks=1 00:06:11.562 --rc geninfo_unexecuted_blocks=1 00:06:11.562 00:06:11.562 ' 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:11.562 11:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.562 11:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.562 11:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.562 11:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:11.562 11:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.562 11:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:11.562 11:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:11.563 11:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:11.563 11:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:11.563 11:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:11.563 11:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:11.563 11:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:11.563 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:11.563 11:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:11.563 11:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:11.563 11:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:11.563 11:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:11.563 11:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:06:11.563 11:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:11.563 11:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:06:11.563 11:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:06:11.563 11:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:06:11.563 11:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:11.563 11:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:11.563 11:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:11.563 11:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:06:11.563 11:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:06:11.563 11:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:11.563 11:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:19.706 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:19.706 
11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:19.706 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:19.706 Found net devices under 0000:31:00.0: cvl_0_0 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:19.706 Found net devices under 0000:31:00.1: cvl_0_1 
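The discovery loop traced above (gather_supported_nvmf_pci_devs in nvmf/common.sh) resolves each detected E810 PCI function to its kernel net device through sysfs and collects the interface names, which is how cvl_0_0 and cvl_0_1 end up in net_devs. A minimal standalone sketch of that lookup follows; the PCI addresses are copied from the log, and skipping functions with no bound netdev is an assumption about the intent:

    # Sketch only (not the real nvmf/common.sh): map PCI functions to net devices via sysfs.
    pci_devs=(0000:31:00.0 0000:31:00.1)   # the two e810 functions reported above
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        # every netdev bound to the function appears as a directory under .../net/
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        [[ -e "${pci_net_devs[0]}" ]] || continue      # assumption: skip functions with no bound netdev
        pci_net_devs=("${pci_net_devs[@]##*/}")        # strip the sysfs path, keep interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done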
00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:19.706 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:19.706 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.612 ms 00:06:19.706 00:06:19.706 --- 10.0.0.2 ping statistics --- 00:06:19.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:19.706 rtt min/avg/max/mdev = 0.612/0.612/0.612/0.000 ms 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:19.706 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:19.706 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:06:19.706 00:06:19.706 --- 10.0.0.1 ping statistics --- 00:06:19.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:19.706 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=1723898 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 1723898 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 1723898 ']' 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:19.706 11:43:21 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:19.706 11:43:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:19.706 [2024-10-11 11:43:21.739384] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:06:19.706 [2024-10-11 11:43:21.739448] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:19.706 [2024-10-11 11:43:21.828518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:19.706 [2024-10-11 11:43:21.880296] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:19.706 [2024-10-11 11:43:21.880349] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:19.706 [2024-10-11 11:43:21.880358] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:19.706 [2024-10-11 11:43:21.880365] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:19.706 [2024-10-11 11:43:21.880372] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:19.706 [2024-10-11 11:43:21.882035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.706 [2024-10-11 11:43:21.882037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.968 11:43:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:19.968 11:43:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:06:19.968 11:43:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:06:19.968 11:43:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:19.968 11:43:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:19.968 11:43:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:19.968 11:43:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:19.968 11:43:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.968 11:43:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:19.968 [2024-10-11 11:43:22.612733] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:19.968 11:43:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.968 11:43:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:19.968 11:43:22 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.968 11:43:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:19.968 11:43:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.968 11:43:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:19.968 11:43:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.968 11:43:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:19.968 [2024-10-11 11:43:22.637077] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:19.968 11:43:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.968 11:43:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:19.968 11:43:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.968 11:43:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:19.968 NULL1 00:06:19.968 11:43:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.968 11:43:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:19.968 11:43:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.968 11:43:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:19.968 Delay0 00:06:19.968 11:43:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.968 11:43:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.968 11:43:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.968 11:43:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:20.229 11:43:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.229 11:43:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1724081 00:06:20.229 11:43:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:20.229 11:43:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:20.229 [2024-10-11 11:43:22.754178] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
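[Note on the trace above] The xtrace up to this point covers the whole target-side setup for nvmf_delete_subsystem. For reference, the same configuration can be reproduced by hand with the RPC sequence below. This is a minimal sketch that calls scripts/rpc.py directly against the default /var/tmp/spdk.sock socket; the test itself goes through its rpc_cmd wrapper while nvmf_tgt runs inside the cvl_0_0_ns_spdk namespace, so paths and comments here are illustrative, not the script's literal contents.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"                                # assumes the default RPC socket /var/tmp/spdk.sock

"$RPC" nvmf_create_transport -t tcp -o -u 8192            # same transport options as NVMF_TRANSPORT_OPTS in the trace
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$RPC" bdev_null_create NULL1 1000 512                    # null backing bdev: 1000 MiB, 512-byte blocks
"$RPC" bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # Delay0 adds artificial latency so requests are still queued when the subsystem is deleted
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

"$SPDK/build/bin/spdk_nvme_perf" -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &             # load generator; deleting cnode1 while it runs is the point of the test
perf_pid=$!

Deleting the subsystem while this perf run still has queued I/O is what produces the burst of error completions immediately after the nvmf_delete_subsystem call below.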
00:06:22.141 11:43:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:22.141 11:43:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.141 11:43:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:22.401 Read completed with error (sct=0, sc=8) 00:06:22.401 starting I/O failed: -6 00:06:22.401 Read completed with error (sct=0, sc=8) 00:06:22.401 Read completed with error (sct=0, sc=8) 00:06:22.401 Read completed with error (sct=0, sc=8) 00:06:22.401 Write completed with error (sct=0, sc=8) 00:06:22.401 starting I/O failed: -6 00:06:22.401 Write completed with error (sct=0, sc=8) 00:06:22.401 Read completed with error (sct=0, sc=8) 00:06:22.401 Read completed with error (sct=0, sc=8) 00:06:22.401 Read completed with error (sct=0, sc=8) 00:06:22.401 starting I/O failed: -6 00:06:22.401 Read completed with error (sct=0, sc=8) 00:06:22.401 Write completed with error (sct=0, sc=8) 00:06:22.401 Read completed with error (sct=0, sc=8) 00:06:22.401 Read completed with error (sct=0, sc=8) 00:06:22.401 starting I/O failed: -6 00:06:22.401 Write completed with error (sct=0, sc=8) 00:06:22.401 Write completed with error (sct=0, sc=8) 00:06:22.401 Read completed with error (sct=0, sc=8) 00:06:22.401 Read completed with error (sct=0, sc=8) 00:06:22.401 starting I/O failed: -6 00:06:22.401 Write completed with error (sct=0, sc=8) 00:06:22.401 Read completed with error (sct=0, sc=8) 00:06:22.401 Read completed with error (sct=0, sc=8) 00:06:22.401 Read completed with error (sct=0, sc=8) 00:06:22.401 starting I/O failed: -6 00:06:22.401 Read completed with error (sct=0, sc=8) 00:06:22.401 Read completed with error (sct=0, sc=8) 00:06:22.401 Read completed with error (sct=0, sc=8) 00:06:22.401 Read completed with error (sct=0, sc=8) 00:06:22.401 starting I/O failed: -6 00:06:22.401 Read completed with error (sct=0, sc=8) 00:06:22.401 Read completed with error (sct=0, sc=8) 00:06:22.401 Read completed with error (sct=0, sc=8) 00:06:22.401 Read completed with error (sct=0, sc=8) 00:06:22.401 starting I/O failed: -6 00:06:22.401 Write completed with error (sct=0, sc=8) 00:06:22.401 Write completed with error (sct=0, sc=8) 00:06:22.401 Read completed with error (sct=0, sc=8) 00:06:22.402 Write completed with error (sct=0, sc=8) 00:06:22.402 starting I/O failed: -6 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Write completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 starting I/O failed: -6 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 starting I/O failed: -6 00:06:22.402 [2024-10-11 11:43:24.879297] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133a1d0 is same with the state(6) to be set 00:06:22.402 Write completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Write completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Write completed with error (sct=0, sc=8) 00:06:22.402 Write 
completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Write completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Write completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Write completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Write completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Write completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Write completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Write completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Write completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Write completed with error (sct=0, sc=8) 00:06:22.402 Write completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Write completed with error (sct=0, sc=8) 00:06:22.402 Write completed with error (sct=0, sc=8) 00:06:22.402 Write completed with error (sct=0, sc=8) 00:06:22.402 Write completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Write completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Write completed with error (sct=0, sc=8) 00:06:22.402 Write completed with error (sct=0, sc=8) 00:06:22.402 starting I/O failed: -6 00:06:22.402 Write completed with error (sct=0, sc=8) 00:06:22.402 Write completed with error (sct=0, sc=8) 00:06:22.402 Write completed with error (sct=0, sc=8) 00:06:22.402 Write completed with error (sct=0, sc=8) 00:06:22.402 starting I/O failed: -6 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 starting I/O failed: -6 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Write completed with error (sct=0, sc=8) 00:06:22.402 Write completed with error (sct=0, sc=8) 00:06:22.402 starting I/O failed: -6 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 starting I/O failed: -6 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Write completed with error (sct=0, sc=8) 00:06:22.402 Read 
completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 starting I/O failed: -6 00:06:22.402 Write completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Write completed with error (sct=0, sc=8) 00:06:22.402 starting I/O failed: -6 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Write completed with error (sct=0, sc=8) 00:06:22.402 Write completed with error (sct=0, sc=8) 00:06:22.402 starting I/O failed: -6 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Write completed with error (sct=0, sc=8) 00:06:22.402 Write completed with error (sct=0, sc=8) 00:06:22.402 starting I/O failed: -6 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 starting I/O failed: -6 00:06:22.402 [2024-10-11 11:43:24.884370] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f1940000c00 is same with the state(6) to be set 00:06:22.402 Write completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Write completed with error (sct=0, sc=8) 00:06:22.402 Write completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Write completed with error (sct=0, sc=8) 00:06:22.402 Write completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Write completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Write completed with error (sct=0, sc=8) 00:06:22.402 Write completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Write completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Write completed with error (sct=0, sc=8) 00:06:22.402 Write completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Write completed with error (sct=0, sc=8) 00:06:22.402 Write completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Write completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 
Write completed with error (sct=0, sc=8) 00:06:22.402 Write completed with error (sct=0, sc=8) 00:06:22.402 Read completed with error (sct=0, sc=8) 00:06:22.402 Write completed with error (sct=0, sc=8) 00:06:22.402 [2024-10-11 11:43:24.884989] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f194000d450 is same with the state(6) to be set 00:06:23.343 [2024-10-11 11:43:25.852803] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133ae20 is same with the state(6) to be set 00:06:23.343 Write completed with error (sct=0, sc=8) 00:06:23.343 Read completed with error (sct=0, sc=8) 00:06:23.343 Read completed with error (sct=0, sc=8) 00:06:23.343 Read completed with error (sct=0, sc=8) 00:06:23.343 Write completed with error (sct=0, sc=8) 00:06:23.343 Read completed with error (sct=0, sc=8) 00:06:23.343 Write completed with error (sct=0, sc=8) 00:06:23.343 Read completed with error (sct=0, sc=8) 00:06:23.343 Read completed with error (sct=0, sc=8) 00:06:23.343 Read completed with error (sct=0, sc=8) 00:06:23.343 Read completed with error (sct=0, sc=8) 00:06:23.343 Read completed with error (sct=0, sc=8) 00:06:23.343 Read completed with error (sct=0, sc=8) 00:06:23.343 Read completed with error (sct=0, sc=8) 00:06:23.343 Read completed with error (sct=0, sc=8) 00:06:23.343 Write completed with error (sct=0, sc=8) 00:06:23.343 Read completed with error (sct=0, sc=8) 00:06:23.343 Read completed with error (sct=0, sc=8) 00:06:23.343 Read completed with error (sct=0, sc=8) 00:06:23.343 [2024-10-11 11:43:25.882584] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133c390 is same with the state(6) to be set 00:06:23.343 Read completed with error (sct=0, sc=8) 00:06:23.343 Read completed with error (sct=0, sc=8) 00:06:23.343 Read completed with error (sct=0, sc=8) 00:06:23.343 Read completed with error (sct=0, sc=8) 00:06:23.343 Read completed with error (sct=0, sc=8) 00:06:23.343 Read completed with error (sct=0, sc=8) 00:06:23.343 Read completed with error (sct=0, sc=8) 00:06:23.343 Read completed with error (sct=0, sc=8) 00:06:23.343 Write completed with error (sct=0, sc=8) 00:06:23.343 Write completed with error (sct=0, sc=8) 00:06:23.343 Read completed with error (sct=0, sc=8) 00:06:23.343 Write completed with error (sct=0, sc=8) 00:06:23.343 Read completed with error (sct=0, sc=8) 00:06:23.343 Read completed with error (sct=0, sc=8) 00:06:23.343 Read completed with error (sct=0, sc=8) 00:06:23.343 Read completed with error (sct=0, sc=8) 00:06:23.343 Write completed with error (sct=0, sc=8) 00:06:23.343 Write completed with error (sct=0, sc=8) 00:06:23.343 Read completed with error (sct=0, sc=8) 00:06:23.343 [2024-10-11 11:43:25.882912] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133c8a0 is same with the state(6) to be set 00:06:23.343 Read completed with error (sct=0, sc=8) 00:06:23.343 Read completed with error (sct=0, sc=8) 00:06:23.343 Write completed with error (sct=0, sc=8) 00:06:23.343 Write completed with error (sct=0, sc=8) 00:06:23.343 Write completed with error (sct=0, sc=8) 00:06:23.343 Write completed with error (sct=0, sc=8) 00:06:23.343 Read completed with error (sct=0, sc=8) 00:06:23.343 Read completed with error (sct=0, sc=8) 00:06:23.343 Read completed with error (sct=0, sc=8) 00:06:23.343 Read completed with error (sct=0, sc=8) 00:06:23.343 Write completed with error (sct=0, sc=8) 00:06:23.343 Read completed with error (sct=0, sc=8) 
00:06:23.343 Read completed with error (sct=0, sc=8) 00:06:23.343 Read completed with error (sct=0, sc=8) 00:06:23.343 Read completed with error (sct=0, sc=8) 00:06:23.343 Read completed with error (sct=0, sc=8) 00:06:23.343 [2024-10-11 11:43:25.886790] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f194000d780 is same with the state(6) to be set 00:06:23.343 Read completed with error (sct=0, sc=8) 00:06:23.343 Read completed with error (sct=0, sc=8) 00:06:23.343 Write completed with error (sct=0, sc=8) 00:06:23.343 Read completed with error (sct=0, sc=8) 00:06:23.343 Write completed with error (sct=0, sc=8) 00:06:23.343 Write completed with error (sct=0, sc=8) 00:06:23.343 Read completed with error (sct=0, sc=8) 00:06:23.343 Read completed with error (sct=0, sc=8) 00:06:23.343 Read completed with error (sct=0, sc=8) 00:06:23.343 Write completed with error (sct=0, sc=8) 00:06:23.343 Read completed with error (sct=0, sc=8) 00:06:23.343 Write completed with error (sct=0, sc=8) 00:06:23.343 Read completed with error (sct=0, sc=8) 00:06:23.343 Read completed with error (sct=0, sc=8) 00:06:23.343 [2024-10-11 11:43:25.886850] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f194000cfe0 is same with the state(6) to be set 00:06:23.343 Initializing NVMe Controllers 00:06:23.343 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:23.343 Controller IO queue size 128, less than required. 00:06:23.343 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:23.343 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:23.343 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:23.343 Initialization complete. Launching workers. 
00:06:23.344 ======================================================== 00:06:23.344 Latency(us) 00:06:23.344 Device Information : IOPS MiB/s Average min max 00:06:23.344 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 160.32 0.08 916491.60 320.96 1006772.61 00:06:23.344 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 150.86 0.07 949340.85 638.37 2002214.48 00:06:23.344 ======================================================== 00:06:23.344 Total : 311.17 0.15 932416.92 320.96 2002214.48 00:06:23.344 00:06:23.344 [2024-10-11 11:43:25.887389] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x133ae20 (9): Bad file descriptor 00:06:23.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:23.344 11:43:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.344 11:43:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:23.344 11:43:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1724081 00:06:23.344 11:43:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:23.914 11:43:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:23.914 11:43:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1724081 00:06:23.914 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1724081) - No such process 00:06:23.914 11:43:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1724081 00:06:23.914 11:43:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:06:23.914 11:43:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1724081 00:06:23.914 11:43:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:06:23.914 11:43:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:23.914 11:43:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:06:23.914 11:43:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:23.914 11:43:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 1724081 00:06:23.914 11:43:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:06:23.914 11:43:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:23.914 11:43:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:23.914 11:43:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:23.914 11:43:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:23.914 11:43:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.914 11:43:26 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:23.914 11:43:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.914 11:43:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:23.914 11:43:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.914 11:43:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:23.914 [2024-10-11 11:43:26.416760] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:23.914 11:43:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.914 11:43:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:23.914 11:43:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.914 11:43:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:23.914 11:43:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.914 11:43:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1724922 00:06:23.914 11:43:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:23.914 11:43:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:23.914 11:43:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1724922 00:06:23.914 11:43:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:23.914 [2024-10-11 11:43:26.505673] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
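[Note on the trace that follows] The repeated kill -0 / sleep 0.5 entries below are the test polling for the second perf run (pid 1724922 here) to finish on its own. Reconstructed from the line numbers visible in the xtrace (target/delete_subsystem.sh lines 56-60 and 67), the loop looks roughly like this sketch; it is a paraphrase, not the script's verbatim text.

delay=0
while kill -0 "$perf_pid"; do               # perf_pid=1724922 in this run
    sleep 0.5
    (( delay++ > 20 )) && exit 1            # stop waiting if perf is still alive after roughly ten seconds
done                                        # the "kill: (1724922) - No such process" message is the loop's normal exit
wait "$perf_pid"

Unlike the first phase, the subsystem is left in place this time, so the 3-second perf run completes cleanly and the loop simply drains until the process exits.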
00:06:24.483 11:43:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:24.483 11:43:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1724922 00:06:24.483 11:43:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:24.742 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:24.742 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1724922 00:06:24.742 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:25.311 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:25.311 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1724922 00:06:25.311 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:25.881 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:25.881 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1724922 00:06:25.881 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:26.451 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:26.451 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1724922 00:06:26.451 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:27.051 11:43:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:27.051 11:43:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1724922 00:06:27.051 11:43:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:27.311 Initializing NVMe Controllers 00:06:27.311 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:27.311 Controller IO queue size 128, less than required. 00:06:27.311 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:27.311 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:27.311 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:27.311 Initialization complete. Launching workers. 
00:06:27.311 ======================================================== 00:06:27.311 Latency(us) 00:06:27.311 Device Information : IOPS MiB/s Average min max 00:06:27.311 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002577.67 1000183.92 1007411.51 00:06:27.311 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003419.69 1000194.66 1041668.11 00:06:27.311 ======================================================== 00:06:27.311 Total : 256.00 0.12 1002998.68 1000183.92 1041668.11 00:06:27.311 00:06:27.311 11:43:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:27.311 11:43:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1724922 00:06:27.311 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1724922) - No such process 00:06:27.311 11:43:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1724922 00:06:27.311 11:43:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:27.311 11:43:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:27.311 11:43:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:06:27.311 11:43:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:27.311 11:43:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:27.311 11:43:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:27.311 11:43:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:27.311 11:43:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:27.311 rmmod nvme_tcp 00:06:27.311 rmmod nvme_fabrics 00:06:27.311 rmmod nvme_keyring 00:06:27.572 11:43:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:27.572 11:43:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:27.572 11:43:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:27.572 11:43:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 1723898 ']' 00:06:27.572 11:43:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 1723898 00:06:27.572 11:43:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 1723898 ']' 00:06:27.572 11:43:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 1723898 00:06:27.572 11:43:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:06:27.572 11:43:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:27.572 11:43:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1723898 00:06:27.572 11:43:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:27.572 11:43:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:06:27.572 11:43:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1723898' 00:06:27.572 killing process with pid 1723898 00:06:27.572 11:43:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 1723898 00:06:27.572 11:43:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 1723898 00:06:27.572 11:43:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:06:27.572 11:43:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:06:27.572 11:43:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:06:27.572 11:43:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:27.572 11:43:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:06:27.572 11:43:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:06:27.572 11:43:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:06:27.572 11:43:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:27.572 11:43:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:27.572 11:43:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:27.572 11:43:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:27.572 11:43:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:30.116 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:30.116 00:06:30.116 real 0m18.522s 00:06:30.116 user 0m31.020s 00:06:30.116 sys 0m6.903s 00:06:30.116 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:30.116 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:30.116 ************************************ 00:06:30.116 END TEST nvmf_delete_subsystem 00:06:30.117 ************************************ 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:30.117 ************************************ 00:06:30.117 START TEST nvmf_host_management 00:06:30.117 ************************************ 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:30.117 * Looking for test storage... 
00:06:30.117 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:30.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.117 --rc genhtml_branch_coverage=1 00:06:30.117 --rc genhtml_function_coverage=1 00:06:30.117 --rc genhtml_legend=1 00:06:30.117 --rc geninfo_all_blocks=1 00:06:30.117 --rc geninfo_unexecuted_blocks=1 00:06:30.117 00:06:30.117 ' 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:30.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.117 --rc genhtml_branch_coverage=1 00:06:30.117 --rc genhtml_function_coverage=1 00:06:30.117 --rc genhtml_legend=1 00:06:30.117 --rc geninfo_all_blocks=1 00:06:30.117 --rc geninfo_unexecuted_blocks=1 00:06:30.117 00:06:30.117 ' 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:30.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.117 --rc genhtml_branch_coverage=1 00:06:30.117 --rc genhtml_function_coverage=1 00:06:30.117 --rc genhtml_legend=1 00:06:30.117 --rc geninfo_all_blocks=1 00:06:30.117 --rc geninfo_unexecuted_blocks=1 00:06:30.117 00:06:30.117 ' 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:30.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.117 --rc genhtml_branch_coverage=1 00:06:30.117 --rc genhtml_function_coverage=1 00:06:30.117 --rc genhtml_legend=1 00:06:30.117 --rc geninfo_all_blocks=1 00:06:30.117 --rc geninfo_unexecuted_blocks=1 00:06:30.117 00:06:30.117 ' 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.117 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.118 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.118 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:30.118 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.118 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:30.118 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:30.118 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:30.118 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:30.118 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:30.118 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:30.118 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:06:30.118 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:30.118 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:30.118 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:30.118 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:30.118 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:30.118 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:30.118 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:30.118 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:06:30.118 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:30.118 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:06:30.118 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:06:30.118 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:06:30.118 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:30.118 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:30.118 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:30.118 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:06:30.118 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:06:30.118 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:30.118 11:43:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:38.271 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:38.271 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:38.271 Found net devices under 0000:31:00.0: cvl_0_0 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:38.271 11:43:39 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:38.271 Found net devices under 0000:31:00.1: cvl_0_1 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:38.271 11:43:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:38.271 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:38.271 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:38.271 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:38.271 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:38.271 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:38.271 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:38.271 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:38.271 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:38.271 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:38.271 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:06:38.271 00:06:38.271 --- 10.0.0.2 ping statistics --- 00:06:38.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:38.272 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:06:38.272 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:38.272 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:38.272 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:06:38.272 00:06:38.272 --- 10.0.0.1 ping statistics --- 00:06:38.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:38.272 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:06:38.272 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:38.272 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:06:38.272 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:06:38.272 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:38.272 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:06:38.272 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:06:38.272 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:38.272 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:06:38.272 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:06:38.272 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:38.272 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:38.272 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:38.272 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:06:38.272 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:38.272 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:38.272 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=1730014 00:06:38.272 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 1730014 00:06:38.272 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:38.272 11:43:40 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1730014 ']' 00:06:38.272 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.272 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:38.272 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.272 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:38.272 11:43:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:38.272 [2024-10-11 11:43:40.409829] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:06:38.272 [2024-10-11 11:43:40.409895] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:38.272 [2024-10-11 11:43:40.491103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:38.272 [2024-10-11 11:43:40.544981] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:38.272 [2024-10-11 11:43:40.545036] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:38.272 [2024-10-11 11:43:40.545045] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:38.272 [2024-10-11 11:43:40.545052] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:38.272 [2024-10-11 11:43:40.545059] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
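The nvmf_tcp_init sequence traced above (nvmf/common.sh@250 onward) boils down to moving one of the two detected e810 ports into a private network namespace, numbering both ends, punching TCP/4420 through iptables, and then starting nvmf_tgt inside that namespace. A condensed stand-alone sketch, using the cvl_0_* names and 10.0.0.x addresses that are specific to this run, would be:

    TARGET_IF=cvl_0_0        # port handed to the SPDK target (interface names are host-specific)
    INITIATOR_IF=cvl_0_1     # port left in the root namespace for the initiator
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"

    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"                   # NVMF_FIRST_INITIATOR_IP
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"  # NVMF_FIRST_TARGET_IP
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up

    # Let NVMe/TCP traffic in, then confirm reachability both ways (the two pings above).
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1

    # host_management.sh@16 then starts the target inside the namespace on cores 1-4 (-m 0x1E).
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &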
00:06:38.272 [2024-10-11 11:43:40.547729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:38.272 [2024-10-11 11:43:40.547891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:38.272 [2024-10-11 11:43:40.548052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.272 [2024-10-11 11:43:40.548053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:38.534 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:38.534 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:06:38.534 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:06:38.534 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:38.534 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:38.796 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:38.796 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:38.796 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.796 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:38.796 [2024-10-11 11:43:41.278719] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:38.796 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.796 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:38.796 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:38.796 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:38.796 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:38.796 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:38.796 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:38.796 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.796 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:38.796 Malloc0 00:06:38.796 [2024-10-11 11:43:41.370996] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:38.796 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.796 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:38.796 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:38.796 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:38.796 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=1730119 00:06:38.796 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1730119 /var/tmp/bdevperf.sock 00:06:38.796 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1730119 ']' 00:06:38.796 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:38.796 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:38.796 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:38.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:38.796 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:38.796 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:38.796 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:38.796 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:38.796 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:06:38.796 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:06:38.796 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:06:38.796 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:06:38.796 { 00:06:38.796 "params": { 00:06:38.797 "name": "Nvme$subsystem", 00:06:38.797 "trtype": "$TEST_TRANSPORT", 00:06:38.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:38.797 "adrfam": "ipv4", 00:06:38.797 "trsvcid": "$NVMF_PORT", 00:06:38.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:38.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:38.797 "hdgst": ${hdgst:-false}, 00:06:38.797 "ddgst": ${ddgst:-false} 00:06:38.797 }, 00:06:38.797 "method": "bdev_nvme_attach_controller" 00:06:38.797 } 00:06:38.797 EOF 00:06:38.797 )") 00:06:38.797 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:06:38.797 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:06:38.797 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:06:38.797 11:43:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:06:38.797 "params": { 00:06:38.797 "name": "Nvme0", 00:06:38.797 "trtype": "tcp", 00:06:38.797 "traddr": "10.0.0.2", 00:06:38.797 "adrfam": "ipv4", 00:06:38.797 "trsvcid": "4420", 00:06:38.797 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:38.797 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:38.797 "hdgst": false, 00:06:38.797 "ddgst": false 00:06:38.797 }, 00:06:38.797 "method": "bdev_nvme_attach_controller" 00:06:38.797 }' 00:06:38.797 [2024-10-11 11:43:41.482400] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
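Unwrapped, the bdevperf launch at host_management.sh@72 is the command below. gen_nvmf_target_json (a helper in test/nvmf/common.sh) emits the bdev_nvme_attach_controller parameters printed just above, Nvme0 over tcp/ipv4 to 10.0.0.2:4420 with subnqn nqn.2016-06.io.spdk:cnode0, hostnqn nqn.2016-06.io.spdk:host0 and digests disabled, and hands them to bdevperf through a /dev/fd process substitution:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # repo root used by this job
    # 64 outstanding I/Os of 64 KiB, "verify" workload, 10 seconds, RPC socket kept for polling.
    "$SPDK/build/examples/bdevperf" -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10 &
    perfpid=$!    # 1730119 in this run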
00:06:38.797 [2024-10-11 11:43:41.482474] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1730119 ] 00:06:39.058 [2024-10-11 11:43:41.567737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.058 [2024-10-11 11:43:41.621675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.319 Running I/O for 10 seconds... 00:06:39.893 11:43:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:39.893 11:43:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:06:39.893 11:43:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:39.893 11:43:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.893 11:43:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:39.893 11:43:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.893 11:43:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:39.893 11:43:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:39.893 11:43:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:39.893 11:43:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:39.893 11:43:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:39.893 11:43:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:39.893 11:43:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:39.893 11:43:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:39.893 11:43:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:39.893 11:43:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:39.893 11:43:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.893 11:43:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:39.893 11:43:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.893 11:43:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=643 00:06:39.893 11:43:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 643 -ge 100 ']' 00:06:39.893 11:43:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:39.893 11:43:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:39.893 11:43:42 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:39.893 11:43:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:39.893 11:43:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.893 11:43:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:39.893 [2024-10-11 11:43:42.402897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:91904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.893 [2024-10-11 11:43:42.402958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.893 [2024-10-11 11:43:42.402979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.893 [2024-10-11 11:43:42.402988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.893 [2024-10-11 11:43:42.402999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:92160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.893 [2024-10-11 11:43:42.403007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.893 [2024-10-11 11:43:42.403017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:92288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.893 [2024-10-11 11:43:42.403025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.893 [2024-10-11 11:43:42.403044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:92416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.893 [2024-10-11 11:43:42.403052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.893 [2024-10-11 11:43:42.403071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:92544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.894 [2024-10-11 11:43:42.403080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.894 [2024-10-11 11:43:42.403089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:92672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.894 [2024-10-11 11:43:42.403097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.894 [2024-10-11 11:43:42.403106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:92800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.894 [2024-10-11 11:43:42.403114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.894 [2024-10-11 11:43:42.403123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:92928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.894 
[2024-10-11 11:43:42.403131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.894 [2024-10-11 11:43:42.403141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:93056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.894 [2024-10-11 11:43:42.403148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.894 [2024-10-11 11:43:42.403158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:93184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.894 [2024-10-11 11:43:42.403167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.894 [2024-10-11 11:43:42.403177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:93312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.894 [2024-10-11 11:43:42.403184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.894 [2024-10-11 11:43:42.403193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:93440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.894 [2024-10-11 11:43:42.403202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.894 [2024-10-11 11:43:42.403211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:93568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.894 [2024-10-11 11:43:42.403220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.894 [2024-10-11 11:43:42.403229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:93696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.894 [2024-10-11 11:43:42.403237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.894 [2024-10-11 11:43:42.403246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:93824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.894 [2024-10-11 11:43:42.403254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.894 [2024-10-11 11:43:42.403264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:93952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.894 [2024-10-11 11:43:42.403274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.894 [2024-10-11 11:43:42.403283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:94080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.894 [2024-10-11 11:43:42.403291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.894 [2024-10-11 11:43:42.403301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:94208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.894 [2024-10-11 
11:43:42.403309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.894 [2024-10-11 11:43:42.403320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:94336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.894 [2024-10-11 11:43:42.403328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.894 [2024-10-11 11:43:42.403338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.894 [2024-10-11 11:43:42.403346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.894 [2024-10-11 11:43:42.403355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:94592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.894 [2024-10-11 11:43:42.403363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.894 [2024-10-11 11:43:42.403374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.894 [2024-10-11 11:43:42.403384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.894 [2024-10-11 11:43:42.403394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:94848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.894 [2024-10-11 11:43:42.403401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.894 [2024-10-11 11:43:42.403411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:94976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.894 [2024-10-11 11:43:42.403418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.894 [2024-10-11 11:43:42.403427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.894 [2024-10-11 11:43:42.403434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.894 [2024-10-11 11:43:42.403444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.894 [2024-10-11 11:43:42.403450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.894 [2024-10-11 11:43:42.403460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:95360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.894 [2024-10-11 11:43:42.403467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.894 [2024-10-11 11:43:42.403476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.894 [2024-10-11 
11:43:42.403484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.894 [2024-10-11 11:43:42.403496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:95616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.894 [2024-10-11 11:43:42.403503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.894 [2024-10-11 11:43:42.403513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.894 [2024-10-11 11:43:42.403520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.894 [2024-10-11 11:43:42.403529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:95872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.894 [2024-10-11 11:43:42.403536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.894 [2024-10-11 11:43:42.403545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.894 [2024-10-11 11:43:42.403553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.894 [2024-10-11 11:43:42.403563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.894 [2024-10-11 11:43:42.403572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.894 [2024-10-11 11:43:42.403583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.894 [2024-10-11 11:43:42.403590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.894 [2024-10-11 11:43:42.403600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.894 [2024-10-11 11:43:42.403608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.894 [2024-10-11 11:43:42.403617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.894 [2024-10-11 11:43:42.403625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.894 [2024-10-11 11:43:42.403634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.894 [2024-10-11 11:43:42.403641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.894 [2024-10-11 11:43:42.403652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.894 [2024-10-11 
11:43:42.403660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.894 [2024-10-11 11:43:42.403671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.894 [2024-10-11 11:43:42.403678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.894 [2024-10-11 11:43:42.403687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.894 [2024-10-11 11:43:42.403694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.894 [2024-10-11 11:43:42.403703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.894 [2024-10-11 11:43:42.403712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.894 [2024-10-11 11:43:42.403722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.894 [2024-10-11 11:43:42.403732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.894 [2024-10-11 11:43:42.403742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.894 [2024-10-11 11:43:42.403750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.894 [2024-10-11 11:43:42.403759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.894 [2024-10-11 11:43:42.403766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.894 [2024-10-11 11:43:42.403776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:97664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.894 [2024-10-11 11:43:42.403785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.894 [2024-10-11 11:43:42.403795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.894 [2024-10-11 11:43:42.403802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.895 [2024-10-11 11:43:42.403811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.895 [2024-10-11 11:43:42.403819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.895 [2024-10-11 11:43:42.403828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.895 [2024-10-11 
11:43:42.403835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.895 [2024-10-11 11:43:42.403848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.895 [2024-10-11 11:43:42.403856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.895 [2024-10-11 11:43:42.403866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.895 [2024-10-11 11:43:42.403873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.895 [2024-10-11 11:43:42.403882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:90240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.895 [2024-10-11 11:43:42.403889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.895 [2024-10-11 11:43:42.403899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:90368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.895 [2024-10-11 11:43:42.403908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.895 [2024-10-11 11:43:42.403918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:90496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.895 [2024-10-11 11:43:42.403925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.895 [2024-10-11 11:43:42.403937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:90624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.895 [2024-10-11 11:43:42.403944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.895 [2024-10-11 11:43:42.403953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:90752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.895 [2024-10-11 11:43:42.403962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.895 [2024-10-11 11:43:42.403972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:90880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.895 [2024-10-11 11:43:42.403979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.895 [2024-10-11 11:43:42.403988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:91008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.895 [2024-10-11 11:43:42.403996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.895 [2024-10-11 11:43:42.404005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:91136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.895 [2024-10-11 11:43:42.404013] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.895 [2024-10-11 11:43:42.404024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:91264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.895 [2024-10-11 11:43:42.404032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.895 [2024-10-11 11:43:42.404042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:91392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.895 [2024-10-11 11:43:42.404049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.895 [2024-10-11 11:43:42.404059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:91520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.895 [2024-10-11 11:43:42.404078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.895 [2024-10-11 11:43:42.404087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:91648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.895 [2024-10-11 11:43:42.404095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.895 [2024-10-11 11:43:42.404104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:91776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.895 [2024-10-11 11:43:42.404113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.895 [2024-10-11 11:43:42.404152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:06:39.895 [2024-10-11 11:43:42.404225] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1fe36b0 was disconnected and freed. reset controller. 
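The wall of ABORTED - SQ DELETION completions above is the point of the test: once bdevperf is provably mid-I/O, the host's access is revoked on the target side, its queue pairs are deleted, and every in-flight command is failed back to the initiator. Roughly, and assuming rpc_cmd expands to scripts/rpc.py as it does when the RPC daemon is not in use (SPDK as defined in the earlier sketch), host_management.sh@45-64 and @84 do:

    # Poll bdevperf (up to 10 tries) until Nvme0n1 has completed at least 100 reads.
    for _ in {1..10}; do
        reads=$("$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
                | jq -r '.bdevs[0].num_read_ops')
        (( reads >= 100 )) && break    # 643 by the time this run checked
        sleep 1
    done

    # Revoke the host's access; the target tears down its queue pairs and the abort storm
    # above is what the initiator sees for the I/O still in flight.
    "$SPDK/scripts/rpc.py" nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0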
00:06:39.895 [2024-10-11 11:43:42.405451] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:06:39.895 task offset: 91904 on job bdev=Nvme0n1 fails 00:06:39.895 00:06:39.895 Latency(us) 00:06:39.895 [2024-10-11T09:43:42.598Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:39.895 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:39.895 Job: Nvme0n1 ended in about 0.61 seconds with error 00:06:39.895 Verification LBA range: start 0x0 length 0x400 00:06:39.895 Nvme0n1 : 0.61 1160.14 72.51 105.47 0.00 49336.70 1740.80 43472.21 00:06:39.895 [2024-10-11T09:43:42.598Z] =================================================================================================================== 00:06:39.895 [2024-10-11T09:43:42.598Z] Total : 1160.14 72.51 105.47 0.00 49336.70 1740.80 43472.21 00:06:39.895 11:43:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.895 [2024-10-11 11:43:42.407698] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:39.895 [2024-10-11 11:43:42.407736] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dca540 (9): Bad file descriptor 00:06:39.895 11:43:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:39.895 11:43:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.895 11:43:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:39.895 [2024-10-11 11:43:42.414279] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:06:39.895 [2024-10-11 11:43:42.414381] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:06:39.895 [2024-10-11 11:43:42.414407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:39.895 [2024-10-11 11:43:42.414422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:06:39.895 [2024-10-11 11:43:42.414434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:06:39.895 [2024-10-11 11:43:42.414442] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:06:39.895 [2024-10-11 11:43:42.414449] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dca540 00:06:39.895 [2024-10-11 11:43:42.414470] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dca540 (9): Bad file descriptor 00:06:39.895 [2024-10-11 11:43:42.414483] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:06:39.895 [2024-10-11 11:43:42.414491] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:06:39.895 [2024-10-11 11:43:42.414503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:06:39.895 [2024-10-11 11:43:42.414518] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:39.895 11:43:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.895 11:43:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:40.841 11:43:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1730119 00:06:40.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1730119) - No such process 00:06:40.841 11:43:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:40.841 11:43:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:40.841 11:43:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:40.841 11:43:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:40.841 11:43:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:06:40.841 11:43:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:06:40.841 11:43:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:06:40.841 11:43:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:06:40.841 { 00:06:40.841 "params": { 00:06:40.841 "name": "Nvme$subsystem", 00:06:40.841 "trtype": "$TEST_TRANSPORT", 00:06:40.841 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:40.841 "adrfam": "ipv4", 00:06:40.841 "trsvcid": "$NVMF_PORT", 00:06:40.841 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:40.841 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:40.841 "hdgst": ${hdgst:-false}, 00:06:40.841 "ddgst": ${ddgst:-false} 00:06:40.841 }, 00:06:40.841 "method": "bdev_nvme_attach_controller" 00:06:40.841 } 00:06:40.841 EOF 00:06:40.841 )") 00:06:40.841 11:43:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:06:40.841 11:43:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:06:40.841 11:43:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:06:40.841 11:43:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:06:40.841 "params": { 00:06:40.841 "name": "Nvme0", 00:06:40.841 "trtype": "tcp", 00:06:40.841 "traddr": "10.0.0.2", 00:06:40.841 "adrfam": "ipv4", 00:06:40.841 "trsvcid": "4420", 00:06:40.841 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:40.841 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:40.841 "hdgst": false, 00:06:40.841 "ddgst": false 00:06:40.841 }, 00:06:40.841 "method": "bdev_nvme_attach_controller" 00:06:40.841 }' 00:06:40.841 [2024-10-11 11:43:43.478215] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
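By this point the first bdevperf has already exited on its own (the app.c:1064 "spdk_app_stop'd on non-zero" warning above), so the cleanup and follow-up run at host_management.sh@91 and @100 amount to the sketch below, reusing SPDK and perfpid from the earlier sketches. The kill is expected to find nothing, which is why the trace shows the "+ true" of an "|| true" guard, and the short 1-second verify run demonstrates that I/O works again now that nvmf_subsystem_add_host has put the host back on the subsystem's allow list:

    kill -9 "$perfpid" || true    # pid 1730119 here; "No such process" is tolerated

    # Fresh bdevperf, same workload, 1 second only; it should connect cleanly and complete,
    # which is what the ~1800 IOPS result that follows shows.
    "$SPDK/build/examples/bdevperf" --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1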
00:06:40.841 [2024-10-11 11:43:43.478271] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1730604 ] 00:06:41.102 [2024-10-11 11:43:43.557826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.102 [2024-10-11 11:43:43.592808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.102 Running I/O for 1 seconds... 00:06:42.488 1756.00 IOPS, 109.75 MiB/s 00:06:42.488 Latency(us) 00:06:42.488 [2024-10-11T09:43:45.191Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:42.488 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:42.488 Verification LBA range: start 0x0 length 0x400 00:06:42.488 Nvme0n1 : 1.03 1803.69 112.73 0.00 0.00 34780.90 3263.15 32768.00 00:06:42.488 [2024-10-11T09:43:45.191Z] =================================================================================================================== 00:06:42.488 [2024-10-11T09:43:45.191Z] Total : 1803.69 112.73 0.00 0.00 34780.90 3263.15 32768.00 00:06:42.488 11:43:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:42.488 11:43:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:42.488 11:43:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:42.488 11:43:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:42.488 11:43:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:42.488 11:43:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:06:42.488 11:43:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:42.488 11:43:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:42.488 11:43:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:42.488 11:43:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:42.488 11:43:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:42.488 rmmod nvme_tcp 00:06:42.488 rmmod nvme_fabrics 00:06:42.488 rmmod nvme_keyring 00:06:42.488 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:42.488 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:42.488 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:42.489 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 1730014 ']' 00:06:42.489 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 1730014 00:06:42.489 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 1730014 ']' 00:06:42.489 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 1730014 00:06:42.489 11:43:45 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:06:42.489 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:42.489 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1730014 00:06:42.489 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:42.489 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:42.489 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1730014' 00:06:42.489 killing process with pid 1730014 00:06:42.489 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 1730014 00:06:42.489 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 1730014 00:06:42.489 [2024-10-11 11:43:45.174888] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:42.751 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:06:42.751 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:06:42.751 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:06:42.751 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:42.751 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:06:42.751 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:06:42.751 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:06:42.751 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:42.751 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:42.751 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:42.751 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:42.751 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:44.669 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:44.669 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:44.669 00:06:44.669 real 0m14.911s 00:06:44.669 user 0m23.443s 00:06:44.669 sys 0m6.857s 00:06:44.669 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.669 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:44.669 ************************************ 00:06:44.669 END TEST nvmf_host_management 00:06:44.669 ************************************ 00:06:44.669 11:43:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 
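The teardown at the end of the host_management test above (nvmftestfini) is short: unload the host-side NVMe/TCP modules, stop the target, strip the iptables rules tagged SPDK_NVMF, and dismantle the namespace. Condensed, and assuming _remove_spdk_ns amounts to deleting the namespace created earlier:

    modprobe -v -r nvme-tcp        # also pulls out nvme_fabrics/nvme_keyring, per the rmmod lines
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"                      # nvmfpid=1730014 in this run
    iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop only the rules the test added
    ip netns delete cvl_0_0_ns_spdk                         # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1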
00:06:44.669 11:43:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:44.669 11:43:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:44.669 11:43:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:44.669 ************************************ 00:06:44.669 START TEST nvmf_lvol 00:06:44.669 ************************************ 00:06:44.669 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:44.931 * Looking for test storage... 00:06:44.931 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:44.931 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:44.931 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:06:44.931 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:44.931 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:44.931 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:44.931 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:44.931 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:44.931 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:44.931 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:44.931 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:44.931 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:44.931 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:44.931 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:44.931 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:44.931 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:44.931 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:44.931 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:44.931 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:44.931 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:44.931 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:44.931 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:44.931 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:44.931 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:44.931 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:44.931 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:44.931 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:44.931 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:44.931 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:44.931 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:44.931 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:44.931 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:44.931 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:44.931 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:44.931 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:44.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.931 --rc genhtml_branch_coverage=1 00:06:44.931 --rc genhtml_function_coverage=1 00:06:44.931 --rc genhtml_legend=1 00:06:44.932 --rc geninfo_all_blocks=1 00:06:44.932 --rc geninfo_unexecuted_blocks=1 00:06:44.932 00:06:44.932 ' 00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:44.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.932 --rc genhtml_branch_coverage=1 00:06:44.932 --rc genhtml_function_coverage=1 00:06:44.932 --rc genhtml_legend=1 00:06:44.932 --rc geninfo_all_blocks=1 00:06:44.932 --rc geninfo_unexecuted_blocks=1 00:06:44.932 00:06:44.932 ' 00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:44.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.932 --rc genhtml_branch_coverage=1 00:06:44.932 --rc genhtml_function_coverage=1 00:06:44.932 --rc genhtml_legend=1 00:06:44.932 --rc geninfo_all_blocks=1 00:06:44.932 --rc geninfo_unexecuted_blocks=1 00:06:44.932 00:06:44.932 ' 00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:44.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.932 --rc genhtml_branch_coverage=1 00:06:44.932 --rc genhtml_function_coverage=1 00:06:44.932 --rc genhtml_legend=1 00:06:44.932 --rc geninfo_all_blocks=1 00:06:44.932 --rc geninfo_unexecuted_blocks=1 00:06:44.932 00:06:44.932 ' 00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
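The lt/cmp_versions trace above decides which lcov option set to use by splitting both version strings on '.', '-' and ':' and comparing the fields numerically from left to right. A simplified sketch of that check, assuming purely numeric fields (the real scripts/common.sh handles more operators and edge cases than shown here):

    # Hedged sketch of the version comparison traced above: lt A B succeeds
    # when A is strictly lower than B under component-wise numeric comparison.
    lt() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}
            (( a < b )) && return 0   # strictly lower -> true
            (( a > b )) && return 1   # higher -> false
        done
        return 1                      # equal -> not lower
    }

    lt 1.15 2 && echo "lcov 1.15 predates 2, use the older option set"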
00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:44.932 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:44.932 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:53.073 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:53.073 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:53.073 11:43:54 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:53.073 Found net devices under 0000:31:00.0: cvl_0_0 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:53.073 Found net devices under 0000:31:00.1: cvl_0_1 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:53.073 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:53.073 11:43:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:53.073 11:43:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:53.073 11:43:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:53.073 11:43:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:53.073 11:43:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:53.073 11:43:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:53.073 11:43:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:53.073 11:43:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:53.073 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:53.073 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.668 ms 00:06:53.073 00:06:53.073 --- 10.0.0.2 ping statistics --- 00:06:53.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:53.073 rtt min/avg/max/mdev = 0.668/0.668/0.668/0.000 ms 00:06:53.073 11:43:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:53.073 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:53.073 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:06:53.073 00:06:53.073 --- 10.0.0.1 ping statistics --- 00:06:53.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:53.073 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:06:53.074 11:43:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:53.074 11:43:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:06:53.074 11:43:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:06:53.074 11:43:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:53.074 11:43:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:06:53.074 11:43:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:06:53.074 11:43:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:53.074 11:43:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:06:53.074 11:43:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:06:53.074 11:43:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:53.074 11:43:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:06:53.074 11:43:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:53.074 11:43:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:53.074 11:43:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=1735210 00:06:53.074 11:43:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 1735210 00:06:53.074 11:43:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:53.074 11:43:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 1735210 ']' 00:06:53.074 11:43:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.074 11:43:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:53.074 11:43:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.074 11:43:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:53.074 11:43:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:53.074 [2024-10-11 11:43:55.357906] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
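The nvmf_tcp_init and nvmfappstart steps traced above build the test topology: the first e810 port (cvl_0_0) is moved into a private network namespace and addressed as the target side at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator side at 10.0.0.1, an iptables rule admits TCP/4420, both directions are ping-verified, and nvmf_tgt is then launched inside the namespace. A condensed sketch of that setup, assuming the same interface names and addresses and an SPDK build tree in the current directory (the waitforlisten polling of /var/tmp/spdk.sock is only hinted at):

    # Hedged sketch of the netns-based NVMe/TCP test topology traced above.
    # Assumptions: interfaces cvl_0_0/cvl_0_1 exist, run as root, SPDK built in ./build.
    NS=cvl_0_0_ns_spdk
    TGT_IF=cvl_0_0   INI_IF=cvl_0_1
    TGT_IP=10.0.0.2  INI_IP=10.0.0.1

    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"

    ip addr add "$INI_IP/24" dev "$INI_IF"
    ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up

    # Admit NVMe/TCP traffic and tag the rule so teardown can strip it later.
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    ping -c 1 "$TGT_IP"                         # initiator -> target
    ip netns exec "$NS" ping -c 1 "$INI_IP"     # target -> initiator

    # Start the target inside the namespace; the test waits for /var/tmp/spdk.sock
    # (waitforlisten) before issuing any rpc.py calls.
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
    nvmfpid=$!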
00:06:53.074 [2024-10-11 11:43:55.357975] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:53.074 [2024-10-11 11:43:55.450654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:53.074 [2024-10-11 11:43:55.503518] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:53.074 [2024-10-11 11:43:55.503567] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:53.074 [2024-10-11 11:43:55.503576] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:53.074 [2024-10-11 11:43:55.503583] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:53.074 [2024-10-11 11:43:55.503589] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:53.074 [2024-10-11 11:43:55.505526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.074 [2024-10-11 11:43:55.505696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:53.074 [2024-10-11 11:43:55.505697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.715 11:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:53.715 11:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:06:53.715 11:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:06:53.715 11:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:53.715 11:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:53.715 11:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:53.715 11:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:53.715 [2024-10-11 11:43:56.390400] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:54.013 11:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:54.013 11:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:54.013 11:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:54.273 11:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:54.273 11:43:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:54.533 11:43:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:54.794 11:43:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=8e3395eb-0100-4813-b836-60867ddb8e12 00:06:54.794 11:43:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8e3395eb-0100-4813-b836-60867ddb8e12 lvol 20 00:06:55.055 11:43:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=a4e95c12-9552-401c-ab5f-4d63ba0fd4c7 00:06:55.055 11:43:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:55.055 11:43:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a4e95c12-9552-401c-ab5f-4d63ba0fd4c7 00:06:55.315 11:43:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:55.576 [2024-10-11 11:43:58.058159] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:55.576 11:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:55.836 11:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:55.836 11:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1735894 00:06:55.836 11:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:56.776 11:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot a4e95c12-9552-401c-ab5f-4d63ba0fd4c7 MY_SNAPSHOT 00:06:57.037 11:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=cd7fcf32-2392-4de8-b7e3-c68817952ebd 00:06:57.037 11:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize a4e95c12-9552-401c-ab5f-4d63ba0fd4c7 30 00:06:57.037 11:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone cd7fcf32-2392-4de8-b7e3-c68817952ebd MY_CLONE 00:06:57.296 11:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=2c488fb0-0857-4979-8719-5a19fc976204 00:06:57.296 11:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 2c488fb0-0857-4979-8719-5a19fc976204 00:06:57.866 11:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1735894 00:07:06.001 Initializing NVMe Controllers 00:07:06.001 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:06.001 Controller IO queue size 128, less than required. 00:07:06.001 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
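Condensed, the rpc.py sequence traced above builds the device under test and then exercises the lvol features while spdk_nvme_perf is writing to it: two 64 MiB malloc bdevs are striped into a raid0, a logical volume store is created on the raid, a 20 MiB lvol is carved out and exported over NVMe/TCP, and the snapshot/resize/clone/inflate calls run against it mid-I/O. A sketch of that flow with the returned names and UUIDs captured into variables, assuming rpc_py points at scripts/rpc.py and the perf binary sits under build/bin in an SPDK tree:

    # Hedged sketch of the nvmf_lvol flow traced above (sizes and arguments as in the test).
    rpc_py=./scripts/rpc.py

    $rpc_py nvmf_create_transport -t tcp -o -u 8192

    # Two 64 MiB / 512 B-block malloc bdevs striped into a raid0 base device.
    base1=$($rpc_py bdev_malloc_create 64 512)
    base2=$($rpc_py bdev_malloc_create 64 512)
    $rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b "$base1 $base2"

    # Logical volume store on the raid, then a 20 MiB lvol inside it.
    lvs=$($rpc_py bdev_lvol_create_lvstore raid0 lvs)
    lvol=$($rpc_py bdev_lvol_create -u "$lvs" lvol 20)

    # Export the lvol over NVMe/TCP on 10.0.0.2:4420.
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # Random-write load from the initiator side while the lvol is reshaped.
    ./build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
    perf_pid=$!
    sleep 1

    snap=$($rpc_py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # freeze the current data
    $rpc_py bdev_lvol_resize "$lvol" 30                      # grow the live lvol to 30 MiB
    clone=$($rpc_py bdev_lvol_clone "$snap" MY_CLONE)        # thin clone of the snapshot
    $rpc_py bdev_lvol_inflate "$clone"                       # decouple the clone from its parent

    wait "$perf_pid"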
00:07:06.001 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:06.001 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:06.001 Initialization complete. Launching workers. 00:07:06.001 ======================================================== 00:07:06.001 Latency(us) 00:07:06.001 Device Information : IOPS MiB/s Average min max 00:07:06.001 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16319.50 63.75 7844.09 1594.88 52946.83 00:07:06.001 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17413.80 68.02 7350.71 443.80 39385.22 00:07:06.001 ======================================================== 00:07:06.001 Total : 33733.30 131.77 7589.40 443.80 52946.83 00:07:06.001 00:07:06.001 11:44:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:06.261 11:44:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a4e95c12-9552-401c-ab5f-4d63ba0fd4c7 00:07:06.521 11:44:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8e3395eb-0100-4813-b836-60867ddb8e12 00:07:06.521 11:44:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:06.521 11:44:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:06.521 11:44:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:06.521 11:44:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:06.521 11:44:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:06.521 11:44:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:06.521 11:44:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:06.521 11:44:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:06.521 11:44:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:06.521 rmmod nvme_tcp 00:07:06.521 rmmod nvme_fabrics 00:07:06.521 rmmod nvme_keyring 00:07:06.521 11:44:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:06.521 11:44:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:06.521 11:44:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:06.521 11:44:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 1735210 ']' 00:07:06.521 11:44:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 1735210 00:07:06.521 11:44:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 1735210 ']' 00:07:06.521 11:44:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 1735210 00:07:06.521 11:44:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:07:06.782 11:44:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:06.782 11:44:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1735210 00:07:06.782 11:44:09 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:06.782 11:44:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:06.782 11:44:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1735210' 00:07:06.782 killing process with pid 1735210 00:07:06.782 11:44:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 1735210 00:07:06.782 11:44:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 1735210 00:07:06.782 11:44:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:06.782 11:44:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:06.782 11:44:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:06.782 11:44:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:06.782 11:44:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:07:06.782 11:44:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:06.782 11:44:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:07:06.782 11:44:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:06.782 11:44:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:06.782 11:44:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:06.782 11:44:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:06.782 11:44:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:09.330 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:09.330 00:07:09.330 real 0m24.141s 00:07:09.330 user 1m5.012s 00:07:09.330 sys 0m8.635s 00:07:09.330 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:09.330 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:09.330 ************************************ 00:07:09.330 END TEST nvmf_lvol 00:07:09.330 ************************************ 00:07:09.330 11:44:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:09.330 11:44:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:09.330 11:44:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:09.330 11:44:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:09.330 ************************************ 00:07:09.330 START TEST nvmf_lvs_grow 00:07:09.330 ************************************ 00:07:09.330 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:09.330 * Looking for test storage... 
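The lvol cleanup traced just above (before the nvmf_lvs_grow preamble begins) undoes the setup in reverse order: the subsystem is deleted first so no initiator can still reach the namespace, then the lvol and its store are removed, and the generic nvmftestfini teardown follows as in the earlier test. A short sketch of that ordering, reusing the $rpc_py, $lvol and $lvs variables from the provisioning sketch above:

    # Hedged sketch of the nvmf_lvol teardown traced above.
    # Assumes $rpc_py, $lvol and $lvs are still set from the provisioning step.
    $rpc_py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0   # stop exporting first
    $rpc_py bdev_lvol_delete "$lvol"                           # then drop the volume
    $rpc_py bdev_lvol_delete_lvstore -u "$lvs"                 # and finally its store
    # ...followed by the same nvmftestfini sequence sketched earlier
    # (modprobe -r nvme-tcp/nvme-fabrics, kill nvmf_tgt, restore iptables).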
00:07:09.330 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:09.330 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:09.330 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:07:09.330 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:09.330 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:09.330 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:09.330 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:09.330 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:09.330 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:09.330 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:09.330 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:09.330 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:09.330 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:09.330 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:09.330 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:09.330 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:09.330 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:09.330 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:09.330 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:09.330 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:09.330 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:09.330 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:09.330 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:09.330 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:09.330 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:09.330 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:09.330 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:09.330 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:09.330 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:09.330 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:09.330 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:09.330 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:09.330 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:09.330 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:09.330 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:09.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.331 --rc genhtml_branch_coverage=1 00:07:09.331 --rc genhtml_function_coverage=1 00:07:09.331 --rc genhtml_legend=1 00:07:09.331 --rc geninfo_all_blocks=1 00:07:09.331 --rc geninfo_unexecuted_blocks=1 00:07:09.331 00:07:09.331 ' 00:07:09.331 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:09.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.331 --rc genhtml_branch_coverage=1 00:07:09.331 --rc genhtml_function_coverage=1 00:07:09.331 --rc genhtml_legend=1 00:07:09.331 --rc geninfo_all_blocks=1 00:07:09.331 --rc geninfo_unexecuted_blocks=1 00:07:09.331 00:07:09.331 ' 00:07:09.331 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:09.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.331 --rc genhtml_branch_coverage=1 00:07:09.331 --rc genhtml_function_coverage=1 00:07:09.331 --rc genhtml_legend=1 00:07:09.331 --rc geninfo_all_blocks=1 00:07:09.331 --rc geninfo_unexecuted_blocks=1 00:07:09.331 00:07:09.331 ' 00:07:09.331 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:09.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.331 --rc genhtml_branch_coverage=1 00:07:09.331 --rc genhtml_function_coverage=1 00:07:09.331 --rc genhtml_legend=1 00:07:09.331 --rc geninfo_all_blocks=1 00:07:09.331 --rc geninfo_unexecuted_blocks=1 00:07:09.331 00:07:09.331 ' 00:07:09.331 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:09.331 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:09.331 11:44:11 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:09.331 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:09.331 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:09.331 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:09.331 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:09.331 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:09.331 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:09.331 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:09.331 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:09.331 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:09.331 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:09.331 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:09.331 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:09.331 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:09.331 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:09.331 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:09.331 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:09.331 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:09.331 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:09.331 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:09.331 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:09.331 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.331 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.331 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.331 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:09.331 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.331 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:09.331 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:09.331 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:09.331 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:09.331 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:09.331 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:09.331 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:09.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:09.331 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:09.331 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:09.331 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:09.331 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:09.331 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:09.331 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:09.331 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:09.331 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:09.331 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:09.331 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:09.331 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:09.331 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:09.331 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:09.331 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:09.331 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:09.331 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:09.331 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:09.331 11:44:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:17.471 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:17.471 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:17.471 11:44:19 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:17.471 Found net devices under 0000:31:00.0: cvl_0_0 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:17.471 Found net devices under 0000:31:00.1: cvl_0_1 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:17.471 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:17.472 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:17.472 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:17.472 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:17.472 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:17.472 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:17.472 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:17.472 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:17.472 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:17.472 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:17.472 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:17.472 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:07:17.472 00:07:17.472 --- 10.0.0.2 ping statistics --- 00:07:17.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:17.472 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:07:17.472 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:17.472 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:17.472 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:07:17.472 00:07:17.472 --- 10.0.0.1 ping statistics --- 00:07:17.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:17.472 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:07:17.472 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:17.472 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:07:17.472 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:17.472 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:17.472 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:17.472 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:17.472 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:17.472 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:17.472 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:17.472 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:17.472 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:17.472 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:17.472 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:17.472 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=1742436 00:07:17.472 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 1742436 00:07:17.472 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:17.472 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 1742436 ']' 00:07:17.472 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.472 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:17.472 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.472 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:17.472 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:17.472 [2024-10-11 11:44:19.553971] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
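(For reference: the TCP plumbing that nvmf_tcp_init traced above reduces to the commands below. This is a minimal sketch only — the cvl_0_0/cvl_0_1 interface names, the cvl_0_0_ns_spdk namespace and the 10.0.0.0/24 addresses are the values this particular run detected; on other hosts the NIC names will differ, and the commands must run as root.)

    # move the target-side port into its own namespace and address both ends
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port and verify reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1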
00:07:17.472 [2024-10-11 11:44:19.554033] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:17.472 [2024-10-11 11:44:19.643654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.472 [2024-10-11 11:44:19.694970] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:17.472 [2024-10-11 11:44:19.695018] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:17.472 [2024-10-11 11:44:19.695028] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:17.472 [2024-10-11 11:44:19.695035] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:17.472 [2024-10-11 11:44:19.695041] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:17.472 [2024-10-11 11:44:19.695839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.733 11:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:17.733 11:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:07:17.733 11:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:17.733 11:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:17.733 11:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:17.733 11:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:17.733 11:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:17.993 [2024-10-11 11:44:20.578608] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:17.994 11:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:17.994 11:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:17.994 11:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:17.994 11:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:17.994 ************************************ 00:07:17.994 START TEST lvs_grow_clean 00:07:17.994 ************************************ 00:07:17.994 11:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:07:17.994 11:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:17.994 11:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:17.994 11:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:17.994 11:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:17.994 11:44:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:17.994 11:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:17.994 11:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:17.994 11:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:17.994 11:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:18.254 11:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:18.254 11:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:18.515 11:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=6bce1778-36bb-4200-b3ae-a4573d12b1f5 00:07:18.515 11:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6bce1778-36bb-4200-b3ae-a4573d12b1f5 00:07:18.515 11:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:18.776 11:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:18.776 11:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:18.776 11:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6bce1778-36bb-4200-b3ae-a4573d12b1f5 lvol 150 00:07:18.776 11:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=34aafb2b-f964-4980-ac74-dba03ed44a2f 00:07:18.776 11:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:18.776 11:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:19.036 [2024-10-11 11:44:21.624546] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:19.036 [2024-10-11 11:44:21.624615] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:19.036 true 00:07:19.036 11:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
6bce1778-36bb-4200-b3ae-a4573d12b1f5 00:07:19.036 11:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:19.297 11:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:19.297 11:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:19.557 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 34aafb2b-f964-4980-ac74-dba03ed44a2f 00:07:19.557 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:19.817 [2024-10-11 11:44:22.362880] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:19.817 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:20.078 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1743048 00:07:20.078 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:20.078 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1743048 /var/tmp/bdevperf.sock 00:07:20.078 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:20.078 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 1743048 ']' 00:07:20.078 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:20.078 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:20.078 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:20.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:20.078 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:20.078 11:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:20.078 [2024-10-11 11:44:22.598299] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
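(For reference: the lvs_grow flow being exercised here — create an AIO bdev on a 200M file, build an lvstore and a 150M lvol on it, then enlarge the file and grow the lvstore — reduces to the RPC sequence below. A minimal sketch only: paths are abbreviated to scripts/rpc.py and a local file name, and the $lvs shell variable is shorthand for the UUID the create call prints; the actual run uses the full /var/jenkins/... paths and UUID 6bce1778-36bb-4200-b3ae-a4573d12b1f5, and performs the grow while bdevperf I/O is in flight.)

    # back the lvstore with a 200M AIO file, 4K block size
    truncate -s 200M aio_file
    scripts/rpc.py bdev_aio_create aio_file aio_bdev 4096
    lvs=$(scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    scripts/rpc.py bdev_lvol_create -u "$lvs" lvol 150       # 150 MiB lvol (38 clusters)

    # enlarge the backing file, rescan the AIO bdev, then grow the lvstore into the new space
    truncate -s 400M aio_file
    scripts/rpc.py bdev_aio_rescan aio_bdev                  # 51200 -> 102400 blocks
    scripts/rpc.py bdev_lvol_grow_lvstore -u "$lvs"
    scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 before the grow, 99 after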
00:07:20.078 [2024-10-11 11:44:22.598363] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1743048 ] 00:07:20.078 [2024-10-11 11:44:22.680279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.078 [2024-10-11 11:44:22.732996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.020 11:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:21.020 11:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:07:21.020 11:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:21.280 Nvme0n1 00:07:21.280 11:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:21.280 [ 00:07:21.280 { 00:07:21.280 "name": "Nvme0n1", 00:07:21.280 "aliases": [ 00:07:21.280 "34aafb2b-f964-4980-ac74-dba03ed44a2f" 00:07:21.280 ], 00:07:21.280 "product_name": "NVMe disk", 00:07:21.280 "block_size": 4096, 00:07:21.280 "num_blocks": 38912, 00:07:21.280 "uuid": "34aafb2b-f964-4980-ac74-dba03ed44a2f", 00:07:21.280 "numa_id": 0, 00:07:21.280 "assigned_rate_limits": { 00:07:21.280 "rw_ios_per_sec": 0, 00:07:21.280 "rw_mbytes_per_sec": 0, 00:07:21.280 "r_mbytes_per_sec": 0, 00:07:21.280 "w_mbytes_per_sec": 0 00:07:21.280 }, 00:07:21.280 "claimed": false, 00:07:21.280 "zoned": false, 00:07:21.280 "supported_io_types": { 00:07:21.280 "read": true, 00:07:21.280 "write": true, 00:07:21.280 "unmap": true, 00:07:21.280 "flush": true, 00:07:21.280 "reset": true, 00:07:21.280 "nvme_admin": true, 00:07:21.280 "nvme_io": true, 00:07:21.280 "nvme_io_md": false, 00:07:21.280 "write_zeroes": true, 00:07:21.280 "zcopy": false, 00:07:21.280 "get_zone_info": false, 00:07:21.280 "zone_management": false, 00:07:21.280 "zone_append": false, 00:07:21.280 "compare": true, 00:07:21.280 "compare_and_write": true, 00:07:21.280 "abort": true, 00:07:21.280 "seek_hole": false, 00:07:21.280 "seek_data": false, 00:07:21.280 "copy": true, 00:07:21.280 "nvme_iov_md": false 00:07:21.280 }, 00:07:21.280 "memory_domains": [ 00:07:21.280 { 00:07:21.280 "dma_device_id": "system", 00:07:21.280 "dma_device_type": 1 00:07:21.280 } 00:07:21.280 ], 00:07:21.280 "driver_specific": { 00:07:21.280 "nvme": [ 00:07:21.280 { 00:07:21.280 "trid": { 00:07:21.280 "trtype": "TCP", 00:07:21.280 "adrfam": "IPv4", 00:07:21.280 "traddr": "10.0.0.2", 00:07:21.280 "trsvcid": "4420", 00:07:21.280 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:21.280 }, 00:07:21.280 "ctrlr_data": { 00:07:21.280 "cntlid": 1, 00:07:21.280 "vendor_id": "0x8086", 00:07:21.280 "model_number": "SPDK bdev Controller", 00:07:21.280 "serial_number": "SPDK0", 00:07:21.280 "firmware_revision": "25.01", 00:07:21.280 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:21.280 "oacs": { 00:07:21.280 "security": 0, 00:07:21.280 "format": 0, 00:07:21.280 "firmware": 0, 00:07:21.280 "ns_manage": 0 00:07:21.280 }, 00:07:21.280 "multi_ctrlr": true, 00:07:21.280 
"ana_reporting": false 00:07:21.280 }, 00:07:21.280 "vs": { 00:07:21.280 "nvme_version": "1.3" 00:07:21.280 }, 00:07:21.280 "ns_data": { 00:07:21.280 "id": 1, 00:07:21.280 "can_share": true 00:07:21.280 } 00:07:21.280 } 00:07:21.280 ], 00:07:21.280 "mp_policy": "active_passive" 00:07:21.280 } 00:07:21.280 } 00:07:21.280 ] 00:07:21.540 11:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1743384 00:07:21.540 11:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:21.540 11:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:21.540 Running I/O for 10 seconds... 00:07:22.480 Latency(us) 00:07:22.480 [2024-10-11T09:44:25.183Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:22.480 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:22.480 Nvme0n1 : 1.00 25172.00 98.33 0.00 0.00 0.00 0.00 0.00 00:07:22.480 [2024-10-11T09:44:25.183Z] =================================================================================================================== 00:07:22.480 [2024-10-11T09:44:25.183Z] Total : 25172.00 98.33 0.00 0.00 0.00 0.00 0.00 00:07:22.480 00:07:23.424 11:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6bce1778-36bb-4200-b3ae-a4573d12b1f5 00:07:23.424 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:23.424 Nvme0n1 : 2.00 25296.50 98.81 0.00 0.00 0.00 0.00 0.00 00:07:23.424 [2024-10-11T09:44:26.127Z] =================================================================================================================== 00:07:23.424 [2024-10-11T09:44:26.127Z] Total : 25296.50 98.81 0.00 0.00 0.00 0.00 0.00 00:07:23.424 00:07:23.683 true 00:07:23.683 11:44:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6bce1778-36bb-4200-b3ae-a4573d12b1f5 00:07:23.683 11:44:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:23.683 11:44:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:23.683 11:44:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:23.683 11:44:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1743384 00:07:24.624 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:24.624 Nvme0n1 : 3.00 25354.67 99.04 0.00 0.00 0.00 0.00 0.00 00:07:24.624 [2024-10-11T09:44:27.327Z] =================================================================================================================== 00:07:24.624 [2024-10-11T09:44:27.327Z] Total : 25354.67 99.04 0.00 0.00 0.00 0.00 0.00 00:07:24.624 00:07:25.564 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:25.564 Nvme0n1 : 4.00 25399.00 99.21 0.00 0.00 0.00 0.00 0.00 00:07:25.564 [2024-10-11T09:44:28.267Z] 
=================================================================================================================== 00:07:25.564 [2024-10-11T09:44:28.267Z] Total : 25399.00 99.21 0.00 0.00 0.00 0.00 0.00 00:07:25.564 00:07:26.503 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:26.503 Nvme0n1 : 5.00 25438.60 99.37 0.00 0.00 0.00 0.00 0.00 00:07:26.503 [2024-10-11T09:44:29.206Z] =================================================================================================================== 00:07:26.503 [2024-10-11T09:44:29.206Z] Total : 25438.60 99.37 0.00 0.00 0.00 0.00 0.00 00:07:26.503 00:07:27.441 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:27.441 Nvme0n1 : 6.00 25454.50 99.43 0.00 0.00 0.00 0.00 0.00 00:07:27.441 [2024-10-11T09:44:30.144Z] =================================================================================================================== 00:07:27.441 [2024-10-11T09:44:30.145Z] Total : 25454.50 99.43 0.00 0.00 0.00 0.00 0.00 00:07:27.442 00:07:28.822 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:28.822 Nvme0n1 : 7.00 25470.86 99.50 0.00 0.00 0.00 0.00 0.00 00:07:28.822 [2024-10-11T09:44:31.525Z] =================================================================================================================== 00:07:28.822 [2024-10-11T09:44:31.525Z] Total : 25470.86 99.50 0.00 0.00 0.00 0.00 0.00 00:07:28.822 00:07:29.392 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:29.392 Nvme0n1 : 8.00 25490.88 99.57 0.00 0.00 0.00 0.00 0.00 00:07:29.392 [2024-10-11T09:44:32.095Z] =================================================================================================================== 00:07:29.392 [2024-10-11T09:44:32.095Z] Total : 25490.88 99.57 0.00 0.00 0.00 0.00 0.00 00:07:29.392 00:07:30.773 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:30.773 Nvme0n1 : 9.00 25509.89 99.65 0.00 0.00 0.00 0.00 0.00 00:07:30.773 [2024-10-11T09:44:33.476Z] =================================================================================================================== 00:07:30.773 [2024-10-11T09:44:33.476Z] Total : 25509.89 99.65 0.00 0.00 0.00 0.00 0.00 00:07:30.773 00:07:31.712 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:31.712 Nvme0n1 : 10.00 25524.70 99.71 0.00 0.00 0.00 0.00 0.00 00:07:31.712 [2024-10-11T09:44:34.416Z] =================================================================================================================== 00:07:31.713 [2024-10-11T09:44:34.416Z] Total : 25524.70 99.71 0.00 0.00 0.00 0.00 0.00 00:07:31.713 00:07:31.713 00:07:31.713 Latency(us) 00:07:31.713 [2024-10-11T09:44:34.416Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:31.713 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:31.713 Nvme0n1 : 10.00 25522.17 99.70 0.00 0.00 5011.61 2512.21 9557.33 00:07:31.713 [2024-10-11T09:44:34.416Z] =================================================================================================================== 00:07:31.713 [2024-10-11T09:44:34.416Z] Total : 25522.17 99.70 0.00 0.00 5011.61 2512.21 9557.33 00:07:31.713 { 00:07:31.713 "results": [ 00:07:31.713 { 00:07:31.713 "job": "Nvme0n1", 00:07:31.713 "core_mask": "0x2", 00:07:31.713 "workload": "randwrite", 00:07:31.713 "status": "finished", 00:07:31.713 "queue_depth": 128, 00:07:31.713 "io_size": 4096, 00:07:31.713 
"runtime": 10.003538, 00:07:31.713 "iops": 25522.17025616337, 00:07:31.713 "mibps": 99.69597756313816, 00:07:31.713 "io_failed": 0, 00:07:31.713 "io_timeout": 0, 00:07:31.713 "avg_latency_us": 5011.613939128074, 00:07:31.713 "min_latency_us": 2512.213333333333, 00:07:31.713 "max_latency_us": 9557.333333333334 00:07:31.713 } 00:07:31.713 ], 00:07:31.713 "core_count": 1 00:07:31.713 } 00:07:31.713 11:44:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1743048 00:07:31.713 11:44:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 1743048 ']' 00:07:31.713 11:44:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 1743048 00:07:31.713 11:44:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:07:31.713 11:44:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:31.713 11:44:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1743048 00:07:31.713 11:44:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:31.713 11:44:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:31.713 11:44:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1743048' 00:07:31.713 killing process with pid 1743048 00:07:31.713 11:44:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 1743048 00:07:31.713 Received shutdown signal, test time was about 10.000000 seconds 00:07:31.713 00:07:31.713 Latency(us) 00:07:31.713 [2024-10-11T09:44:34.416Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:31.713 [2024-10-11T09:44:34.416Z] =================================================================================================================== 00:07:31.713 [2024-10-11T09:44:34.416Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:31.713 11:44:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 1743048 00:07:31.713 11:44:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:31.973 11:44:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:31.973 11:44:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6bce1778-36bb-4200-b3ae-a4573d12b1f5 00:07:31.973 11:44:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:32.233 11:44:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:32.233 11:44:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:32.233 11:44:34 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:32.494 [2024-10-11 11:44:34.998642] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:32.494 11:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6bce1778-36bb-4200-b3ae-a4573d12b1f5 00:07:32.494 11:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:07:32.494 11:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6bce1778-36bb-4200-b3ae-a4573d12b1f5 00:07:32.494 11:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:32.494 11:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:32.494 11:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:32.494 11:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:32.494 11:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:32.494 11:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:32.494 11:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:32.494 11:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:32.494 11:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6bce1778-36bb-4200-b3ae-a4573d12b1f5 00:07:32.754 request: 00:07:32.754 { 00:07:32.754 "uuid": "6bce1778-36bb-4200-b3ae-a4573d12b1f5", 00:07:32.754 "method": "bdev_lvol_get_lvstores", 00:07:32.754 "req_id": 1 00:07:32.754 } 00:07:32.754 Got JSON-RPC error response 00:07:32.754 response: 00:07:32.754 { 00:07:32.754 "code": -19, 00:07:32.754 "message": "No such device" 00:07:32.754 } 00:07:32.754 11:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:07:32.754 11:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:32.754 11:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:32.754 11:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:32.754 11:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:32.754 aio_bdev 00:07:32.754 11:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 34aafb2b-f964-4980-ac74-dba03ed44a2f 00:07:32.754 11:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=34aafb2b-f964-4980-ac74-dba03ed44a2f 00:07:32.754 11:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:32.754 11:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:07:32.754 11:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:32.754 11:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:32.754 11:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:33.014 11:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 34aafb2b-f964-4980-ac74-dba03ed44a2f -t 2000 00:07:33.014 [ 00:07:33.014 { 00:07:33.014 "name": "34aafb2b-f964-4980-ac74-dba03ed44a2f", 00:07:33.014 "aliases": [ 00:07:33.014 "lvs/lvol" 00:07:33.014 ], 00:07:33.014 "product_name": "Logical Volume", 00:07:33.014 "block_size": 4096, 00:07:33.014 "num_blocks": 38912, 00:07:33.014 "uuid": "34aafb2b-f964-4980-ac74-dba03ed44a2f", 00:07:33.014 "assigned_rate_limits": { 00:07:33.014 "rw_ios_per_sec": 0, 00:07:33.014 "rw_mbytes_per_sec": 0, 00:07:33.014 "r_mbytes_per_sec": 0, 00:07:33.014 "w_mbytes_per_sec": 0 00:07:33.014 }, 00:07:33.014 "claimed": false, 00:07:33.014 "zoned": false, 00:07:33.014 "supported_io_types": { 00:07:33.014 "read": true, 00:07:33.014 "write": true, 00:07:33.014 "unmap": true, 00:07:33.014 "flush": false, 00:07:33.014 "reset": true, 00:07:33.014 "nvme_admin": false, 00:07:33.014 "nvme_io": false, 00:07:33.014 "nvme_io_md": false, 00:07:33.014 "write_zeroes": true, 00:07:33.014 "zcopy": false, 00:07:33.014 "get_zone_info": false, 00:07:33.014 "zone_management": false, 00:07:33.014 "zone_append": false, 00:07:33.014 "compare": false, 00:07:33.014 "compare_and_write": false, 00:07:33.014 "abort": false, 00:07:33.014 "seek_hole": true, 00:07:33.014 "seek_data": true, 00:07:33.014 "copy": false, 00:07:33.014 "nvme_iov_md": false 00:07:33.014 }, 00:07:33.014 "driver_specific": { 00:07:33.014 "lvol": { 00:07:33.014 "lvol_store_uuid": "6bce1778-36bb-4200-b3ae-a4573d12b1f5", 00:07:33.014 "base_bdev": "aio_bdev", 00:07:33.014 "thin_provision": false, 00:07:33.014 "num_allocated_clusters": 38, 00:07:33.014 "snapshot": false, 00:07:33.014 "clone": false, 00:07:33.014 "esnap_clone": false 00:07:33.014 } 00:07:33.014 } 00:07:33.014 } 00:07:33.014 ] 00:07:33.014 11:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:07:33.014 11:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6bce1778-36bb-4200-b3ae-a4573d12b1f5 00:07:33.014 
11:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:33.278 11:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:33.278 11:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6bce1778-36bb-4200-b3ae-a4573d12b1f5 00:07:33.278 11:44:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:33.538 11:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:33.538 11:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 34aafb2b-f964-4980-ac74-dba03ed44a2f 00:07:33.538 11:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6bce1778-36bb-4200-b3ae-a4573d12b1f5 00:07:33.798 11:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:34.058 11:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:34.058 00:07:34.058 real 0m16.001s 00:07:34.058 user 0m15.631s 00:07:34.058 sys 0m1.446s 00:07:34.058 11:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:34.058 11:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:34.058 ************************************ 00:07:34.058 END TEST lvs_grow_clean 00:07:34.058 ************************************ 00:07:34.058 11:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:34.058 11:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:34.058 11:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:34.058 11:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:34.058 ************************************ 00:07:34.058 START TEST lvs_grow_dirty 00:07:34.058 ************************************ 00:07:34.058 11:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:07:34.058 11:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:34.058 11:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:34.058 11:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:34.058 11:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:34.058 11:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:34.058 11:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:34.058 11:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:34.058 11:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:34.058 11:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:34.319 11:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:34.319 11:44:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:34.580 11:44:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=3a582ad2-7c79-452e-b3f6-d16814dad193 00:07:34.580 11:44:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a582ad2-7c79-452e-b3f6-d16814dad193 00:07:34.580 11:44:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:34.841 11:44:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:34.841 11:44:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:34.841 11:44:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3a582ad2-7c79-452e-b3f6-d16814dad193 lvol 150 00:07:34.841 11:44:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=ccc6f82a-44ee-453b-ae58-25fea8db9647 00:07:34.841 11:44:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:34.841 11:44:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:35.101 [2024-10-11 11:44:37.607260] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:35.101 [2024-10-11 11:44:37.607300] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:35.101 true 00:07:35.101 11:44:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a582ad2-7c79-452e-b3f6-d16814dad193 00:07:35.101 11:44:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:35.101 11:44:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:35.101 11:44:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:35.362 11:44:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ccc6f82a-44ee-453b-ae58-25fea8db9647 00:07:35.623 11:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:35.623 [2024-10-11 11:44:38.265147] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:35.623 11:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:35.884 11:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1746335 00:07:35.884 11:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:35.884 11:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:35.884 11:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1746335 /var/tmp/bdevperf.sock 00:07:35.884 11:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1746335 ']' 00:07:35.884 11:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:35.884 11:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:35.884 11:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:35.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:35.884 11:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:35.884 11:44:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:35.884 [2024-10-11 11:44:38.488471] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
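(For reference: exporting the lvol over NVMe/TCP and pointing bdevperf at it, as traced in both the clean and dirty variants, comes down to the calls below. A minimal sketch only — rpc.py/bdevperf paths are abbreviated, $lvol stands for the UUID printed by bdev_lvol_create (ccc6f82a-44ee-453b-ae58-25fea8db9647 in this run), and bdevperf is assumed to have been started with -z so it waits for RPCs on /var/tmp/bdevperf.sock.)

    # target side: TCP transport, subsystem, namespace backed by the lvol, listener on 10.0.0.2:4420
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # initiator side: bdevperf was launched as
    #   bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z
    # attach the exported namespace as Nvme0n1, then kick off the timed run
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests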
00:07:35.884 [2024-10-11 11:44:38.488539] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1746335 ] 00:07:35.884 [2024-10-11 11:44:38.568309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.144 [2024-10-11 11:44:38.598257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.715 11:44:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:36.715 11:44:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:07:36.715 11:44:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:36.975 Nvme0n1 00:07:36.975 11:44:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:37.235 [ 00:07:37.235 { 00:07:37.235 "name": "Nvme0n1", 00:07:37.235 "aliases": [ 00:07:37.235 "ccc6f82a-44ee-453b-ae58-25fea8db9647" 00:07:37.235 ], 00:07:37.235 "product_name": "NVMe disk", 00:07:37.235 "block_size": 4096, 00:07:37.235 "num_blocks": 38912, 00:07:37.235 "uuid": "ccc6f82a-44ee-453b-ae58-25fea8db9647", 00:07:37.235 "numa_id": 0, 00:07:37.235 "assigned_rate_limits": { 00:07:37.235 "rw_ios_per_sec": 0, 00:07:37.235 "rw_mbytes_per_sec": 0, 00:07:37.235 "r_mbytes_per_sec": 0, 00:07:37.235 "w_mbytes_per_sec": 0 00:07:37.235 }, 00:07:37.235 "claimed": false, 00:07:37.235 "zoned": false, 00:07:37.235 "supported_io_types": { 00:07:37.235 "read": true, 00:07:37.235 "write": true, 00:07:37.235 "unmap": true, 00:07:37.235 "flush": true, 00:07:37.235 "reset": true, 00:07:37.235 "nvme_admin": true, 00:07:37.235 "nvme_io": true, 00:07:37.235 "nvme_io_md": false, 00:07:37.235 "write_zeroes": true, 00:07:37.235 "zcopy": false, 00:07:37.235 "get_zone_info": false, 00:07:37.235 "zone_management": false, 00:07:37.235 "zone_append": false, 00:07:37.235 "compare": true, 00:07:37.235 "compare_and_write": true, 00:07:37.235 "abort": true, 00:07:37.235 "seek_hole": false, 00:07:37.235 "seek_data": false, 00:07:37.235 "copy": true, 00:07:37.235 "nvme_iov_md": false 00:07:37.235 }, 00:07:37.235 "memory_domains": [ 00:07:37.235 { 00:07:37.235 "dma_device_id": "system", 00:07:37.235 "dma_device_type": 1 00:07:37.235 } 00:07:37.235 ], 00:07:37.235 "driver_specific": { 00:07:37.235 "nvme": [ 00:07:37.235 { 00:07:37.235 "trid": { 00:07:37.235 "trtype": "TCP", 00:07:37.235 "adrfam": "IPv4", 00:07:37.235 "traddr": "10.0.0.2", 00:07:37.235 "trsvcid": "4420", 00:07:37.235 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:37.235 }, 00:07:37.235 "ctrlr_data": { 00:07:37.235 "cntlid": 1, 00:07:37.235 "vendor_id": "0x8086", 00:07:37.235 "model_number": "SPDK bdev Controller", 00:07:37.235 "serial_number": "SPDK0", 00:07:37.235 "firmware_revision": "25.01", 00:07:37.235 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:37.235 "oacs": { 00:07:37.235 "security": 0, 00:07:37.235 "format": 0, 00:07:37.235 "firmware": 0, 00:07:37.235 "ns_manage": 0 00:07:37.235 }, 00:07:37.235 "multi_ctrlr": true, 00:07:37.235 
"ana_reporting": false 00:07:37.235 }, 00:07:37.235 "vs": { 00:07:37.235 "nvme_version": "1.3" 00:07:37.235 }, 00:07:37.235 "ns_data": { 00:07:37.235 "id": 1, 00:07:37.235 "can_share": true 00:07:37.235 } 00:07:37.235 } 00:07:37.235 ], 00:07:37.235 "mp_policy": "active_passive" 00:07:37.235 } 00:07:37.235 } 00:07:37.235 ] 00:07:37.235 11:44:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1746542 00:07:37.235 11:44:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:37.235 11:44:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:37.235 Running I/O for 10 seconds... 00:07:38.616 Latency(us) 00:07:38.616 [2024-10-11T09:44:41.319Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:38.616 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:38.616 Nvme0n1 : 1.00 24985.00 97.60 0.00 0.00 0.00 0.00 0.00 00:07:38.616 [2024-10-11T09:44:41.319Z] =================================================================================================================== 00:07:38.616 [2024-10-11T09:44:41.319Z] Total : 24985.00 97.60 0.00 0.00 0.00 0.00 0.00 00:07:38.616 00:07:39.186 11:44:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3a582ad2-7c79-452e-b3f6-d16814dad193 00:07:39.446 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:39.446 Nvme0n1 : 2.00 25244.50 98.61 0.00 0.00 0.00 0.00 0.00 00:07:39.446 [2024-10-11T09:44:42.149Z] =================================================================================================================== 00:07:39.446 [2024-10-11T09:44:42.149Z] Total : 25244.50 98.61 0.00 0.00 0.00 0.00 0.00 00:07:39.446 00:07:39.446 true 00:07:39.446 11:44:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a582ad2-7c79-452e-b3f6-d16814dad193 00:07:39.446 11:44:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:39.705 11:44:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:39.705 11:44:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:39.705 11:44:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1746542 00:07:40.275 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:40.275 Nvme0n1 : 3.00 25340.33 98.99 0.00 0.00 0.00 0.00 0.00 00:07:40.275 [2024-10-11T09:44:42.978Z] =================================================================================================================== 00:07:40.275 [2024-10-11T09:44:42.978Z] Total : 25340.33 98.99 0.00 0.00 0.00 0.00 0.00 00:07:40.275 00:07:41.657 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:41.657 Nvme0n1 : 4.00 25403.25 99.23 0.00 0.00 0.00 0.00 0.00 00:07:41.657 [2024-10-11T09:44:44.360Z] 
=================================================================================================================== 00:07:41.657 [2024-10-11T09:44:44.360Z] Total : 25403.25 99.23 0.00 0.00 0.00 0.00 0.00 00:07:41.657 00:07:42.597 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:42.597 Nvme0n1 : 5.00 25441.00 99.38 0.00 0.00 0.00 0.00 0.00 00:07:42.597 [2024-10-11T09:44:45.300Z] =================================================================================================================== 00:07:42.597 [2024-10-11T09:44:45.300Z] Total : 25441.00 99.38 0.00 0.00 0.00 0.00 0.00 00:07:42.597 00:07:43.537 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:43.537 Nvme0n1 : 6.00 25476.17 99.52 0.00 0.00 0.00 0.00 0.00 00:07:43.537 [2024-10-11T09:44:46.240Z] =================================================================================================================== 00:07:43.537 [2024-10-11T09:44:46.240Z] Total : 25476.17 99.52 0.00 0.00 0.00 0.00 0.00 00:07:43.537 00:07:44.476 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:44.476 Nvme0n1 : 7.00 25501.86 99.62 0.00 0.00 0.00 0.00 0.00 00:07:44.476 [2024-10-11T09:44:47.179Z] =================================================================================================================== 00:07:44.476 [2024-10-11T09:44:47.179Z] Total : 25501.86 99.62 0.00 0.00 0.00 0.00 0.00 00:07:44.476 00:07:45.416 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:45.416 Nvme0n1 : 8.00 25521.75 99.69 0.00 0.00 0.00 0.00 0.00 00:07:45.416 [2024-10-11T09:44:48.119Z] =================================================================================================================== 00:07:45.416 [2024-10-11T09:44:48.119Z] Total : 25521.75 99.69 0.00 0.00 0.00 0.00 0.00 00:07:45.416 00:07:46.354 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:46.354 Nvme0n1 : 9.00 25537.33 99.76 0.00 0.00 0.00 0.00 0.00 00:07:46.354 [2024-10-11T09:44:49.057Z] =================================================================================================================== 00:07:46.354 [2024-10-11T09:44:49.057Z] Total : 25537.33 99.76 0.00 0.00 0.00 0.00 0.00 00:07:46.354 00:07:47.297 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:47.297 Nvme0n1 : 10.00 25555.20 99.83 0.00 0.00 0.00 0.00 0.00 00:07:47.297 [2024-10-11T09:44:50.000Z] =================================================================================================================== 00:07:47.297 [2024-10-11T09:44:50.000Z] Total : 25555.20 99.83 0.00 0.00 0.00 0.00 0.00 00:07:47.297 00:07:47.297 00:07:47.297 Latency(us) 00:07:47.297 [2024-10-11T09:44:50.000Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:47.297 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:47.297 Nvme0n1 : 10.00 25557.64 99.83 0.00 0.00 5005.17 3112.96 15291.73 00:07:47.297 [2024-10-11T09:44:50.000Z] =================================================================================================================== 00:07:47.297 [2024-10-11T09:44:50.000Z] Total : 25557.64 99.83 0.00 0.00 5005.17 3112.96 15291.73 00:07:47.297 { 00:07:47.297 "results": [ 00:07:47.297 { 00:07:47.297 "job": "Nvme0n1", 00:07:47.297 "core_mask": "0x2", 00:07:47.297 "workload": "randwrite", 00:07:47.297 "status": "finished", 00:07:47.297 "queue_depth": 128, 00:07:47.297 "io_size": 4096, 00:07:47.297 
"runtime": 10.004052, 00:07:47.297 "iops": 25557.644042633925, 00:07:47.297 "mibps": 99.83454704153877, 00:07:47.297 "io_failed": 0, 00:07:47.297 "io_timeout": 0, 00:07:47.297 "avg_latency_us": 5005.166538172715, 00:07:47.297 "min_latency_us": 3112.96, 00:07:47.297 "max_latency_us": 15291.733333333334 00:07:47.297 } 00:07:47.297 ], 00:07:47.297 "core_count": 1 00:07:47.297 } 00:07:47.297 11:44:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1746335 00:07:47.297 11:44:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 1746335 ']' 00:07:47.297 11:44:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 1746335 00:07:47.297 11:44:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:07:47.297 11:44:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:47.297 11:44:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1746335 00:07:47.558 11:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:47.558 11:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:47.558 11:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1746335' 00:07:47.558 killing process with pid 1746335 00:07:47.558 11:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 1746335 00:07:47.558 Received shutdown signal, test time was about 10.000000 seconds 00:07:47.558 00:07:47.558 Latency(us) 00:07:47.558 [2024-10-11T09:44:50.261Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:47.558 [2024-10-11T09:44:50.261Z] =================================================================================================================== 00:07:47.558 [2024-10-11T09:44:50.261Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:47.558 11:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 1746335 00:07:47.558 11:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:47.820 11:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:47.820 11:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a582ad2-7c79-452e-b3f6-d16814dad193 00:07:47.820 11:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:48.080 11:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:48.080 11:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:48.080 11:44:50 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1742436 00:07:48.080 11:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1742436 00:07:48.080 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1742436 Killed "${NVMF_APP[@]}" "$@" 00:07:48.080 11:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:48.080 11:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:48.080 11:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:48.080 11:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:48.080 11:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:48.080 11:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=1748830 00:07:48.080 11:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 1748830 00:07:48.080 11:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:48.080 11:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1748830 ']' 00:07:48.080 11:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.081 11:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:48.081 11:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.081 11:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:48.081 11:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:48.081 [2024-10-11 11:44:50.780746] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:07:48.081 [2024-10-11 11:44:50.780800] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:48.386 [2024-10-11 11:44:50.865926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.386 [2024-10-11 11:44:50.895709] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:48.386 [2024-10-11 11:44:50.895734] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:48.386 [2024-10-11 11:44:50.895740] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:48.386 [2024-10-11 11:44:50.895745] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:07:48.386 [2024-10-11 11:44:50.895749] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:48.386 [2024-10-11 11:44:50.896200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.074 11:44:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:49.074 11:44:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:07:49.074 11:44:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:49.074 11:44:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:49.074 11:44:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:49.074 11:44:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:49.074 11:44:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:49.336 [2024-10-11 11:44:51.749726] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:49.336 [2024-10-11 11:44:51.749841] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:49.336 [2024-10-11 11:44:51.749864] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:49.336 11:44:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:49.336 11:44:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev ccc6f82a-44ee-453b-ae58-25fea8db9647 00:07:49.336 11:44:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=ccc6f82a-44ee-453b-ae58-25fea8db9647 00:07:49.336 11:44:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:49.336 11:44:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:07:49.336 11:44:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:49.336 11:44:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:49.336 11:44:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:49.336 11:44:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ccc6f82a-44ee-453b-ae58-25fea8db9647 -t 2000 00:07:49.597 [ 00:07:49.597 { 00:07:49.597 "name": "ccc6f82a-44ee-453b-ae58-25fea8db9647", 00:07:49.597 "aliases": [ 00:07:49.597 "lvs/lvol" 00:07:49.597 ], 00:07:49.597 "product_name": "Logical Volume", 00:07:49.597 "block_size": 4096, 00:07:49.597 "num_blocks": 38912, 00:07:49.597 "uuid": "ccc6f82a-44ee-453b-ae58-25fea8db9647", 00:07:49.597 "assigned_rate_limits": { 00:07:49.597 "rw_ios_per_sec": 0, 00:07:49.597 "rw_mbytes_per_sec": 0, 
00:07:49.597 "r_mbytes_per_sec": 0, 00:07:49.597 "w_mbytes_per_sec": 0 00:07:49.597 }, 00:07:49.597 "claimed": false, 00:07:49.597 "zoned": false, 00:07:49.597 "supported_io_types": { 00:07:49.597 "read": true, 00:07:49.597 "write": true, 00:07:49.597 "unmap": true, 00:07:49.597 "flush": false, 00:07:49.597 "reset": true, 00:07:49.597 "nvme_admin": false, 00:07:49.597 "nvme_io": false, 00:07:49.597 "nvme_io_md": false, 00:07:49.597 "write_zeroes": true, 00:07:49.597 "zcopy": false, 00:07:49.597 "get_zone_info": false, 00:07:49.597 "zone_management": false, 00:07:49.597 "zone_append": false, 00:07:49.597 "compare": false, 00:07:49.597 "compare_and_write": false, 00:07:49.597 "abort": false, 00:07:49.597 "seek_hole": true, 00:07:49.597 "seek_data": true, 00:07:49.597 "copy": false, 00:07:49.597 "nvme_iov_md": false 00:07:49.597 }, 00:07:49.597 "driver_specific": { 00:07:49.597 "lvol": { 00:07:49.597 "lvol_store_uuid": "3a582ad2-7c79-452e-b3f6-d16814dad193", 00:07:49.597 "base_bdev": "aio_bdev", 00:07:49.597 "thin_provision": false, 00:07:49.597 "num_allocated_clusters": 38, 00:07:49.597 "snapshot": false, 00:07:49.597 "clone": false, 00:07:49.597 "esnap_clone": false 00:07:49.597 } 00:07:49.597 } 00:07:49.597 } 00:07:49.597 ] 00:07:49.597 11:44:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:07:49.597 11:44:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a582ad2-7c79-452e-b3f6-d16814dad193 00:07:49.597 11:44:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:49.597 11:44:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:49.597 11:44:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a582ad2-7c79-452e-b3f6-d16814dad193 00:07:49.597 11:44:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:49.859 11:44:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:49.859 11:44:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:49.859 [2024-10-11 11:44:52.558304] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:50.119 11:44:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a582ad2-7c79-452e-b3f6-d16814dad193 00:07:50.119 11:44:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:07:50.119 11:44:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a582ad2-7c79-452e-b3f6-d16814dad193 00:07:50.119 11:44:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:50.119 11:44:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:50.119 11:44:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:50.119 11:44:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:50.120 11:44:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:50.120 11:44:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:50.120 11:44:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:50.120 11:44:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:50.120 11:44:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a582ad2-7c79-452e-b3f6-d16814dad193 00:07:50.120 request: 00:07:50.120 { 00:07:50.120 "uuid": "3a582ad2-7c79-452e-b3f6-d16814dad193", 00:07:50.120 "method": "bdev_lvol_get_lvstores", 00:07:50.120 "req_id": 1 00:07:50.120 } 00:07:50.120 Got JSON-RPC error response 00:07:50.120 response: 00:07:50.120 { 00:07:50.120 "code": -19, 00:07:50.120 "message": "No such device" 00:07:50.120 } 00:07:50.120 11:44:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:07:50.120 11:44:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:50.120 11:44:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:50.120 11:44:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:50.120 11:44:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:50.380 aio_bdev 00:07:50.380 11:44:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ccc6f82a-44ee-453b-ae58-25fea8db9647 00:07:50.380 11:44:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=ccc6f82a-44ee-453b-ae58-25fea8db9647 00:07:50.380 11:44:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:50.380 11:44:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:07:50.380 11:44:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:50.380 11:44:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:50.380 11:44:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:50.640 11:44:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ccc6f82a-44ee-453b-ae58-25fea8db9647 -t 2000 00:07:50.640 [ 00:07:50.640 { 00:07:50.640 "name": "ccc6f82a-44ee-453b-ae58-25fea8db9647", 00:07:50.640 "aliases": [ 00:07:50.640 "lvs/lvol" 00:07:50.640 ], 00:07:50.640 "product_name": "Logical Volume", 00:07:50.640 "block_size": 4096, 00:07:50.640 "num_blocks": 38912, 00:07:50.640 "uuid": "ccc6f82a-44ee-453b-ae58-25fea8db9647", 00:07:50.640 "assigned_rate_limits": { 00:07:50.640 "rw_ios_per_sec": 0, 00:07:50.640 "rw_mbytes_per_sec": 0, 00:07:50.640 "r_mbytes_per_sec": 0, 00:07:50.640 "w_mbytes_per_sec": 0 00:07:50.640 }, 00:07:50.640 "claimed": false, 00:07:50.640 "zoned": false, 00:07:50.640 "supported_io_types": { 00:07:50.640 "read": true, 00:07:50.640 "write": true, 00:07:50.640 "unmap": true, 00:07:50.640 "flush": false, 00:07:50.640 "reset": true, 00:07:50.640 "nvme_admin": false, 00:07:50.640 "nvme_io": false, 00:07:50.640 "nvme_io_md": false, 00:07:50.640 "write_zeroes": true, 00:07:50.640 "zcopy": false, 00:07:50.640 "get_zone_info": false, 00:07:50.640 "zone_management": false, 00:07:50.640 "zone_append": false, 00:07:50.640 "compare": false, 00:07:50.640 "compare_and_write": false, 00:07:50.640 "abort": false, 00:07:50.640 "seek_hole": true, 00:07:50.640 "seek_data": true, 00:07:50.640 "copy": false, 00:07:50.640 "nvme_iov_md": false 00:07:50.640 }, 00:07:50.640 "driver_specific": { 00:07:50.640 "lvol": { 00:07:50.640 "lvol_store_uuid": "3a582ad2-7c79-452e-b3f6-d16814dad193", 00:07:50.640 "base_bdev": "aio_bdev", 00:07:50.640 "thin_provision": false, 00:07:50.640 "num_allocated_clusters": 38, 00:07:50.640 "snapshot": false, 00:07:50.640 "clone": false, 00:07:50.640 "esnap_clone": false 00:07:50.640 } 00:07:50.640 } 00:07:50.640 } 00:07:50.640 ] 00:07:50.640 11:44:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:07:50.640 11:44:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a582ad2-7c79-452e-b3f6-d16814dad193 00:07:50.640 11:44:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:50.901 11:44:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:50.901 11:44:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a582ad2-7c79-452e-b3f6-d16814dad193 00:07:50.901 11:44:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:51.162 11:44:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:51.162 11:44:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ccc6f82a-44ee-453b-ae58-25fea8db9647 00:07:51.162 11:44:53 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3a582ad2-7c79-452e-b3f6-d16814dad193 00:07:51.422 11:44:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:51.683 11:44:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:51.683 00:07:51.683 real 0m17.439s 00:07:51.683 user 0m45.924s 00:07:51.683 sys 0m2.986s 00:07:51.683 11:44:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:51.683 11:44:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:51.683 ************************************ 00:07:51.683 END TEST lvs_grow_dirty 00:07:51.683 ************************************ 00:07:51.683 11:44:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:51.683 11:44:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:07:51.683 11:44:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:07:51.683 11:44:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:07:51.683 11:44:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:51.683 11:44:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:07:51.683 11:44:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:07:51.683 11:44:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:07:51.683 11:44:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:51.683 nvmf_trace.0 00:07:51.683 11:44:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:07:51.683 11:44:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:51.683 11:44:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:51.683 11:44:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:51.683 11:44:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:51.683 11:44:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:51.683 11:44:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:51.683 11:44:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:51.683 rmmod nvme_tcp 00:07:51.683 rmmod nvme_fabrics 00:07:51.683 rmmod nvme_keyring 00:07:51.683 11:44:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:51.683 11:44:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:51.683 11:44:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:51.683 
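[editor's note] The "dirty" recovery sequence traced above is easier to follow in condensed form. After the original nvmf_tgt is killed with SIGKILL (pid 1742436) and a fresh target is started, the script re-registers the AIO file backing the lvstore, lets the blobstore replay ("Performing recovery"), and verifies that the grown geometry survived the crash. RPC names, UUIDs and the expected cluster counts are taken from the trace; this is a rough, condensed reconstruction (the create/verify cycle actually runs twice in the script) with error handling omitted:

  rpc=$rootdir/scripts/rpc.py   # same shorthand as in the earlier note

  # re-create the AIO bdev that backs the lvstore; the blobstore is recovered on load
  $rpc bdev_aio_create $rootdir/test/nvmf/target/aio_bdev aio_bdev 4096
  $rpc bdev_wait_for_examine

  # the grown lvstore must still report 99 total data clusters with 61 free
  free=$($rpc bdev_lvol_get_lvstores -u 3a582ad2-7c79-452e-b3f6-d16814dad193 | jq -r '.[0].free_clusters')
  total=$($rpc bdev_lvol_get_lvstores -u 3a582ad2-7c79-452e-b3f6-d16814dad193 | jq -r '.[0].total_data_clusters')
  (( free == 61 && total == 99 ))

  # deleting the AIO bdev hot-removes the lvstore, so the same query must now fail (-19, "No such device")
  $rpc bdev_aio_delete aio_bdev
  ! $rpc bdev_lvol_get_lvstores -u 3a582ad2-7c79-452e-b3f6-d16814dad193

  # final teardown mirrors the tail of the trace
  $rpc bdev_lvol_delete ccc6f82a-44ee-453b-ae58-25fea8db9647
  $rpc bdev_lvol_delete_lvstore -u 3a582ad2-7c79-452e-b3f6-d16814dad193
  $rpc bdev_aio_delete aio_bdev

With those checks passing, lvs_grow_dirty finishes and the trace below proceeds to the shared-memory trace capture, nvmf module unload and process cleanup before the next test starts.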
11:44:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 1748830 ']' 00:07:51.683 11:44:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 1748830 00:07:51.683 11:44:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 1748830 ']' 00:07:51.683 11:44:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 1748830 00:07:51.683 11:44:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:07:51.683 11:44:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:51.683 11:44:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1748830 00:07:51.944 11:44:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:51.944 11:44:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:51.944 11:44:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1748830' 00:07:51.944 killing process with pid 1748830 00:07:51.944 11:44:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 1748830 00:07:51.944 11:44:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 1748830 00:07:51.944 11:44:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:51.944 11:44:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:51.944 11:44:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:51.944 11:44:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:51.944 11:44:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:07:51.944 11:44:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:51.944 11:44:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:07:51.944 11:44:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:51.944 11:44:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:51.944 11:44:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.944 11:44:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:51.944 11:44:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:54.494 00:07:54.494 real 0m45.002s 00:07:54.494 user 1m7.956s 00:07:54.494 sys 0m10.637s 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:54.494 ************************************ 00:07:54.494 END TEST nvmf_lvs_grow 00:07:54.494 ************************************ 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:54.494 ************************************ 00:07:54.494 START TEST nvmf_bdev_io_wait 00:07:54.494 ************************************ 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:54.494 * Looking for test storage... 00:07:54.494 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:54.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.494 --rc genhtml_branch_coverage=1 00:07:54.494 --rc genhtml_function_coverage=1 00:07:54.494 --rc genhtml_legend=1 00:07:54.494 --rc geninfo_all_blocks=1 00:07:54.494 --rc geninfo_unexecuted_blocks=1 00:07:54.494 00:07:54.494 ' 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:54.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.494 --rc genhtml_branch_coverage=1 00:07:54.494 --rc genhtml_function_coverage=1 00:07:54.494 --rc genhtml_legend=1 00:07:54.494 --rc geninfo_all_blocks=1 00:07:54.494 --rc geninfo_unexecuted_blocks=1 00:07:54.494 00:07:54.494 ' 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:54.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.494 --rc genhtml_branch_coverage=1 00:07:54.494 --rc genhtml_function_coverage=1 00:07:54.494 --rc genhtml_legend=1 00:07:54.494 --rc geninfo_all_blocks=1 00:07:54.494 --rc geninfo_unexecuted_blocks=1 00:07:54.494 00:07:54.494 ' 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:54.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.494 --rc genhtml_branch_coverage=1 00:07:54.494 --rc genhtml_function_coverage=1 00:07:54.494 --rc genhtml_legend=1 00:07:54.494 --rc geninfo_all_blocks=1 00:07:54.494 --rc geninfo_unexecuted_blocks=1 00:07:54.494 00:07:54.494 ' 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:54.494 11:44:56 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:54.494 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:54.495 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:54.495 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:54.495 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:54.495 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:54.495 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:54.495 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:54.495 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.495 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.495 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.495 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:54.495 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.495 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:54.495 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:54.495 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:54.495 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:54.495 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:54.495 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:54.495 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:54.495 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:54.495 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:54.495 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:54.495 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:54.495 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:54.495 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:07:54.495 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:54.495 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:54.495 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:54.495 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:54.495 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:54.495 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:54.495 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:54.495 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:54.495 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:54.495 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:54.495 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:54.495 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:54.495 11:44:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:02.637 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:02.637 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:02.637 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:02.637 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:02.637 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:02.637 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:02.637 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:02.637 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:02.637 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:02.637 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:02.637 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:02.637 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:02.637 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:02.637 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:08:02.637 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:02.637 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:02.637 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:02.637 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:02.637 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:02.637 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:02.637 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:02.637 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:02.637 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:02.637 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:02.637 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:02.637 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:02.637 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:02.637 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:02.637 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:02.637 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:02.637 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:02.637 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:02.637 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:02.637 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:02.637 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:02.637 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:02.637 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:02.637 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:02.637 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:02.637 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:02.637 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:02.637 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:02.637 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:02.637 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:02.637 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:02.637 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:02.637 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:02.637 11:45:04 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:02.637 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:02.638 Found net devices under 0000:31:00.0: cvl_0_0 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:02.638 Found net devices under 0000:31:00.1: cvl_0_1 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:02.638 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:02.638 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.617 ms 00:08:02.638 00:08:02.638 --- 10.0.0.2 ping statistics --- 00:08:02.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.638 rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:02.638 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:02.638 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:08:02.638 00:08:02.638 --- 10.0.0.1 ping statistics --- 00:08:02.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.638 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=1754080 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 1754080 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 1754080 ']' 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:02.638 11:45:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:02.638 [2024-10-11 11:45:04.654117] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
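
The nvmf_tcp_init trace above builds the test topology out of the two E810 ports found earlier: cvl_0_0 is moved into a private network namespace and becomes the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1); nvmf_tgt is then launched inside that namespace with ip netns exec. A condensed stand-alone sketch of the same setup follows, with interface names, addresses, and the namespace name taken from the trace; it is a sketch of the traced steps, not the harness code itself.

# Minimal sketch of the namespace topology built by nvmf_tcp_init (names from the trace above).
TGT_IF=cvl_0_0              # port that becomes the NVMe/TCP target side
INI_IF=cvl_0_1              # sibling port, left in the root namespace as the initiator side
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                            # isolate the target port
ip addr add 10.0.0.1/24 dev "$INI_IF"                        # initiator address (root namespace)
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"    # target address (inside the namespace)

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP port and verify reachability in both directions, as the trace does.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
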
00:08:02.638 [2024-10-11 11:45:04.654184] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:02.638 [2024-10-11 11:45:04.746083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:02.638 [2024-10-11 11:45:04.800543] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:02.638 [2024-10-11 11:45:04.800596] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:02.638 [2024-10-11 11:45:04.800605] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:02.638 [2024-10-11 11:45:04.800612] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:02.638 [2024-10-11 11:45:04.800619] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:02.638 [2024-10-11 11:45:04.802930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:02.638 [2024-10-11 11:45:04.803109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:02.638 [2024-10-11 11:45:04.803211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:02.638 [2024-10-11 11:45:04.803214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.901 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:02.901 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:08:02.901 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:02.901 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:02.901 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:02.901 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:02.901 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:02.901 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.901 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:02.901 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.901 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:02.901 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.901 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:02.901 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.901 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:02.901 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.901 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:08:03.163 [2024-10-11 11:45:05.610727] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:03.163 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.163 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:03.163 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.163 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:03.163 Malloc0 00:08:03.163 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.163 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:03.163 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.163 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:03.163 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.163 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:03.163 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.163 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:03.163 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.163 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:03.163 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.163 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:03.163 [2024-10-11 11:45:05.676576] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:03.163 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.163 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1754201 00:08:03.163 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1754204 00:08:03.163 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:03.163 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:03.163 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:03.163 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:03.163 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:03.163 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:03.163 { 00:08:03.163 "params": { 
00:08:03.163 "name": "Nvme$subsystem", 00:08:03.163 "trtype": "$TEST_TRANSPORT", 00:08:03.163 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:03.163 "adrfam": "ipv4", 00:08:03.163 "trsvcid": "$NVMF_PORT", 00:08:03.163 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:03.163 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:03.163 "hdgst": ${hdgst:-false}, 00:08:03.163 "ddgst": ${ddgst:-false} 00:08:03.163 }, 00:08:03.163 "method": "bdev_nvme_attach_controller" 00:08:03.163 } 00:08:03.163 EOF 00:08:03.163 )") 00:08:03.163 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1754207 00:08:03.163 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:03.163 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:03.163 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:03.163 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:03.163 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:03.163 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:03.163 { 00:08:03.163 "params": { 00:08:03.163 "name": "Nvme$subsystem", 00:08:03.163 "trtype": "$TEST_TRANSPORT", 00:08:03.163 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:03.163 "adrfam": "ipv4", 00:08:03.163 "trsvcid": "$NVMF_PORT", 00:08:03.163 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:03.163 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:03.163 "hdgst": ${hdgst:-false}, 00:08:03.163 "ddgst": ${ddgst:-false} 00:08:03.163 }, 00:08:03.163 "method": "bdev_nvme_attach_controller" 00:08:03.163 } 00:08:03.163 EOF 00:08:03.163 )") 00:08:03.163 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1754210 00:08:03.163 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:03.163 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:03.163 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:03.163 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:03.163 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:03.163 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:03.163 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:03.163 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:03.163 { 00:08:03.163 "params": { 00:08:03.163 "name": "Nvme$subsystem", 00:08:03.163 "trtype": "$TEST_TRANSPORT", 00:08:03.163 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:03.163 "adrfam": "ipv4", 00:08:03.163 "trsvcid": "$NVMF_PORT", 00:08:03.163 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:03.163 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:03.163 "hdgst": ${hdgst:-false}, 
00:08:03.163 "ddgst": ${ddgst:-false} 00:08:03.163 }, 00:08:03.163 "method": "bdev_nvme_attach_controller" 00:08:03.163 } 00:08:03.163 EOF 00:08:03.163 )") 00:08:03.163 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:03.163 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:03.163 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:03.163 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:03.163 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:03.163 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:03.163 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:03.163 { 00:08:03.163 "params": { 00:08:03.163 "name": "Nvme$subsystem", 00:08:03.163 "trtype": "$TEST_TRANSPORT", 00:08:03.163 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:03.163 "adrfam": "ipv4", 00:08:03.163 "trsvcid": "$NVMF_PORT", 00:08:03.163 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:03.163 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:03.163 "hdgst": ${hdgst:-false}, 00:08:03.163 "ddgst": ${ddgst:-false} 00:08:03.163 }, 00:08:03.163 "method": "bdev_nvme_attach_controller" 00:08:03.163 } 00:08:03.163 EOF 00:08:03.163 )") 00:08:03.163 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:03.163 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1754201 00:08:03.163 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:03.164 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:08:03.164 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:08:03.164 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:08:03.164 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:03.164 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:03.164 "params": { 00:08:03.164 "name": "Nvme1", 00:08:03.164 "trtype": "tcp", 00:08:03.164 "traddr": "10.0.0.2", 00:08:03.164 "adrfam": "ipv4", 00:08:03.164 "trsvcid": "4420", 00:08:03.164 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:03.164 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:03.164 "hdgst": false, 00:08:03.164 "ddgst": false 00:08:03.164 }, 00:08:03.164 "method": "bdev_nvme_attach_controller" 00:08:03.164 }' 00:08:03.164 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
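
Before the bdevperf clients are started, the target itself is provisioned purely over JSON-RPC: the rpc_cmd calls traced at bdev_io_wait.sh lines 18-25 shrink the bdev_io pool, finish framework init (the target was launched with --wait-for-rpc), create the TCP transport, and expose a 64 MiB malloc bdev through nqn.2016-06.io.spdk:cnode1 with a listener on 10.0.0.2:4420. Outside the harness the same sequence can be driven with scripts/rpc.py; the sketch below mirrors the traced arguments and assumes the target's default /var/tmp/spdk.sock RPC socket.

# Provision the target over JSON-RPC; arguments mirror the rpc_cmd calls in the trace.
RPC=./scripts/rpc.py    # assumes the default /var/tmp/spdk.sock socket of the running nvmf_tgt

$RPC bdev_set_options -p 5 -c 1        # tiny bdev_io pool/cache, so the io_wait path can actually trigger
$RPC framework_start_init              # complete startup (target was launched with --wait-for-rpc)
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
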
00:08:03.164 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:03.164 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:03.164 "params": { 00:08:03.164 "name": "Nvme1", 00:08:03.164 "trtype": "tcp", 00:08:03.164 "traddr": "10.0.0.2", 00:08:03.164 "adrfam": "ipv4", 00:08:03.164 "trsvcid": "4420", 00:08:03.164 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:03.164 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:03.164 "hdgst": false, 00:08:03.164 "ddgst": false 00:08:03.164 }, 00:08:03.164 "method": "bdev_nvme_attach_controller" 00:08:03.164 }' 00:08:03.164 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:03.164 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:03.164 "params": { 00:08:03.164 "name": "Nvme1", 00:08:03.164 "trtype": "tcp", 00:08:03.164 "traddr": "10.0.0.2", 00:08:03.164 "adrfam": "ipv4", 00:08:03.164 "trsvcid": "4420", 00:08:03.164 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:03.164 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:03.164 "hdgst": false, 00:08:03.164 "ddgst": false 00:08:03.164 }, 00:08:03.164 "method": "bdev_nvme_attach_controller" 00:08:03.164 }' 00:08:03.164 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:03.164 11:45:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:03.164 "params": { 00:08:03.164 "name": "Nvme1", 00:08:03.164 "trtype": "tcp", 00:08:03.164 "traddr": "10.0.0.2", 00:08:03.164 "adrfam": "ipv4", 00:08:03.164 "trsvcid": "4420", 00:08:03.164 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:03.164 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:03.164 "hdgst": false, 00:08:03.164 "ddgst": false 00:08:03.164 }, 00:08:03.164 "method": "bdev_nvme_attach_controller" 00:08:03.164 }' 00:08:03.164 [2024-10-11 11:45:05.733923] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:08:03.164 [2024-10-11 11:45:05.733987] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:03.164 [2024-10-11 11:45:05.739826] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:08:03.164 [2024-10-11 11:45:05.739884] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:03.164 [2024-10-11 11:45:05.746233] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:08:03.164 [2024-10-11 11:45:05.746289] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:03.164 [2024-10-11 11:45:05.747680] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
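
Each bdevperf instance above receives its bdev configuration on the fly: gen_nvmf_target_json emits a bdev_nvme_attach_controller stanza (its resolved form is the JSON printed in the trace) and the harness feeds it in through process substitution as --json /dev/fd/63. Below is a single-instance sketch that writes the JSON to a file for readability; the stanza is the one from the trace, and the surrounding subsystems/bdev wrapper is assumed to be the standard SPDK JSON-config shape that gen_nvmf_target_json places around it.

# One bdevperf client against the target; the attach stanza matches the resolved JSON in the trace.
cat > /tmp/nvmf_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# 128 outstanding 4 KiB writes for 1 second on core 4 (mask 0x10), 256 MiB of memory (-s),
# shared-memory id 1 (-i) so several instances can run side by side.
./build/examples/bdevperf -m 0x10 -i 1 --json /tmp/nvmf_bdev.json -q 128 -o 4096 -w write -t 1 -s 256

The read, flush, and unmap instances in the trace differ only in -w, the core mask, and the shared-memory id.
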
00:08:03.164 [2024-10-11 11:45:05.747742] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:03.426 [2024-10-11 11:45:05.892073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.426 [2024-10-11 11:45:05.931835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:03.426 [2024-10-11 11:45:05.955711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.426 [2024-10-11 11:45:05.996655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:03.426 [2024-10-11 11:45:06.021117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.426 [2024-10-11 11:45:06.055432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:03.426 [2024-10-11 11:45:06.086968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.426 [2024-10-11 11:45:06.124445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:03.687 Running I/O for 1 seconds... 00:08:03.687 Running I/O for 1 seconds... 00:08:03.687 Running I/O for 1 seconds... 00:08:03.687 Running I/O for 1 seconds... 00:08:04.627 7194.00 IOPS, 28.10 MiB/s [2024-10-11T09:45:07.330Z] 13335.00 IOPS, 52.09 MiB/s 00:08:04.627 Latency(us) 00:08:04.627 [2024-10-11T09:45:07.330Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:04.627 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:04.627 Nvme1n1 : 1.01 13368.16 52.22 0.00 0.00 9538.14 5461.33 19770.03 00:08:04.627 [2024-10-11T09:45:07.330Z] =================================================================================================================== 00:08:04.627 [2024-10-11T09:45:07.330Z] Total : 13368.16 52.22 0.00 0.00 9538.14 5461.33 19770.03 00:08:04.627 00:08:04.627 Latency(us) 00:08:04.627 [2024-10-11T09:45:07.330Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:04.627 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:04.627 Nvme1n1 : 1.02 7176.24 28.03 0.00 0.00 17645.63 7700.48 29272.75 00:08:04.627 [2024-10-11T09:45:07.330Z] =================================================================================================================== 00:08:04.627 [2024-10-11T09:45:07.330Z] Total : 7176.24 28.03 0.00 0.00 17645.63 7700.48 29272.75 00:08:04.627 7126.00 IOPS, 27.84 MiB/s [2024-10-11T09:45:07.330Z] 185456.00 IOPS, 724.44 MiB/s 00:08:04.627 Latency(us) 00:08:04.627 [2024-10-11T09:45:07.330Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:04.627 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:04.627 Nvme1n1 : 1.00 185087.16 723.00 0.00 0.00 687.73 307.20 1979.73 00:08:04.627 [2024-10-11T09:45:07.330Z] =================================================================================================================== 00:08:04.627 [2024-10-11T09:45:07.330Z] Total : 185087.16 723.00 0.00 0.00 687.73 307.20 1979.73 00:08:04.627 00:08:04.628 Latency(us) 00:08:04.628 [2024-10-11T09:45:07.331Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:04.628 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:04.628 Nvme1n1 : 1.01 7237.67 28.27 0.00 0.00 17631.36 4450.99 40195.41 00:08:04.628 [2024-10-11T09:45:07.331Z] 
=================================================================================================================== 00:08:04.628 [2024-10-11T09:45:07.331Z] Total : 7237.67 28.27 0.00 0.00 17631.36 4450.99 40195.41 00:08:04.887 11:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1754204 00:08:04.887 11:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1754207 00:08:04.887 11:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1754210 00:08:04.887 11:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:04.887 11:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.887 11:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:04.887 11:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.887 11:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:04.887 11:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:04.887 11:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:04.887 11:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:04.887 11:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:04.887 11:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:04.887 11:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:04.887 11:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:04.887 rmmod nvme_tcp 00:08:04.887 rmmod nvme_fabrics 00:08:04.887 rmmod nvme_keyring 00:08:04.887 11:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:04.887 11:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:04.887 11:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:04.887 11:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 1754080 ']' 00:08:04.887 11:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 1754080 00:08:04.887 11:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 1754080 ']' 00:08:04.887 11:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 1754080 00:08:04.887 11:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:08:04.887 11:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:04.887 11:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1754080 00:08:04.887 11:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:04.887 11:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:04.887 11:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # 
echo 'killing process with pid 1754080' 00:08:04.887 killing process with pid 1754080 00:08:04.887 11:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 1754080 00:08:04.887 11:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 1754080 00:08:05.147 11:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:05.147 11:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:05.147 11:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:05.147 11:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:05.147 11:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:05.147 11:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:08:05.147 11:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:08:05.147 11:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:05.147 11:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:05.147 11:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:05.147 11:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:05.147 11:45:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.692 11:45:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:07.692 00:08:07.692 real 0m13.143s 00:08:07.692 user 0m19.073s 00:08:07.692 sys 0m7.388s 00:08:07.692 11:45:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:07.692 11:45:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:07.692 ************************************ 00:08:07.692 END TEST nvmf_bdev_io_wait 00:08:07.692 ************************************ 00:08:07.692 11:45:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:07.692 11:45:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:07.693 11:45:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:07.693 11:45:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:07.693 ************************************ 00:08:07.693 START TEST nvmf_queue_depth 00:08:07.693 ************************************ 00:08:07.693 11:45:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:07.693 * Looking for test storage... 
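
Just above the nvmf_queue_depth header, nvmftestfini unwinds the bdev_io_wait environment: the subsystem is deleted over RPC, the nvme-tcp/nvme-fabrics modules are unloaded, the target process is killed, the tagged iptables rules are stripped, and the namespace is removed. A condensed sketch of that teardown follows; the explicit ip netns delete is an assumption about what _remove_spdk_ns amounts to, since its output is redirected away in the trace.

# Teardown mirroring the nvmftestfini trace above (end of the bdev_io_wait test).
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

modprobe -v -r nvme-tcp        # the trace shows nvme_tcp, nvme_fabrics and nvme_keyring being removed
modprobe -v -r nvme-fabrics

kill "$nvmfpid"                # nvmfpid = the nvmf_tgt started earlier (1754080 in this run)

# Remove only the firewall rules the harness added (they all carry an SPDK_NVMF comment).
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Assumed equivalent of _remove_spdk_ns: drop the target namespace, then clear the initiator address.
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1
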
00:08:07.693 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:07.693 11:45:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:07.693 11:45:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:08:07.693 11:45:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:07.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.693 --rc genhtml_branch_coverage=1 00:08:07.693 --rc genhtml_function_coverage=1 00:08:07.693 --rc genhtml_legend=1 00:08:07.693 --rc geninfo_all_blocks=1 00:08:07.693 --rc geninfo_unexecuted_blocks=1 00:08:07.693 00:08:07.693 ' 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:07.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.693 --rc genhtml_branch_coverage=1 00:08:07.693 --rc genhtml_function_coverage=1 00:08:07.693 --rc genhtml_legend=1 00:08:07.693 --rc geninfo_all_blocks=1 00:08:07.693 --rc geninfo_unexecuted_blocks=1 00:08:07.693 00:08:07.693 ' 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:07.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.693 --rc genhtml_branch_coverage=1 00:08:07.693 --rc genhtml_function_coverage=1 00:08:07.693 --rc genhtml_legend=1 00:08:07.693 --rc geninfo_all_blocks=1 00:08:07.693 --rc geninfo_unexecuted_blocks=1 00:08:07.693 00:08:07.693 ' 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:07.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.693 --rc genhtml_branch_coverage=1 00:08:07.693 --rc genhtml_function_coverage=1 00:08:07.693 --rc genhtml_legend=1 00:08:07.693 --rc geninfo_all_blocks=1 00:08:07.693 --rc geninfo_unexecuted_blocks=1 00:08:07.693 00:08:07.693 ' 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:07.693 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:07.693 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:07.694 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:07.694 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:07.694 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:08:07.694 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:07.694 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:07.694 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:07.694 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:07.694 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:07.694 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:07.694 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:07.694 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.694 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:07.694 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.694 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:07.694 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:07.694 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:07.694 11:45:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:15.833 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:15.833 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:15.833 Found net devices under 0000:31:00.0: cvl_0_0 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:15.833 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:15.834 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.834 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:15.834 Found net devices under 0000:31:00.1: cvl_0_1 00:08:15.834 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.834 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:15.834 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:08:15.834 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:15.834 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:15.834 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:15.834 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:15.834 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:15.834 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:15.834 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:15.834 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:15.834 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:15.834 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:15.834 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:15.834 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:15.834 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:15.834 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:15.834 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:15.834 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:15.834 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:15.834 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:15.834 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:15.834 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:15.834 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:15.834 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:15.834 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:15.834 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:15.834 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:15.834 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:15.834 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:15.834 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.578 ms 00:08:15.834 00:08:15.834 --- 10.0.0.2 ping statistics --- 00:08:15.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.834 rtt min/avg/max/mdev = 0.578/0.578/0.578/0.000 ms 00:08:15.834 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:15.834 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:15.834 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:08:15.834 00:08:15.834 --- 10.0.0.1 ping statistics --- 00:08:15.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.834 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:08:15.834 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:15.834 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:08:15.834 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:15.834 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:15.834 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:15.834 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:15.834 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:15.834 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:15.834 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:15.834 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:15.834 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:15.834 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:15.834 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:15.834 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=1759376 00:08:15.834 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 1759376 00:08:15.834 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:15.834 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1759376 ']' 00:08:15.834 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.834 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:15.834 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.834 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:15.834 11:45:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:15.834 [2024-10-11 11:45:17.883746] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:08:15.834 [2024-10-11 11:45:17.883810] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:15.834 [2024-10-11 11:45:17.978820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.834 [2024-10-11 11:45:18.031284] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:15.834 [2024-10-11 11:45:18.031332] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:15.834 [2024-10-11 11:45:18.031341] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:15.834 [2024-10-11 11:45:18.031348] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:15.834 [2024-10-11 11:45:18.031355] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:15.834 [2024-10-11 11:45:18.032202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.094 11:45:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:16.094 11:45:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:16.094 11:45:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:16.094 11:45:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:16.094 11:45:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:16.094 11:45:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:16.094 11:45:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:16.094 11:45:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.094 11:45:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:16.094 [2024-10-11 11:45:18.749881] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:16.094 11:45:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.094 11:45:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:16.094 11:45:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.094 11:45:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:16.094 Malloc0 00:08:16.094 11:45:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.094 11:45:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:16.094 11:45:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.094 11:45:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:16.094 11:45:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.094 11:45:18 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:16.094 11:45:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.094 11:45:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:16.355 11:45:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.355 11:45:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:16.355 11:45:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.355 11:45:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:16.355 [2024-10-11 11:45:18.810954] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:16.355 11:45:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.355 11:45:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1759691 00:08:16.355 11:45:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:16.355 11:45:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:16.355 11:45:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1759691 /var/tmp/bdevperf.sock 00:08:16.355 11:45:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1759691 ']' 00:08:16.355 11:45:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:16.355 11:45:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:16.355 11:45:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:16.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:16.355 11:45:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:16.355 11:45:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:16.355 [2024-10-11 11:45:18.868658] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
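
The queue_depth test traced above provisions everything over RPC before it measures I/O: a TCP transport, a small Malloc0 bdev (64 MiB, 512-byte blocks), subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and a TCP listener on 10.0.0.2:4420, after which bdevperf is started in RPC-wait mode. The condensed sketch below recaps that sequence using only values visible in this run; it is a recap of the trace, not a canonical recipe, since the rpc_cmd/nvmfappstart helpers in nvmf/common.sh and autotest_common.sh wrap retries and error handling omitted here. It assumes (as the trace shows) that nvmf_tgt is already listening on /var/tmp/spdk.sock from inside the cvl_0_0_ns_spdk namespace and that everything runs as root.

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc="$spdk/scripts/rpc.py"

    # Target side (default socket /var/tmp/spdk.sock): transport, backing bdev, subsystem, listener
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Client side: bdevperf waits for RPCs (-z), queue depth 1024, 4 KiB verify workload for 10 s
    "$spdk/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &

    # Once /var/tmp/bdevperf.sock is up, attach the remote namespace and kick off the run
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    "$spdk/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests
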
00:08:16.355 [2024-10-11 11:45:18.868719] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1759691 ] 00:08:16.355 [2024-10-11 11:45:18.951327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.355 [2024-10-11 11:45:19.005299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.299 11:45:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:17.299 11:45:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:17.299 11:45:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:17.299 11:45:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.299 11:45:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:17.299 NVMe0n1 00:08:17.299 11:45:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.299 11:45:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:17.299 Running I/O for 10 seconds... 00:08:19.183 8270.00 IOPS, 32.30 MiB/s [2024-10-11T09:45:23.272Z] 10098.00 IOPS, 39.45 MiB/s [2024-10-11T09:45:24.214Z] 10582.67 IOPS, 41.34 MiB/s [2024-10-11T09:45:25.154Z] 10948.75 IOPS, 42.77 MiB/s [2024-10-11T09:45:26.095Z] 11433.40 IOPS, 44.66 MiB/s [2024-10-11T09:45:27.037Z] 11775.00 IOPS, 46.00 MiB/s [2024-10-11T09:45:27.978Z] 11994.57 IOPS, 46.85 MiB/s [2024-10-11T09:45:28.920Z] 12163.50 IOPS, 47.51 MiB/s [2024-10-11T09:45:30.304Z] 12326.78 IOPS, 48.15 MiB/s [2024-10-11T09:45:30.304Z] 12490.00 IOPS, 48.79 MiB/s 00:08:27.601 Latency(us) 00:08:27.601 [2024-10-11T09:45:30.304Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:27.601 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:27.601 Verification LBA range: start 0x0 length 0x4000 00:08:27.601 NVMe0n1 : 10.06 12508.72 48.86 0.00 0.00 81596.92 25012.91 72526.51 00:08:27.601 [2024-10-11T09:45:30.304Z] =================================================================================================================== 00:08:27.601 [2024-10-11T09:45:30.304Z] Total : 12508.72 48.86 0.00 0.00 81596.92 25012.91 72526.51 00:08:27.601 { 00:08:27.601 "results": [ 00:08:27.601 { 00:08:27.601 "job": "NVMe0n1", 00:08:27.601 "core_mask": "0x1", 00:08:27.601 "workload": "verify", 00:08:27.601 "status": "finished", 00:08:27.601 "verify_range": { 00:08:27.601 "start": 0, 00:08:27.601 "length": 16384 00:08:27.601 }, 00:08:27.601 "queue_depth": 1024, 00:08:27.601 "io_size": 4096, 00:08:27.601 "runtime": 10.063219, 00:08:27.601 "iops": 12508.721115976905, 00:08:27.601 "mibps": 48.862191859284785, 00:08:27.601 "io_failed": 0, 00:08:27.601 "io_timeout": 0, 00:08:27.601 "avg_latency_us": 81596.92281267047, 00:08:27.601 "min_latency_us": 25012.906666666666, 00:08:27.601 "max_latency_us": 72526.50666666667 00:08:27.601 } 00:08:27.601 ], 00:08:27.601 "core_count": 1 00:08:27.601 } 00:08:27.601 11:45:29 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1759691 00:08:27.601 11:45:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1759691 ']' 00:08:27.601 11:45:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1759691 00:08:27.601 11:45:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:08:27.601 11:45:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:27.601 11:45:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1759691 00:08:27.601 11:45:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:27.601 11:45:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:27.601 11:45:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1759691' 00:08:27.601 killing process with pid 1759691 00:08:27.601 11:45:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1759691 00:08:27.601 Received shutdown signal, test time was about 10.000000 seconds 00:08:27.601 00:08:27.601 Latency(us) 00:08:27.601 [2024-10-11T09:45:30.304Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:27.601 [2024-10-11T09:45:30.304Z] =================================================================================================================== 00:08:27.601 [2024-10-11T09:45:30.305Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:27.602 11:45:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1759691 00:08:27.602 11:45:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:27.602 11:45:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:27.602 11:45:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:27.602 11:45:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:27.602 11:45:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:27.602 11:45:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:27.602 11:45:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:27.602 11:45:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:27.602 rmmod nvme_tcp 00:08:27.602 rmmod nvme_fabrics 00:08:27.602 rmmod nvme_keyring 00:08:27.602 11:45:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:27.602 11:45:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:27.602 11:45:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:27.602 11:45:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 1759376 ']' 00:08:27.602 11:45:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 1759376 00:08:27.602 11:45:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1759376 ']' 00:08:27.602 11:45:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@954 -- # kill -0 1759376 00:08:27.602 11:45:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:08:27.602 11:45:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:27.602 11:45:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1759376 00:08:27.602 11:45:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:27.602 11:45:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:27.602 11:45:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1759376' 00:08:27.602 killing process with pid 1759376 00:08:27.602 11:45:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1759376 00:08:27.602 11:45:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1759376 00:08:27.863 11:45:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:27.863 11:45:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:27.863 11:45:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:27.863 11:45:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:27.863 11:45:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:08:27.863 11:45:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:27.863 11:45:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:08:27.863 11:45:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:27.863 11:45:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:27.863 11:45:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.863 11:45:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:27.863 11:45:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.774 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:30.036 00:08:30.036 real 0m22.596s 00:08:30.036 user 0m25.744s 00:08:30.036 sys 0m7.106s 00:08:30.036 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:30.036 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:30.036 ************************************ 00:08:30.036 END TEST nvmf_queue_depth 00:08:30.036 ************************************ 00:08:30.036 11:45:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:30.036 11:45:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:30.036 11:45:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:30.036 11:45:32 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:08:30.036 ************************************ 00:08:30.036 START TEST nvmf_target_multipath 00:08:30.036 ************************************ 00:08:30.036 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:30.036 * Looking for test storage... 00:08:30.036 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:30.036 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:30.036 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:08:30.036 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:30.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.298 --rc genhtml_branch_coverage=1 00:08:30.298 --rc genhtml_function_coverage=1 00:08:30.298 --rc genhtml_legend=1 00:08:30.298 --rc geninfo_all_blocks=1 00:08:30.298 --rc geninfo_unexecuted_blocks=1 00:08:30.298 00:08:30.298 ' 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:30.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.298 --rc genhtml_branch_coverage=1 00:08:30.298 --rc genhtml_function_coverage=1 00:08:30.298 --rc genhtml_legend=1 00:08:30.298 --rc geninfo_all_blocks=1 00:08:30.298 --rc geninfo_unexecuted_blocks=1 00:08:30.298 00:08:30.298 ' 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:30.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.298 --rc genhtml_branch_coverage=1 00:08:30.298 --rc genhtml_function_coverage=1 00:08:30.298 --rc genhtml_legend=1 00:08:30.298 --rc geninfo_all_blocks=1 00:08:30.298 --rc geninfo_unexecuted_blocks=1 00:08:30.298 00:08:30.298 ' 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:30.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.298 --rc genhtml_branch_coverage=1 00:08:30.298 --rc genhtml_function_coverage=1 00:08:30.298 --rc genhtml_legend=1 00:08:30.298 --rc geninfo_all_blocks=1 00:08:30.298 --rc geninfo_unexecuted_blocks=1 00:08:30.298 00:08:30.298 ' 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:30.298 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:30.298 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:30.299 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:30.299 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:30.299 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:30.299 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:30.299 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.299 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:30.299 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.299 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:30.299 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:30.299 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:30.299 11:45:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:38.445 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:38.445 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:38.445 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:38.445 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:38.445 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:38.445 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:38.445 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:38.445 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:08:38.445 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:38.445 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:38.445 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:38.445 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:38.445 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:38.445 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:38.445 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:38.445 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:38.445 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:38.445 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:38.445 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:38.445 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:38.445 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:38.445 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:38.445 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:38.445 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:38.445 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:38.445 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:38.445 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:38.445 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:38.445 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:38.445 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:38.445 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:38.445 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:38.445 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:38.445 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:38.445 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:38.445 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:38.445 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:38.445 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:38.445 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:38.445 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:38.446 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:38.446 Found net devices under 0000:31:00.0: cvl_0_0 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:38.446 11:45:40 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:38.446 Found net devices under 0000:31:00.1: cvl_0_1 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:38.446 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:38.446 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.461 ms 00:08:38.446 00:08:38.446 --- 10.0.0.2 ping statistics --- 00:08:38.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.446 rtt min/avg/max/mdev = 0.461/0.461/0.461/0.000 ms 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:38.446 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:38.446 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:08:38.446 00:08:38.446 --- 10.0.0.1 ping statistics --- 00:08:38.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.446 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:38.446 only one NIC for nvmf test 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
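
The multipath test repeats the same nvmf_tcp_init sequence that the queue_depth run used above and then exits early ("only one NIC for nvmf test") once it finds no second target IP configured, both E810 ports already being consumed by the single target/initiator pair. For reference, the following is a condensed sketch of what that initialization does, using the interface names and addresses from this run; cvl_0_0/cvl_0_1 and 10.0.0.0/24 are specific to this host, the commands need root, and the real helper in nvmf/common.sh handles additional multi-NIC and RDMA cases not shown here.

    # Move one port of the NIC into a private namespace so target and initiator
    # can exchange NVMe/TCP traffic over real hardware on a single host.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Open the NVMe/TCP port; the SPDK_NVMF comment tag is what the iptr teardown
    # later filters back out of an iptables-save/iptables-restore round trip.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    # Sanity-check both directions, as the trace does
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
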
00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:38.446 rmmod nvme_tcp 00:08:38.446 rmmod nvme_fabrics 00:08:38.446 rmmod nvme_keyring 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:38.446 11:45:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:40.363 11:45:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:40.363 11:45:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:40.363 11:45:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:40.363 11:45:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:40.363 11:45:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:40.363 11:45:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:40.363 11:45:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:40.363 11:45:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:40.363 11:45:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:40.363 11:45:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:40.363 11:45:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:40.363 11:45:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:08:40.363 11:45:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:08:40.363 11:45:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:40.363 11:45:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:40.363 11:45:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:40.363 11:45:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:40.363 11:45:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:08:40.363 11:45:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:40.363 11:45:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:08:40.363 11:45:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:40.363 11:45:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:40.363 11:45:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:40.363 11:45:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:40.363 11:45:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:40.363 11:45:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:40.363 00:08:40.363 real 0m10.171s 00:08:40.363 user 0m2.226s 00:08:40.363 sys 0m5.872s 00:08:40.363 11:45:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:40.363 11:45:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:40.363 ************************************ 00:08:40.363 END TEST nvmf_target_multipath 00:08:40.363 ************************************ 00:08:40.363 11:45:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:40.363 11:45:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:40.363 11:45:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:40.363 11:45:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:40.363 ************************************ 00:08:40.363 START TEST nvmf_zcopy 00:08:40.363 ************************************ 00:08:40.363 11:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:40.363 * Looking for test storage... 
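The teardown just traced (nvmftestfini, then nvmf_tcp_fini) follows a fixed pattern: tolerantly unload the initiator kernel modules, strip only the firewall rules that were tagged with the SPDK_NVMF comment, then remove the target namespace and flush the leftover address. A rough sketch; the retry details and the body of _remove_spdk_ns are not visible in the trace and are assumptions here:

    set +e                                                  # module unload may fail and be retried
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1                                             # assumed retry delay; the trace only shows the loop bound
    done
    set -e
    iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop only rules carrying the SPDK_NVMF comment tag
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null             # assumed equivalent of remove_spdk_ns
    ip -4 addr flush cvl_0_1                                # clear the initiator-side address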
00:08:40.363 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:40.363 11:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:40.363 11:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:08:40.363 11:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:40.363 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:40.363 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:40.363 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:40.363 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:40.363 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:40.363 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:40.363 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:40.363 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:40.363 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:40.363 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:40.363 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:40.363 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:40.363 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:40.363 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:40.363 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:40.363 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:40.363 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:40.363 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:40.363 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:40.363 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:40.363 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:40.363 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:40.363 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:40.363 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:40.363 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:40.363 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:40.363 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:40.363 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:40.363 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:40.363 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:40.363 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:40.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.363 --rc genhtml_branch_coverage=1 00:08:40.363 --rc genhtml_function_coverage=1 00:08:40.363 --rc genhtml_legend=1 00:08:40.363 --rc geninfo_all_blocks=1 00:08:40.363 --rc geninfo_unexecuted_blocks=1 00:08:40.363 00:08:40.363 ' 00:08:40.363 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:40.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.363 --rc genhtml_branch_coverage=1 00:08:40.363 --rc genhtml_function_coverage=1 00:08:40.363 --rc genhtml_legend=1 00:08:40.363 --rc geninfo_all_blocks=1 00:08:40.363 --rc geninfo_unexecuted_blocks=1 00:08:40.363 00:08:40.363 ' 00:08:40.363 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:40.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.363 --rc genhtml_branch_coverage=1 00:08:40.363 --rc genhtml_function_coverage=1 00:08:40.363 --rc genhtml_legend=1 00:08:40.363 --rc geninfo_all_blocks=1 00:08:40.363 --rc geninfo_unexecuted_blocks=1 00:08:40.363 00:08:40.364 ' 00:08:40.364 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:40.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.364 --rc genhtml_branch_coverage=1 00:08:40.364 --rc genhtml_function_coverage=1 00:08:40.364 --rc genhtml_legend=1 00:08:40.364 --rc geninfo_all_blocks=1 00:08:40.364 --rc geninfo_unexecuted_blocks=1 00:08:40.364 00:08:40.364 ' 00:08:40.364 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:40.364 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:40.364 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:08:40.364 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:40.364 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:40.364 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:40.364 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:40.364 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:40.364 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:40.364 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:40.364 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:40.364 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:40.364 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:40.364 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:40.364 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:40.364 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:40.364 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:40.364 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:40.364 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:40.364 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:40.364 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:40.364 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:40.364 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:40.364 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.364 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.364 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.364 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:40.364 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.364 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:40.364 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:40.364 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:40.364 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:40.364 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:40.364 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:40.364 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:40.364 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:40.364 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:40.364 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:40.364 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:40.364 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:40.364 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:40.364 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:08:40.364 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:40.364 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:40.364 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:40.364 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:40.364 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:40.364 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:40.625 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:40.625 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:40.625 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:40.625 11:45:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:48.771 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:48.771 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:48.771 Found net devices under 0000:31:00.0: cvl_0_0 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:48.771 Found net devices under 0000:31:00.1: cvl_0_1 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:48.771 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:48.772 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:48.772 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:48.772 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:48.772 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:48.772 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:48.772 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.584 ms 00:08:48.772 00:08:48.772 --- 10.0.0.2 ping statistics --- 00:08:48.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.772 rtt min/avg/max/mdev = 0.584/0.584/0.584/0.000 ms 00:08:48.772 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:48.772 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:48.772 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:08:48.772 00:08:48.772 --- 10.0.0.1 ping statistics --- 00:08:48.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.772 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:08:48.772 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:48.772 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:08:48.772 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:48.772 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:48.772 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:48.772 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:48.772 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:48.772 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:48.772 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:48.772 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:48.772 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:48.772 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:48.772 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:48.772 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=1770600 00:08:48.772 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 1770600 00:08:48.772 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:48.772 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 1770600 ']' 00:08:48.772 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.772 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:48.772 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:48.772 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:48.772 11:45:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:48.772 [2024-10-11 11:45:50.870162] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
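For the zcopy run, nvmf/common.sh rebuilds the same single-pair topology as above, then nvmfappstart launches the target application inside the namespace and blocks until its RPC socket answers. Condensed from the commands in the trace, with paths relative to the SPDK checkout and interface names specific to this rig; the polling loop standing in for waitforlisten is an assumption, using rpc_get_methods only as a cheap probe of the socket:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                   # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                         # initiator side stays in the default netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                          # default netns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1            # target netns -> initiator

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # crude stand-in for waitforlisten: poll the default RPC socket until the app responds
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done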
00:08:48.772 [2024-10-11 11:45:50.870223] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:48.772 [2024-10-11 11:45:50.958554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.772 [2024-10-11 11:45:51.009797] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:48.772 [2024-10-11 11:45:51.009845] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:48.772 [2024-10-11 11:45:51.009853] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:48.772 [2024-10-11 11:45:51.009860] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:48.772 [2024-10-11 11:45:51.009868] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:48.772 [2024-10-11 11:45:51.010672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.033 11:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:49.033 11:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:08:49.033 11:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:49.033 11:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:49.033 11:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:49.033 11:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:49.033 11:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:49.033 11:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:49.033 11:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.033 11:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:49.033 [2024-10-11 11:45:51.736022] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:49.295 11:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.295 11:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:49.295 11:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.295 11:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:49.295 11:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.295 11:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:49.295 11:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.295 11:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:49.295 [2024-10-11 11:45:51.760290] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:08:49.295 11:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.295 11:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:49.295 11:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.295 11:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:49.295 11:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.295 11:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:49.295 11:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.295 11:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:49.295 malloc0 00:08:49.295 11:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.295 11:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:49.295 11:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.295 11:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:49.295 11:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.295 11:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:49.295 11:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:49.295 11:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:08:49.295 11:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:08:49.295 11:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:49.295 11:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:49.295 { 00:08:49.295 "params": { 00:08:49.295 "name": "Nvme$subsystem", 00:08:49.295 "trtype": "$TEST_TRANSPORT", 00:08:49.295 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:49.295 "adrfam": "ipv4", 00:08:49.295 "trsvcid": "$NVMF_PORT", 00:08:49.295 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:49.295 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:49.295 "hdgst": ${hdgst:-false}, 00:08:49.295 "ddgst": ${ddgst:-false} 00:08:49.295 }, 00:08:49.295 "method": "bdev_nvme_attach_controller" 00:08:49.295 } 00:08:49.295 EOF 00:08:49.295 )") 00:08:49.295 11:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:08:49.295 11:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 
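Before starting traffic, the test provisions the target over RPC: a TCP transport with zero-copy enabled, a subsystem capped at 10 namespaces, data and discovery listeners on 10.0.0.2:4420, and a 32 MB malloc bdev exposed as namespace 1. rpc_cmd in the trace is essentially a front end for scripts/rpc.py, so the same sequence can be reproduced by hand as below; every flag is taken from the trace, only the direct rpc.py invocation is the assumption:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy              # transport flags exactly as NVMF_TRANSPORT_OPTS was built above
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0                     # 32 MB bdev, 4096-byte blocks
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1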
00:08:49.295 11:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:08:49.295 11:45:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:49.295 "params": { 00:08:49.295 "name": "Nvme1", 00:08:49.295 "trtype": "tcp", 00:08:49.295 "traddr": "10.0.0.2", 00:08:49.295 "adrfam": "ipv4", 00:08:49.295 "trsvcid": "4420", 00:08:49.295 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:49.295 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:49.295 "hdgst": false, 00:08:49.295 "ddgst": false 00:08:49.295 }, 00:08:49.295 "method": "bdev_nvme_attach_controller" 00:08:49.295 }' 00:08:49.295 [2024-10-11 11:45:51.860298] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:08:49.295 [2024-10-11 11:45:51.860359] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1770873 ] 00:08:49.295 [2024-10-11 11:45:51.941730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.295 [2024-10-11 11:45:51.994901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.868 Running I/O for 10 seconds... 00:08:51.832 6483.00 IOPS, 50.65 MiB/s [2024-10-11T09:45:55.479Z] 7406.50 IOPS, 57.86 MiB/s [2024-10-11T09:45:56.421Z] 8201.67 IOPS, 64.08 MiB/s [2024-10-11T09:45:57.804Z] 8600.50 IOPS, 67.19 MiB/s [2024-10-11T09:45:58.744Z] 8843.60 IOPS, 69.09 MiB/s [2024-10-11T09:45:59.685Z] 9003.67 IOPS, 70.34 MiB/s [2024-10-11T09:46:00.625Z] 9118.00 IOPS, 71.23 MiB/s [2024-10-11T09:46:01.567Z] 9195.88 IOPS, 71.84 MiB/s [2024-10-11T09:46:02.508Z] 9260.00 IOPS, 72.34 MiB/s [2024-10-11T09:46:02.508Z] 9312.00 IOPS, 72.75 MiB/s 00:08:59.805 Latency(us) 00:08:59.805 [2024-10-11T09:46:02.508Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:59.805 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:59.805 Verification LBA range: start 0x0 length 0x1000 00:08:59.805 Nvme1n1 : 10.01 9315.78 72.78 0.00 0.00 13692.90 2416.64 27525.12 00:08:59.805 [2024-10-11T09:46:02.508Z] =================================================================================================================== 00:08:59.805 [2024-10-11T09:46:02.508Z] Total : 9315.78 72.78 0.00 0.00 13692.90 2416.64 27525.12 00:08:59.805 11:46:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1772898 00:08:59.805 11:46:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:59.805 11:46:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:59.805 11:46:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:59.805 11:46:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:59.805 11:46:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:08:59.805 11:46:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:08:59.805 11:46:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:59.805 11:46:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:59.805 { 00:08:59.805 "params": { 00:08:59.805 "name": 
"Nvme$subsystem", 00:08:59.805 "trtype": "$TEST_TRANSPORT", 00:08:59.805 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:59.805 "adrfam": "ipv4", 00:08:59.805 "trsvcid": "$NVMF_PORT", 00:08:59.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:59.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:59.805 "hdgst": ${hdgst:-false}, 00:08:59.805 "ddgst": ${ddgst:-false} 00:08:59.805 }, 00:08:59.805 "method": "bdev_nvme_attach_controller" 00:08:59.805 } 00:08:59.805 EOF 00:08:59.805 )") 00:09:00.066 11:46:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:09:00.066 [2024-10-11 11:46:02.510458] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.066 [2024-10-11 11:46:02.510484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.066 11:46:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:09:00.066 [2024-10-11 11:46:02.518446] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.066 [2024-10-11 11:46:02.518455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.066 11:46:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:09:00.066 11:46:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:00.067 "params": { 00:09:00.067 "name": "Nvme1", 00:09:00.067 "trtype": "tcp", 00:09:00.067 "traddr": "10.0.0.2", 00:09:00.067 "adrfam": "ipv4", 00:09:00.067 "trsvcid": "4420", 00:09:00.067 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:00.067 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:00.067 "hdgst": false, 00:09:00.067 "ddgst": false 00:09:00.067 }, 00:09:00.067 "method": "bdev_nvme_attach_controller" 00:09:00.067 }' 00:09:00.067 [2024-10-11 11:46:02.526465] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.067 [2024-10-11 11:46:02.526472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.067 [2024-10-11 11:46:02.534484] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.067 [2024-10-11 11:46:02.534491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.067 [2024-10-11 11:46:02.546516] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.067 [2024-10-11 11:46:02.546523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.067 [2024-10-11 11:46:02.555284] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:09:00.067 [2024-10-11 11:46:02.555335] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1772898 ] 00:09:00.067 [2024-10-11 11:46:02.558546] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.067 [2024-10-11 11:46:02.558553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.067 [2024-10-11 11:46:02.570576] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.067 [2024-10-11 11:46:02.570584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.067 [2024-10-11 11:46:02.582605] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.067 [2024-10-11 11:46:02.582613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.067 [2024-10-11 11:46:02.594636] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.067 [2024-10-11 11:46:02.594644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.067 [2024-10-11 11:46:02.606665] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.067 [2024-10-11 11:46:02.606673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.067 [2024-10-11 11:46:02.618695] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.067 [2024-10-11 11:46:02.618703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.067 [2024-10-11 11:46:02.630248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.067 [2024-10-11 11:46:02.630724] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.067 [2024-10-11 11:46:02.630731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.067 [2024-10-11 11:46:02.642754] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.067 [2024-10-11 11:46:02.642765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.067 [2024-10-11 11:46:02.654785] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.067 [2024-10-11 11:46:02.654795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.067 [2024-10-11 11:46:02.659378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.067 [2024-10-11 11:46:02.666815] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.067 [2024-10-11 11:46:02.666822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.067 [2024-10-11 11:46:02.678853] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.067 [2024-10-11 11:46:02.678867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.067 [2024-10-11 11:46:02.690880] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.067 [2024-10-11 11:46:02.690892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.067 [2024-10-11 11:46:02.702909] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:09:00.067 [2024-10-11 11:46:02.702918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.067 [2024-10-11 11:46:02.714938] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.067 [2024-10-11 11:46:02.714945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.067 [2024-10-11 11:46:02.726982] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.067 [2024-10-11 11:46:02.726996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.067 [2024-10-11 11:46:02.739006] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.067 [2024-10-11 11:46:02.739018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.067 [2024-10-11 11:46:02.751034] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.067 [2024-10-11 11:46:02.751048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.067 [2024-10-11 11:46:02.763070] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.067 [2024-10-11 11:46:02.763079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.328 [2024-10-11 11:46:02.775101] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.328 [2024-10-11 11:46:02.775111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.328 [2024-10-11 11:46:02.787127] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.328 [2024-10-11 11:46:02.787134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.328 [2024-10-11 11:46:02.799159] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.328 [2024-10-11 11:46:02.799166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.328 [2024-10-11 11:46:02.811192] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.328 [2024-10-11 11:46:02.811201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.328 [2024-10-11 11:46:02.823222] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.328 [2024-10-11 11:46:02.823229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.328 [2024-10-11 11:46:02.835255] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.328 [2024-10-11 11:46:02.835262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.328 [2024-10-11 11:46:02.847295] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.328 [2024-10-11 11:46:02.847303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.328 [2024-10-11 11:46:02.859317] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.328 [2024-10-11 11:46:02.859326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.328 [2024-10-11 11:46:02.871351] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.328 [2024-10-11 11:46:02.871358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.328 [2024-10-11 
11:46:02.883380] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.328 [2024-10-11 11:46:02.883387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.328 [2024-10-11 11:46:02.895413] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.328 [2024-10-11 11:46:02.895422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.328 [2024-10-11 11:46:02.946529] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.328 [2024-10-11 11:46:02.946543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.328 [2024-10-11 11:46:02.955571] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.328 [2024-10-11 11:46:02.955580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.328 Running I/O for 5 seconds... 00:09:00.328 [2024-10-11 11:46:02.971575] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.328 [2024-10-11 11:46:02.971591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.328 [2024-10-11 11:46:02.984393] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.329 [2024-10-11 11:46:02.984409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.329 [2024-10-11 11:46:02.998112] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.329 [2024-10-11 11:46:02.998128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.329 [2024-10-11 11:46:03.011660] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.329 [2024-10-11 11:46:03.011675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.329 [2024-10-11 11:46:03.024781] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.329 [2024-10-11 11:46:03.024800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.590 [2024-10-11 11:46:03.037587] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.590 [2024-10-11 11:46:03.037602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.590 [2024-10-11 11:46:03.050790] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.590 [2024-10-11 11:46:03.050806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.590 [2024-10-11 11:46:03.063037] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.590 [2024-10-11 11:46:03.063052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.590 [2024-10-11 11:46:03.075618] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.590 [2024-10-11 11:46:03.075633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.590 [2024-10-11 11:46:03.088444] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.590 [2024-10-11 11:46:03.088460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.590 [2024-10-11 11:46:03.101075] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:00.590 [2024-10-11 11:46:03.101090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:00.590 [2024-10-11 11:46:03.114579] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:00.590 [2024-10-11 11:46:03.114594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same subsystem.c:2128 "Requested NSID 1 already in use" / nvmf_rpc.c:1517 "Unable to add namespace" pair repeats roughly every 13 ms from 11:46:03.127 through 11:46:03.956 ...]
00:09:01.375 19123.00 IOPS, 149.40 MiB/s [2024-10-11T09:46:04.078Z]
[... the error pair continues at the same rate from 11:46:03.969 through 11:46:04.809 ...]
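The pair of messages being repeated here says that the target is being asked, over and over, to attach a namespace with NSID 1 to a subsystem that already exposes NSID 1; subsystem.c rejects the request and the RPC layer then reports that it could not add the namespace. As a rough illustration only (not part of this run), a JSON-RPC request of the following shape would be refused in the same way. The socket path, NQN, bdev name and exact parameter layout are assumptions and should be checked against SPDK's JSON-RPC documentation.

# Illustrative sketch only, not taken from this log run. It shows the kind of
# nvmf_subsystem_add_ns request whose failure is being printed above. The socket
# path, NQN, bdev name and parameter layout are assumptions.
import json
import socket

RPC_SOCK = "/var/tmp/spdk.sock"  # assumed SPDK RPC listen socket

def rpc_call(method, params, req_id=1):
    # Send one JSON-RPC 2.0 request over the Unix socket and read one reply object.
    request = {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(RPC_SOCK)
        sock.sendall(json.dumps(request).encode())
        buf = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            buf += chunk
            try:
                return json.loads(buf)  # stop once a complete JSON reply has arrived
            except json.JSONDecodeError:
                continue
    return json.loads(buf)

# Asking for NSID 1 on a subsystem that already exposes NSID 1 is expected to fail,
# which is what the target keeps logging above.
reply = rpc_call("nvmf_subsystem_add_ns", {
    "nqn": "nqn.2016-06.io.spdk:cnode1",               # hypothetical subsystem NQN
    "namespace": {"bdev_name": "Malloc0", "nsid": 1},  # NSID 1 is already taken
})
print(reply)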
00:09:02.161 [2024-10-11 11:46:04.822608] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:02.161 [2024-10-11 11:46:04.822623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the error pair repeats roughly every 13 ms from 11:46:04.836 through 11:46:04.955 ...]
00:09:02.422 19191.50 IOPS, 149.93 MiB/s [2024-10-11T09:46:05.125Z]
[... the error pair continues from 11:46:04.968 through 11:46:05.956 ...]
00:09:03.468 19230.33 IOPS, 150.24 MiB/s [2024-10-11T09:46:06.171Z]
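For reference, the check that fires at subsystem.c:2128 amounts to per-subsystem NSID bookkeeping: namespaces are tracked by NSID, and a request for an NSID that is already present is rejected. The sketch below is not SPDK code, only a minimal model of that behaviour with made-up names.

# Minimal sketch, not SPDK code: it only models the bookkeeping implied by the
# subsystem.c:2128 message above, where a subsystem tracks namespaces by NSID and
# refuses to attach one whose NSID is already taken. All names here are made up.
class SubsystemNamespaces:
    def __init__(self):
        self._ns_by_id = {}  # nsid -> attached bdev name

    def add_ns(self, bdev_name, nsid=None):
        # Pick the lowest free NSID when the caller does not request one.
        if nsid is None:
            nsid = 1
            while nsid in self._ns_by_id:
                nsid += 1
        elif nsid in self._ns_by_id:
            raise ValueError(f"Requested NSID {nsid} already in use")
        self._ns_by_id[nsid] = bdev_name
        return nsid

subsys = SubsystemNamespaces()
subsys.add_ns("Malloc0", nsid=1)      # first attach of NSID 1 succeeds
try:
    subsys.add_ns("Malloc1", nsid=1)  # second attach of NSID 1 is rejected
except ValueError as err:
    print(f"*ERROR*: {err}")          # mirrors the repeated log line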
[... the error pair continues roughly every 13 ms from 11:46:05.969 through 11:46:06.897 ...]
00:09:04.251 [2024-10-11 11:46:06.910839] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:04.251 [2024-10-11 11:46:06.910853] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.251 [2024-10-11 11:46:06.924473] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.251 [2024-10-11 11:46:06.924487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.251 [2024-10-11 11:46:06.936751] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.251 [2024-10-11 11:46:06.936766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.251 [2024-10-11 11:46:06.949940] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.251 [2024-10-11 11:46:06.949955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.512 [2024-10-11 11:46:06.963674] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.512 [2024-10-11 11:46:06.963689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.512 19252.00 IOPS, 150.41 MiB/s [2024-10-11T09:46:07.215Z] [2024-10-11 11:46:06.976996] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.512 [2024-10-11 11:46:06.977010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.512 [2024-10-11 11:46:06.989919] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.512 [2024-10-11 11:46:06.989933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.512 [2024-10-11 11:46:07.002552] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.512 [2024-10-11 11:46:07.002567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.512 [2024-10-11 11:46:07.016084] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.512 [2024-10-11 11:46:07.016098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.512 [2024-10-11 11:46:07.029116] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.512 [2024-10-11 11:46:07.029131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.512 [2024-10-11 11:46:07.042437] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.512 [2024-10-11 11:46:07.042452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.512 [2024-10-11 11:46:07.055837] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.512 [2024-10-11 11:46:07.055852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.512 [2024-10-11 11:46:07.068409] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.512 [2024-10-11 11:46:07.068424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.512 [2024-10-11 11:46:07.081338] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.512 [2024-10-11 11:46:07.081353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.512 [2024-10-11 11:46:07.094018] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.512 [2024-10-11 11:46:07.094033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.512 [2024-10-11 
11:46:07.106159] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.512 [2024-10-11 11:46:07.106173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.512 [2024-10-11 11:46:07.119669] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.512 [2024-10-11 11:46:07.119684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.512 [2024-10-11 11:46:07.133280] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.512 [2024-10-11 11:46:07.133295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.512 [2024-10-11 11:46:07.146866] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.512 [2024-10-11 11:46:07.146881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.512 [2024-10-11 11:46:07.159546] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.512 [2024-10-11 11:46:07.159561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.512 [2024-10-11 11:46:07.173043] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.512 [2024-10-11 11:46:07.173058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.512 [2024-10-11 11:46:07.186820] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.512 [2024-10-11 11:46:07.186835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.512 [2024-10-11 11:46:07.199682] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.512 [2024-10-11 11:46:07.199697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.512 [2024-10-11 11:46:07.212922] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.512 [2024-10-11 11:46:07.212936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.772 [2024-10-11 11:46:07.226412] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.772 [2024-10-11 11:46:07.226431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.772 [2024-10-11 11:46:07.239534] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.772 [2024-10-11 11:46:07.239548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.773 [2024-10-11 11:46:07.252299] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.773 [2024-10-11 11:46:07.252313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.773 [2024-10-11 11:46:07.265277] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.773 [2024-10-11 11:46:07.265292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.773 [2024-10-11 11:46:07.277604] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.773 [2024-10-11 11:46:07.277618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.773 [2024-10-11 11:46:07.290556] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.773 [2024-10-11 11:46:07.290570] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.773 [2024-10-11 11:46:07.304012] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.773 [2024-10-11 11:46:07.304027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.773 [2024-10-11 11:46:07.317428] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.773 [2024-10-11 11:46:07.317443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.773 [2024-10-11 11:46:07.330334] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.773 [2024-10-11 11:46:07.330348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.773 [2024-10-11 11:46:07.343154] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.773 [2024-10-11 11:46:07.343169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.773 [2024-10-11 11:46:07.356804] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.773 [2024-10-11 11:46:07.356819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.773 [2024-10-11 11:46:07.370147] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.773 [2024-10-11 11:46:07.370162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.773 [2024-10-11 11:46:07.383033] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.773 [2024-10-11 11:46:07.383048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.773 [2024-10-11 11:46:07.396139] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.773 [2024-10-11 11:46:07.396154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.773 [2024-10-11 11:46:07.408559] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.773 [2024-10-11 11:46:07.408573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.773 [2024-10-11 11:46:07.421774] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.773 [2024-10-11 11:46:07.421789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.773 [2024-10-11 11:46:07.435290] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.773 [2024-10-11 11:46:07.435304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.773 [2024-10-11 11:46:07.448898] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.773 [2024-10-11 11:46:07.448912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.773 [2024-10-11 11:46:07.462275] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.773 [2024-10-11 11:46:07.462289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.773 [2024-10-11 11:46:07.475329] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.773 [2024-10-11 11:46:07.475347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.033 [2024-10-11 11:46:07.488447] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.033 [2024-10-11 11:46:07.488462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.033 [2024-10-11 11:46:07.501696] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.033 [2024-10-11 11:46:07.501712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.033 [2024-10-11 11:46:07.514431] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.033 [2024-10-11 11:46:07.514445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.033 [2024-10-11 11:46:07.527592] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.033 [2024-10-11 11:46:07.527606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.033 [2024-10-11 11:46:07.540500] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.033 [2024-10-11 11:46:07.540514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.033 [2024-10-11 11:46:07.553027] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.033 [2024-10-11 11:46:07.553042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.033 [2024-10-11 11:46:07.566423] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.033 [2024-10-11 11:46:07.566438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.033 [2024-10-11 11:46:07.579883] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.033 [2024-10-11 11:46:07.579899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.033 [2024-10-11 11:46:07.592465] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.033 [2024-10-11 11:46:07.592480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.033 [2024-10-11 11:46:07.604937] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.033 [2024-10-11 11:46:07.604952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.033 [2024-10-11 11:46:07.618589] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.033 [2024-10-11 11:46:07.618603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.033 [2024-10-11 11:46:07.632136] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.033 [2024-10-11 11:46:07.632151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.033 [2024-10-11 11:46:07.645224] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.033 [2024-10-11 11:46:07.645239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.033 [2024-10-11 11:46:07.658116] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.033 [2024-10-11 11:46:07.658130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.033 [2024-10-11 11:46:07.670941] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.033 [2024-10-11 11:46:07.670955] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.033 [2024-10-11 11:46:07.684211] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.033 [2024-10-11 11:46:07.684227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.033 [2024-10-11 11:46:07.697640] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.033 [2024-10-11 11:46:07.697655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.033 [2024-10-11 11:46:07.710388] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.033 [2024-10-11 11:46:07.710402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.034 [2024-10-11 11:46:07.723717] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.034 [2024-10-11 11:46:07.723740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.034 [2024-10-11 11:46:07.736658] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.034 [2024-10-11 11:46:07.736673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.294 [2024-10-11 11:46:07.749776] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.294 [2024-10-11 11:46:07.749791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.294 [2024-10-11 11:46:07.763245] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.294 [2024-10-11 11:46:07.763260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.294 [2024-10-11 11:46:07.776455] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.294 [2024-10-11 11:46:07.776470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.294 [2024-10-11 11:46:07.789640] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.294 [2024-10-11 11:46:07.789655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.294 [2024-10-11 11:46:07.802906] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.294 [2024-10-11 11:46:07.802920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.294 [2024-10-11 11:46:07.816297] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.294 [2024-10-11 11:46:07.816312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.294 [2024-10-11 11:46:07.829691] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.294 [2024-10-11 11:46:07.829706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.294 [2024-10-11 11:46:07.842549] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.294 [2024-10-11 11:46:07.842564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.294 [2024-10-11 11:46:07.854601] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.294 [2024-10-11 11:46:07.854616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.294 [2024-10-11 11:46:07.867752] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.294 [2024-10-11 11:46:07.867766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.294 [2024-10-11 11:46:07.880932] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.294 [2024-10-11 11:46:07.880947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.294 [2024-10-11 11:46:07.894425] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.294 [2024-10-11 11:46:07.894440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.294 [2024-10-11 11:46:07.907755] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.294 [2024-10-11 11:46:07.907770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.294 [2024-10-11 11:46:07.921361] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.294 [2024-10-11 11:46:07.921376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.294 [2024-10-11 11:46:07.934184] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.294 [2024-10-11 11:46:07.934199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.294 [2024-10-11 11:46:07.946754] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.294 [2024-10-11 11:46:07.946769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.294 [2024-10-11 11:46:07.959732] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.294 [2024-10-11 11:46:07.959747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.294 19271.40 IOPS, 150.56 MiB/s [2024-10-11T09:46:07.997Z] [2024-10-11 11:46:07.972189] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.294 [2024-10-11 11:46:07.972205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.294 00:09:05.295 Latency(us) 00:09:05.295 [2024-10-11T09:46:07.998Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:05.295 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:05.295 Nvme1n1 : 5.01 19276.28 150.60 0.00 0.00 6634.65 2812.59 18240.85 00:09:05.295 [2024-10-11T09:46:07.998Z] =================================================================================================================== 00:09:05.295 [2024-10-11T09:46:07.998Z] Total : 19276.28 150.60 0.00 0.00 6634.65 2812.59 18240.85 00:09:05.295 [2024-10-11 11:46:07.981276] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.295 [2024-10-11 11:46:07.981290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.295 [2024-10-11 11:46:07.993306] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.295 [2024-10-11 11:46:07.993318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.555 [2024-10-11 11:46:08.005337] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.555 [2024-10-11 11:46:08.005349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.555 [2024-10-11 
11:46:08.017368] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.555 [2024-10-11 11:46:08.017380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.555 [2024-10-11 11:46:08.029397] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.555 [2024-10-11 11:46:08.029408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.555 [2024-10-11 11:46:08.041423] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.555 [2024-10-11 11:46:08.041433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.555 [2024-10-11 11:46:08.053456] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.555 [2024-10-11 11:46:08.053465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.555 [2024-10-11 11:46:08.065486] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.555 [2024-10-11 11:46:08.065497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.555 [2024-10-11 11:46:08.077515] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.555 [2024-10-11 11:46:08.077525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.555 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1772898) - No such process 00:09:05.555 11:46:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1772898 00:09:05.555 11:46:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:05.555 11:46:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.555 11:46:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:05.555 11:46:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.555 11:46:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:05.555 11:46:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.555 11:46:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:05.555 delay0 00:09:05.555 11:46:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.555 11:46:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:05.555 11:46:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.555 11:46:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:05.555 11:46:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.555 11:46:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:05.816 [2024-10-11 11:46:08.279279] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or 
discovery service referral 00:09:12.397 Initializing NVMe Controllers 00:09:12.397 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:12.397 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:12.397 Initialization complete. Launching workers. 00:09:12.397 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 262, failed: 21230 00:09:12.397 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 21397, failed to submit 95 00:09:12.397 success 21312, unsuccessful 85, failed 0 00:09:12.397 11:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:12.397 11:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:12.397 11:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:12.397 11:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:12.397 11:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:12.397 11:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:12.397 11:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:12.397 11:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:12.397 rmmod nvme_tcp 00:09:12.397 rmmod nvme_fabrics 00:09:12.397 rmmod nvme_keyring 00:09:12.398 11:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:12.398 11:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:12.398 11:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:12.398 11:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 1770600 ']' 00:09:12.398 11:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 1770600 00:09:12.398 11:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 1770600 ']' 00:09:12.398 11:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 1770600 00:09:12.398 11:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:09:12.398 11:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:12.398 11:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1770600 00:09:12.398 11:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:12.398 11:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:12.398 11:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1770600' 00:09:12.398 killing process with pid 1770600 00:09:12.398 11:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 1770600 00:09:12.398 11:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 1770600 00:09:12.398 11:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:12.398 11:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:12.398 11:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@522 -- # 
nvmf_tcp_fini 00:09:12.398 11:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:12.398 11:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:09:12.398 11:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:12.398 11:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:09:12.398 11:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:12.398 11:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:12.398 11:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.398 11:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:12.398 11:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.311 11:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:14.311 00:09:14.311 real 0m34.182s 00:09:14.311 user 0m44.738s 00:09:14.311 sys 0m11.688s 00:09:14.311 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:14.311 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:14.311 ************************************ 00:09:14.311 END TEST nvmf_zcopy 00:09:14.311 ************************************ 00:09:14.571 11:46:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:14.571 11:46:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:14.571 11:46:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:14.571 11:46:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:14.571 ************************************ 00:09:14.571 START TEST nvmf_nmic 00:09:14.571 ************************************ 00:09:14.571 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:14.571 * Looking for test storage... 
00:09:14.571 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:14.571 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:14.571 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:09:14.571 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:14.571 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:14.571 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:14.571 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:14.571 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:14.571 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:14.571 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:14.571 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:14.571 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:14.571 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:14.571 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:14.571 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:14.571 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:14.571 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:14.571 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:14.571 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:14.571 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:14.571 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:14.571 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:14.571 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:14.571 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:14.571 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:14.571 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:14.832 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:14.832 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:14.832 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:14.832 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:14.832 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:14.832 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:14.832 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:14.832 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:14.832 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:14.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.832 --rc genhtml_branch_coverage=1 00:09:14.832 --rc genhtml_function_coverage=1 00:09:14.832 --rc genhtml_legend=1 00:09:14.832 --rc geninfo_all_blocks=1 00:09:14.832 --rc geninfo_unexecuted_blocks=1 00:09:14.832 00:09:14.832 ' 00:09:14.832 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:14.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.832 --rc genhtml_branch_coverage=1 00:09:14.832 --rc genhtml_function_coverage=1 00:09:14.832 --rc genhtml_legend=1 00:09:14.832 --rc geninfo_all_blocks=1 00:09:14.832 --rc geninfo_unexecuted_blocks=1 00:09:14.832 00:09:14.832 ' 00:09:14.832 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:14.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.832 --rc genhtml_branch_coverage=1 00:09:14.832 --rc genhtml_function_coverage=1 00:09:14.832 --rc genhtml_legend=1 00:09:14.832 --rc geninfo_all_blocks=1 00:09:14.832 --rc geninfo_unexecuted_blocks=1 00:09:14.832 00:09:14.832 ' 00:09:14.832 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:14.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.832 --rc genhtml_branch_coverage=1 00:09:14.832 --rc genhtml_function_coverage=1 00:09:14.833 --rc genhtml_legend=1 00:09:14.833 --rc geninfo_all_blocks=1 00:09:14.833 --rc geninfo_unexecuted_blocks=1 00:09:14.833 00:09:14.833 ' 00:09:14.833 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:14.833 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:14.833 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
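The xtrace above walks through the harness's lcov version check: each version string is split on '.', '-' and ':' and the fields are compared one by one, which is why 1.15 ends up treated as older than 2. A minimal standalone sketch of that comparison (simplified here, not the scripts/common.sh helper verbatim):

# sketch only: dotted-version "less than", split on . - : and compare field by field
lt() {                        # true (exit 0) when $1 sorts before $2
    local IFS=.-:
    local -a ver1 ver2
    local v
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
    done
    return 1                  # equal versions are not "less than"
}
lt 1.15 2 && echo "lcov 1.15 predates 2"   # matches the branch taken in the trace above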
00:09:14.833 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:14.833 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:14.833 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:14.833 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:14.833 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:14.833 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:14.833 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:14.833 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:14.833 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:14.833 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:14.833 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:14.833 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:14.833 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:14.833 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:14.833 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:14.833 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:14.833 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:14.833 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:14.833 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:14.833 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:14.833 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.833 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.833 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.833 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:14.833 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.833 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:14.833 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:14.833 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:14.833 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:14.833 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:14.833 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:14.833 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:14.833 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:14.833 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:14.833 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:14.833 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:14.833 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:14.833 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:14.833 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:14.833 
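By this point nvmf/common.sh has pinned the test defaults traced above: TCP port 4420, a freshly generated host NQN/host ID pair from nvme gen-hostnqn, and plain nvme connect as the connect command. As a rough, illustrative sketch only (10.0.0.2:4420 is the address this run configures further down, and nqn.2016-06.io.spdk:testnqn is merely the harness default subsystem NQN, not necessarily what a given test connects to), those pieces combine with stock nvme-cli roughly like this:

# illustrative sketch, not taken from the harness verbatim
NVME_HOSTNQN=$(nvme gen-hostnqn)            # e.g. nqn.2014-08.org.nvmexpress:uuid:00539ede-...
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}         # UUID portion, as paired in the trace above
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:testnqn \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"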
11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:14.833 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:14.833 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:14.833 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:14.833 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:14.833 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.833 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:14.833 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.833 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:14.833 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:14.833 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:14.833 11:46:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:22.975 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:22.975 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:22.975 11:46:24 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:22.975 Found net devices under 0000:31:00.0: cvl_0_0 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:22.975 Found net devices under 0000:31:00.1: cvl_0_1 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:22.975 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:22.976 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:22.976 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:22.976 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:22.976 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:22.976 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:22.976 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:22.976 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:22.976 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:22.976 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:22.976 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:22.976 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:22.976 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:22.976 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:22.976 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:22.976 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:22.976 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms 00:09:22.976 00:09:22.976 --- 10.0.0.2 ping statistics --- 00:09:22.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.976 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms 00:09:22.976 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:22.976 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:22.976 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.350 ms 00:09:22.976 00:09:22.976 --- 10.0.0.1 ping statistics --- 00:09:22.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.976 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:09:22.976 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:22.976 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:09:22.976 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:22.976 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:22.976 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:22.976 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:22.976 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:22.976 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:22.976 11:46:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:22.976 11:46:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:22.976 11:46:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:22.976 11:46:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:22.976 11:46:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:22.976 11:46:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=1779659 00:09:22.976 11:46:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 1779659 00:09:22.976 11:46:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:22.976 11:46:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 1779659 ']' 00:09:22.976 11:46:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.976 11:46:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:22.976 11:46:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.976 11:46:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:22.976 11:46:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:22.976 [2024-10-11 11:46:25.090717] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
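The trace above is the nvmf_tcp_init step from nvmf/common.sh: it flushes the two e810 ports, moves the target-side port into a dedicated network namespace, assigns 10.0.0.1/10.0.0.2, opens TCP port 4420 in iptables, and confirms reachability with one ping in each direction before the target application is started inside that namespace. A minimal sketch of that setup, assuming the interface and namespace names from this particular run (cvl_0_0, cvl_0_1, cvl_0_0_ns_spdk), which are hardware-specific and not general defaults:

  # Sketch only: mirrors the netns/IP setup traced above; names are taken from this run.
  TARGET_IF=cvl_0_0           # port that will live inside the namespace (target side)
  INITIATOR_IF=cvl_0_1        # port that stays in the root namespace (initiator side)
  NS=cvl_0_0_ns_spdk

  ip -4 addr flush "$TARGET_IF"
  ip -4 addr flush "$INITIATOR_IF"
  ip netns add "$NS"
  ip link set "$TARGET_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
  ip link set "$INITIATOR_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up
  # Allow NVMe/TCP (port 4420) in from the initiator interface; the comment tag lets teardown strip the rule later.
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                         # initiator -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1     # target -> initiator

Because nvmf_tgt is then launched via "ip netns exec $NS", every listener address used later in the test (10.0.0.2, ports 4420/4421) refers to the interface inside the namespace.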
00:09:22.976 [2024-10-11 11:46:25.090781] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:22.976 [2024-10-11 11:46:25.179652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:22.976 [2024-10-11 11:46:25.235007] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:22.976 [2024-10-11 11:46:25.235056] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:22.976 [2024-10-11 11:46:25.235077] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:22.976 [2024-10-11 11:46:25.235084] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:22.976 [2024-10-11 11:46:25.235091] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:22.976 [2024-10-11 11:46:25.237200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:22.976 [2024-10-11 11:46:25.237363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:22.976 [2024-10-11 11:46:25.237523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.976 [2024-10-11 11:46:25.237523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:23.237 11:46:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:23.237 11:46:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:09:23.237 11:46:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:23.237 11:46:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:23.237 11:46:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:23.498 11:46:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:23.498 11:46:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:23.498 11:46:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.498 11:46:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:23.498 [2024-10-11 11:46:25.974037] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:23.498 11:46:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.498 11:46:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:23.498 11:46:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.498 11:46:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:23.498 Malloc0 00:09:23.498 11:46:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.498 11:46:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:23.498 11:46:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.498 11:46:26 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:09:23.498 11:46:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.498 11:46:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:23.498 11:46:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.498 11:46:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:23.498 11:46:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.498 11:46:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:23.498 11:46:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.498 11:46:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:23.498 [2024-10-11 11:46:26.050132] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:23.498 11:46:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.499 11:46:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:23.499 test case1: single bdev can't be used in multiple subsystems 00:09:23.499 11:46:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:23.499 11:46:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.499 11:46:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:23.499 11:46:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.499 11:46:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:23.499 11:46:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.499 11:46:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:23.499 11:46:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.499 11:46:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:23.499 11:46:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:23.499 11:46:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.499 11:46:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:23.499 [2024-10-11 11:46:26.085990] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:23.499 [2024-10-11 11:46:26.086017] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:23.499 [2024-10-11 11:46:26.086027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.499 request: 00:09:23.499 { 00:09:23.499 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:23.499 "namespace": { 00:09:23.499 "bdev_name": "Malloc0", 00:09:23.499 "no_auto_visible": false 
00:09:23.499 }, 00:09:23.499 "method": "nvmf_subsystem_add_ns", 00:09:23.499 "req_id": 1 00:09:23.499 } 00:09:23.499 Got JSON-RPC error response 00:09:23.499 response: 00:09:23.499 { 00:09:23.499 "code": -32602, 00:09:23.499 "message": "Invalid parameters" 00:09:23.499 } 00:09:23.499 11:46:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:23.499 11:46:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:23.499 11:46:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:23.499 11:46:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:23.499 Adding namespace failed - expected result. 00:09:23.499 11:46:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:23.499 test case2: host connect to nvmf target in multiple paths 00:09:23.499 11:46:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:23.499 11:46:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.499 11:46:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:23.499 [2024-10-11 11:46:26.098217] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:23.499 11:46:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.499 11:46:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:25.414 11:46:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:26.799 11:46:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:26.799 11:46:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:09:26.799 11:46:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:26.799 11:46:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:26.799 11:46:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:09:28.713 11:46:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:28.713 11:46:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:28.713 11:46:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:28.713 11:46:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:28.713 11:46:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:28.713 11:46:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:09:28.713 11:46:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:28.713 [global] 00:09:28.713 thread=1 00:09:28.713 invalidate=1 00:09:28.713 rw=write 00:09:28.713 time_based=1 00:09:28.713 runtime=1 00:09:28.713 ioengine=libaio 00:09:28.713 direct=1 00:09:28.713 bs=4096 00:09:28.713 iodepth=1 00:09:28.713 norandommap=0 00:09:28.713 numjobs=1 00:09:28.713 00:09:28.713 verify_dump=1 00:09:28.713 verify_backlog=512 00:09:28.713 verify_state_save=0 00:09:28.713 do_verify=1 00:09:28.713 verify=crc32c-intel 00:09:28.713 [job0] 00:09:28.713 filename=/dev/nvme0n1 00:09:28.713 Could not set queue depth (nvme0n1) 00:09:28.974 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:28.974 fio-3.35 00:09:28.974 Starting 1 thread 00:09:30.361 00:09:30.361 job0: (groupid=0, jobs=1): err= 0: pid=1781187: Fri Oct 11 11:46:32 2024 00:09:30.361 read: IOPS=754, BW=3017KiB/s (3089kB/s)(3020KiB/1001msec) 00:09:30.361 slat (nsec): min=6493, max=63303, avg=23246.61, stdev=9045.08 00:09:30.361 clat (usec): min=180, max=937, avg=682.86, stdev=95.64 00:09:30.361 lat (usec): min=186, max=948, avg=706.11, stdev=99.47 00:09:30.361 clat percentiles (usec): 00:09:30.361 | 1.00th=[ 441], 5.00th=[ 529], 10.00th=[ 553], 20.00th=[ 603], 00:09:30.361 | 30.00th=[ 635], 40.00th=[ 660], 50.00th=[ 685], 60.00th=[ 725], 00:09:30.361 | 70.00th=[ 750], 80.00th=[ 775], 90.00th=[ 799], 95.00th=[ 816], 00:09:30.361 | 99.00th=[ 857], 99.50th=[ 898], 99.90th=[ 938], 99.95th=[ 938], 00:09:30.361 | 99.99th=[ 938] 00:09:30.361 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:09:30.361 slat (usec): min=9, max=24602, avg=45.23, stdev=768.27 00:09:30.361 clat (usec): min=110, max=776, avg=400.26, stdev=124.46 00:09:30.361 lat (usec): min=120, max=25286, avg=445.49, stdev=788.40 00:09:30.361 clat percentiles (usec): 00:09:30.361 | 1.00th=[ 165], 5.00th=[ 239], 10.00th=[ 251], 20.00th=[ 269], 00:09:30.361 | 30.00th=[ 306], 40.00th=[ 363], 50.00th=[ 396], 60.00th=[ 433], 00:09:30.361 | 70.00th=[ 478], 80.00th=[ 510], 90.00th=[ 578], 95.00th=[ 619], 00:09:30.361 | 99.00th=[ 676], 99.50th=[ 685], 99.90th=[ 717], 99.95th=[ 775], 00:09:30.361 | 99.99th=[ 775] 00:09:30.361 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:30.361 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:30.361 lat (usec) : 250=5.40%, 500=40.64%, 750=41.77%, 1000=12.20% 00:09:30.361 cpu : usr=2.90%, sys=4.90%, ctx=1782, majf=0, minf=1 00:09:30.361 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:30.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.361 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.361 issued rwts: total=755,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:30.361 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:30.361 00:09:30.361 Run status group 0 (all jobs): 00:09:30.361 READ: bw=3017KiB/s (3089kB/s), 3017KiB/s-3017KiB/s (3089kB/s-3089kB/s), io=3020KiB (3092kB), run=1001-1001msec 00:09:30.361 WRITE: bw=4092KiB/s (4190kB/s), 4092KiB/s-4092KiB/s (4190kB/s-4190kB/s), io=4096KiB (4194kB), run=1001-1001msec 00:09:30.361 00:09:30.361 Disk stats (read/write): 00:09:30.361 nvme0n1: ios=636/1024, merge=0/0, ticks=1357/358, in_queue=1715, util=98.80% 00:09:30.361 11:46:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:30.361 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:30.361 11:46:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:30.361 11:46:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:09:30.361 11:46:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:30.361 11:46:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:30.361 11:46:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:30.361 11:46:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:30.361 11:46:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:09:30.361 11:46:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:30.361 11:46:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:30.361 11:46:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:30.361 11:46:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:30.361 11:46:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:30.361 11:46:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:30.361 11:46:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:30.361 11:46:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:30.361 rmmod nvme_tcp 00:09:30.361 rmmod nvme_fabrics 00:09:30.361 rmmod nvme_keyring 00:09:30.361 11:46:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:30.361 11:46:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:30.361 11:46:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:30.361 11:46:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 1779659 ']' 00:09:30.361 11:46:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 1779659 00:09:30.361 11:46:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 1779659 ']' 00:09:30.361 11:46:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 1779659 00:09:30.361 11:46:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:09:30.362 11:46:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:30.362 11:46:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1779659 00:09:30.362 11:46:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:30.362 11:46:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:30.362 11:46:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1779659' 00:09:30.362 killing process with pid 1779659 00:09:30.362 11:46:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 1779659 00:09:30.362 11:46:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 
-- # wait 1779659 00:09:30.623 11:46:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:30.623 11:46:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:30.623 11:46:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:30.623 11:46:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:30.623 11:46:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:09:30.623 11:46:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:30.623 11:46:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:09:30.623 11:46:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:30.623 11:46:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:30.623 11:46:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:30.623 11:46:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:30.623 11:46:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:32.536 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:32.536 00:09:32.536 real 0m18.122s 00:09:32.536 user 0m47.672s 00:09:32.536 sys 0m6.826s 00:09:32.536 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:32.536 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:32.536 ************************************ 00:09:32.536 END TEST nvmf_nmic 00:09:32.536 ************************************ 00:09:32.797 11:46:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:32.797 11:46:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:32.797 11:46:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:32.797 11:46:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:32.797 ************************************ 00:09:32.797 START TEST nvmf_fio_target 00:09:32.797 ************************************ 00:09:32.797 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:32.797 * Looking for test storage... 
00:09:32.797 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:32.797 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:32.797 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:09:32.797 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:32.797 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:32.797 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:32.797 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:32.797 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:32.797 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:32.797 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:32.797 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:32.797 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:32.797 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:32.797 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:32.797 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:32.797 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:32.797 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:32.797 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:32.797 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:32.797 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:32.797 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:32.797 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:32.797 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:32.797 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:32.797 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:32.797 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:32.797 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:32.797 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:32.797 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:32.797 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:32.797 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:32.797 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:32.797 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:32.797 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:32.797 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:32.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.797 --rc genhtml_branch_coverage=1 00:09:32.797 --rc genhtml_function_coverage=1 00:09:32.797 --rc genhtml_legend=1 00:09:32.797 --rc geninfo_all_blocks=1 00:09:32.797 --rc geninfo_unexecuted_blocks=1 00:09:32.797 00:09:32.797 ' 00:09:32.797 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:32.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.797 --rc genhtml_branch_coverage=1 00:09:32.797 --rc genhtml_function_coverage=1 00:09:32.797 --rc genhtml_legend=1 00:09:32.797 --rc geninfo_all_blocks=1 00:09:32.797 --rc geninfo_unexecuted_blocks=1 00:09:32.797 00:09:32.797 ' 00:09:32.797 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:32.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.797 --rc genhtml_branch_coverage=1 00:09:32.797 --rc genhtml_function_coverage=1 00:09:32.797 --rc genhtml_legend=1 00:09:32.797 --rc geninfo_all_blocks=1 00:09:32.797 --rc geninfo_unexecuted_blocks=1 00:09:32.797 00:09:32.797 ' 00:09:32.797 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:32.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.797 --rc genhtml_branch_coverage=1 00:09:32.797 --rc genhtml_function_coverage=1 00:09:32.797 --rc genhtml_legend=1 00:09:32.797 --rc geninfo_all_blocks=1 00:09:32.798 --rc geninfo_unexecuted_blocks=1 00:09:32.798 00:09:32.798 ' 00:09:32.798 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:32.798 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:09:33.059 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:33.059 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:33.059 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:33.059 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:33.059 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:33.059 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:33.059 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:33.059 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:33.059 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:33.059 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:33.059 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:33.059 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:33.059 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:33.059 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:33.059 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:33.059 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:33.059 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:33.059 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:33.059 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:33.059 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:33.059 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:33.059 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.059 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.059 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.059 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:33.059 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.059 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:33.059 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:33.059 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:33.059 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:33.059 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:33.059 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:33.059 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:33.059 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:33.059 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:33.059 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:33.059 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:33.059 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:33.059 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:33.059 11:46:35 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:33.059 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:33.059 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:33.059 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:33.059 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:33.059 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:33.059 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:33.059 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.059 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:33.059 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.059 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:33.059 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:33.059 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:33.059 11:46:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:41.207 11:46:42 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:41.207 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:41.207 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:41.207 11:46:42 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:41.207 Found net devices under 0000:31:00.0: cvl_0_0 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:41.207 Found net devices under 0000:31:00.1: cvl_0_1 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:41.207 11:46:42 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:41.207 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:41.208 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:41.208 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:41.208 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:41.208 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:41.208 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:41.208 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:41.208 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:41.208 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:41.208 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:41.208 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:41.208 11:46:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:41.208 11:46:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:41.208 11:46:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:41.208 11:46:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:41.208 11:46:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:41.208 11:46:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:41.208 11:46:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:41.208 11:46:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:41.208 11:46:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:41.208 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:41.208 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms 00:09:41.208 00:09:41.208 --- 10.0.0.2 ping statistics --- 00:09:41.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.208 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:09:41.208 11:46:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:41.208 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:41.208 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:09:41.208 00:09:41.208 --- 10.0.0.1 ping statistics --- 00:09:41.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.208 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:09:41.208 11:46:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:41.208 11:46:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:09:41.208 11:46:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:41.208 11:46:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:41.208 11:46:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:41.208 11:46:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:41.208 11:46:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:41.208 11:46:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:41.208 11:46:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:41.208 11:46:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:41.208 11:46:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:41.208 11:46:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:41.208 11:46:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:41.208 11:46:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=1785921 00:09:41.208 11:46:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 1785921 00:09:41.208 11:46:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:41.208 11:46:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 1785921 ']' 00:09:41.208 11:46:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.208 11:46:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:41.208 11:46:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.208 11:46:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:41.208 11:46:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:41.208 [2024-10-11 11:46:43.323554] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:09:41.208 [2024-10-11 11:46:43.323619] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:41.208 [2024-10-11 11:46:43.414862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:41.208 [2024-10-11 11:46:43.468570] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:41.208 [2024-10-11 11:46:43.468621] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:41.208 [2024-10-11 11:46:43.468630] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:41.208 [2024-10-11 11:46:43.468638] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:41.208 [2024-10-11 11:46:43.468644] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:41.208 [2024-10-11 11:46:43.471112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:41.208 [2024-10-11 11:46:43.471221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:41.208 [2024-10-11 11:46:43.471379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:41.208 [2024-10-11 11:46:43.471382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.470 11:46:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:41.470 11:46:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:09:41.470 11:46:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:41.470 11:46:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:41.470 11:46:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:41.732 11:46:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:41.732 11:46:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:41.732 [2024-10-11 11:46:44.352221] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:41.732 11:46:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:41.994 11:46:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:41.994 11:46:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:42.255 11:46:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:42.255 11:46:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:42.517 11:46:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:42.517 11:46:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:42.778 11:46:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:42.778 11:46:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:42.778 11:46:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:43.040 11:46:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:43.040 11:46:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:43.300 11:46:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:43.300 11:46:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:43.561 11:46:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:43.561 11:46:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:43.821 11:46:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:43.821 11:46:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:43.821 11:46:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:44.081 11:46:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:44.082 11:46:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:44.343 11:46:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:44.343 [2024-10-11 11:46:46.968588] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:44.343 11:46:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:44.603 11:46:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:44.863 11:46:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:46.248 11:46:48 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:46.248 11:46:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:09:46.248 11:46:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:46.248 11:46:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:09:46.248 11:46:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:09:46.248 11:46:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:09:48.234 11:46:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:48.234 11:46:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:48.234 11:46:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:48.234 11:46:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:09:48.234 11:46:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:48.234 11:46:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:09:48.234 11:46:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:48.495 [global] 00:09:48.495 thread=1 00:09:48.495 invalidate=1 00:09:48.495 rw=write 00:09:48.495 time_based=1 00:09:48.495 runtime=1 00:09:48.495 ioengine=libaio 00:09:48.495 direct=1 00:09:48.495 bs=4096 00:09:48.495 iodepth=1 00:09:48.495 norandommap=0 00:09:48.495 numjobs=1 00:09:48.495 00:09:48.495 verify_dump=1 00:09:48.495 verify_backlog=512 00:09:48.495 verify_state_save=0 00:09:48.495 do_verify=1 00:09:48.495 verify=crc32c-intel 00:09:48.495 [job0] 00:09:48.495 filename=/dev/nvme0n1 00:09:48.495 [job1] 00:09:48.495 filename=/dev/nvme0n2 00:09:48.495 [job2] 00:09:48.495 filename=/dev/nvme0n3 00:09:48.495 [job3] 00:09:48.495 filename=/dev/nvme0n4 00:09:48.495 Could not set queue depth (nvme0n1) 00:09:48.495 Could not set queue depth (nvme0n2) 00:09:48.495 Could not set queue depth (nvme0n3) 00:09:48.495 Could not set queue depth (nvme0n4) 00:09:48.756 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:48.756 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:48.756 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:48.756 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:48.756 fio-3.35 00:09:48.756 Starting 4 threads 00:09:50.141 00:09:50.141 job0: (groupid=0, jobs=1): err= 0: pid=1787661: Fri Oct 11 11:46:52 2024 00:09:50.141 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:50.141 slat (nsec): min=6645, max=45583, avg=26347.45, stdev=5642.03 00:09:50.141 clat (usec): min=210, max=1194, avg=946.04, stdev=162.01 00:09:50.141 lat (usec): min=217, max=1221, avg=972.38, stdev=166.24 00:09:50.141 clat percentiles (usec): 00:09:50.141 | 1.00th=[ 412], 5.00th=[ 490], 10.00th=[ 816], 20.00th=[ 914], 
00:09:50.141 | 30.00th=[ 947], 40.00th=[ 971], 50.00th=[ 988], 60.00th=[ 1004], 00:09:50.141 | 70.00th=[ 1020], 80.00th=[ 1045], 90.00th=[ 1074], 95.00th=[ 1090], 00:09:50.141 | 99.00th=[ 1139], 99.50th=[ 1139], 99.90th=[ 1188], 99.95th=[ 1188], 00:09:50.141 | 99.99th=[ 1188] 00:09:50.141 write: IOPS=964, BW=3856KiB/s (3949kB/s)(3860KiB/1001msec); 0 zone resets 00:09:50.141 slat (usec): min=9, max=119, avg=29.47, stdev=11.61 00:09:50.141 clat (usec): min=91, max=959, avg=480.04, stdev=176.01 00:09:50.141 lat (usec): min=102, max=994, avg=509.51, stdev=180.47 00:09:50.141 clat percentiles (usec): 00:09:50.141 | 1.00th=[ 121], 5.00th=[ 243], 10.00th=[ 269], 20.00th=[ 318], 00:09:50.141 | 30.00th=[ 359], 40.00th=[ 388], 50.00th=[ 445], 60.00th=[ 537], 00:09:50.141 | 70.00th=[ 603], 80.00th=[ 660], 90.00th=[ 717], 95.00th=[ 775], 00:09:50.141 | 99.00th=[ 840], 99.50th=[ 898], 99.90th=[ 963], 99.95th=[ 963], 00:09:50.141 | 99.99th=[ 963] 00:09:50.141 bw ( KiB/s): min= 4096, max= 4096, per=31.03%, avg=4096.00, stdev= 0.00, samples=1 00:09:50.141 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:50.141 lat (usec) : 100=0.07%, 250=4.33%, 500=34.06%, 750=25.73%, 1000=21.06% 00:09:50.141 lat (msec) : 2=14.76% 00:09:50.141 cpu : usr=3.20%, sys=4.60%, ctx=1479, majf=0, minf=1 00:09:50.141 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:50.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.141 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.141 issued rwts: total=512,965,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:50.141 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:50.141 job1: (groupid=0, jobs=1): err= 0: pid=1787678: Fri Oct 11 11:46:52 2024 00:09:50.141 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:50.141 slat (nsec): min=25692, max=44588, avg=26511.12, stdev=2207.27 00:09:50.141 clat (usec): min=720, max=1223, avg=993.88, stdev=89.54 00:09:50.141 lat (usec): min=746, max=1249, avg=1020.39, stdev=89.45 00:09:50.141 clat percentiles (usec): 00:09:50.141 | 1.00th=[ 742], 5.00th=[ 807], 10.00th=[ 865], 20.00th=[ 930], 00:09:50.141 | 30.00th=[ 979], 40.00th=[ 996], 50.00th=[ 1012], 60.00th=[ 1029], 00:09:50.141 | 70.00th=[ 1045], 80.00th=[ 1057], 90.00th=[ 1090], 95.00th=[ 1106], 00:09:50.141 | 99.00th=[ 1156], 99.50th=[ 1172], 99.90th=[ 1221], 99.95th=[ 1221], 00:09:50.141 | 99.99th=[ 1221] 00:09:50.141 write: IOPS=713, BW=2853KiB/s (2922kB/s)(2856KiB/1001msec); 0 zone resets 00:09:50.141 slat (nsec): min=9949, max=64841, avg=30808.42, stdev=9924.41 00:09:50.141 clat (usec): min=233, max=1211, avg=624.92, stdev=138.09 00:09:50.141 lat (usec): min=245, max=1247, avg=655.73, stdev=142.09 00:09:50.141 clat percentiles (usec): 00:09:50.141 | 1.00th=[ 310], 5.00th=[ 388], 10.00th=[ 445], 20.00th=[ 506], 00:09:50.141 | 30.00th=[ 562], 40.00th=[ 603], 50.00th=[ 627], 60.00th=[ 660], 00:09:50.141 | 70.00th=[ 701], 80.00th=[ 742], 90.00th=[ 791], 95.00th=[ 840], 00:09:50.141 | 99.00th=[ 963], 99.50th=[ 1004], 99.90th=[ 1205], 99.95th=[ 1205], 00:09:50.141 | 99.99th=[ 1205] 00:09:50.142 bw ( KiB/s): min= 4096, max= 4096, per=31.03%, avg=4096.00, stdev= 0.00, samples=1 00:09:50.142 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:50.142 lat (usec) : 250=0.08%, 500=11.09%, 750=37.19%, 1000=27.24% 00:09:50.142 lat (msec) : 2=24.39% 00:09:50.142 cpu : usr=1.60%, sys=4.00%, ctx=1227, majf=0, minf=1 00:09:50.142 IO depths : 1=100.0%, 2=0.0%, 
4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:50.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.142 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.142 issued rwts: total=512,714,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:50.142 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:50.142 job2: (groupid=0, jobs=1): err= 0: pid=1787698: Fri Oct 11 11:46:52 2024 00:09:50.142 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:09:50.142 slat (nsec): min=6570, max=44596, avg=20189.01, stdev=9503.39 00:09:50.142 clat (usec): min=143, max=40631, avg=593.89, stdev=1261.87 00:09:50.142 lat (usec): min=150, max=40637, avg=614.08, stdev=1261.96 00:09:50.142 clat percentiles (usec): 00:09:50.142 | 1.00th=[ 221], 5.00th=[ 265], 10.00th=[ 363], 20.00th=[ 437], 00:09:50.142 | 30.00th=[ 469], 40.00th=[ 510], 50.00th=[ 562], 60.00th=[ 619], 00:09:50.142 | 70.00th=[ 644], 80.00th=[ 668], 90.00th=[ 725], 95.00th=[ 840], 00:09:50.142 | 99.00th=[ 922], 99.50th=[ 947], 99.90th=[ 979], 99.95th=[40633], 00:09:50.142 | 99.99th=[40633] 00:09:50.142 write: IOPS=1130, BW=4523KiB/s (4632kB/s)(4528KiB/1001msec); 0 zone resets 00:09:50.142 slat (usec): min=9, max=299, avg=25.03, stdev=14.92 00:09:50.142 clat (usec): min=102, max=705, avg=291.06, stdev=157.72 00:09:50.142 lat (usec): min=114, max=742, avg=316.09, stdev=167.50 00:09:50.142 clat percentiles (usec): 00:09:50.142 | 1.00th=[ 108], 5.00th=[ 114], 10.00th=[ 118], 20.00th=[ 126], 00:09:50.142 | 30.00th=[ 137], 40.00th=[ 237], 50.00th=[ 251], 60.00th=[ 318], 00:09:50.142 | 70.00th=[ 392], 80.00th=[ 429], 90.00th=[ 545], 95.00th=[ 594], 00:09:50.142 | 99.00th=[ 644], 99.50th=[ 668], 99.90th=[ 693], 99.95th=[ 709], 00:09:50.142 | 99.99th=[ 709] 00:09:50.142 bw ( KiB/s): min= 5968, max= 5968, per=45.21%, avg=5968.00, stdev= 0.00, samples=1 00:09:50.142 iops : min= 1492, max= 1492, avg=1492.00, stdev= 0.00, samples=1 00:09:50.142 lat (usec) : 250=27.64%, 500=36.55%, 750=31.86%, 1000=3.90% 00:09:50.142 lat (msec) : 50=0.05% 00:09:50.142 cpu : usr=3.70%, sys=4.00%, ctx=2159, majf=0, minf=1 00:09:50.142 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:50.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.142 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.142 issued rwts: total=1024,1132,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:50.142 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:50.142 job3: (groupid=0, jobs=1): err= 0: pid=1787705: Fri Oct 11 11:46:52 2024 00:09:50.142 read: IOPS=14, BW=59.6KiB/s (61.0kB/s)(60.0KiB/1007msec) 00:09:50.142 slat (nsec): min=26316, max=27005, avg=26556.60, stdev=191.72 00:09:50.142 clat (usec): min=40960, max=42203, avg=41782.65, stdev=379.32 00:09:50.142 lat (usec): min=40986, max=42229, avg=41809.21, stdev=379.36 00:09:50.142 clat percentiles (usec): 00:09:50.142 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:09:50.142 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:09:50.142 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:50.142 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:50.142 | 99.99th=[42206] 00:09:50.142 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:09:50.142 slat (nsec): min=9669, max=67472, avg=32630.87, stdev=7579.34 00:09:50.142 clat (usec): min=294, max=1411, avg=702.15, 
stdev=148.62 00:09:50.142 lat (usec): min=306, max=1463, avg=734.78, stdev=149.92 00:09:50.142 clat percentiles (usec): 00:09:50.142 | 1.00th=[ 371], 5.00th=[ 461], 10.00th=[ 510], 20.00th=[ 570], 00:09:50.142 | 30.00th=[ 611], 40.00th=[ 668], 50.00th=[ 709], 60.00th=[ 758], 00:09:50.142 | 70.00th=[ 799], 80.00th=[ 840], 90.00th=[ 889], 95.00th=[ 914], 00:09:50.142 | 99.00th=[ 971], 99.50th=[ 1037], 99.90th=[ 1418], 99.95th=[ 1418], 00:09:50.142 | 99.99th=[ 1418] 00:09:50.142 bw ( KiB/s): min= 4096, max= 4096, per=31.03%, avg=4096.00, stdev= 0.00, samples=1 00:09:50.142 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:50.142 lat (usec) : 500=8.73%, 750=48.96%, 1000=38.90% 00:09:50.142 lat (msec) : 2=0.57%, 50=2.85% 00:09:50.142 cpu : usr=1.09%, sys=2.09%, ctx=527, majf=0, minf=2 00:09:50.142 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:50.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.142 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.142 issued rwts: total=15,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:50.142 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:50.142 00:09:50.142 Run status group 0 (all jobs): 00:09:50.142 READ: bw=8195KiB/s (8391kB/s), 59.6KiB/s-4092KiB/s (61.0kB/s-4190kB/s), io=8252KiB (8450kB), run=1001-1007msec 00:09:50.142 WRITE: bw=12.9MiB/s (13.5MB/s), 2034KiB/s-4523KiB/s (2083kB/s-4632kB/s), io=13.0MiB (13.6MB), run=1001-1007msec 00:09:50.142 00:09:50.142 Disk stats (read/write): 00:09:50.142 nvme0n1: ios=555/512, merge=0/0, ticks=755/244, in_queue=999, util=83.87% 00:09:50.142 nvme0n2: ios=528/512, merge=0/0, ticks=578/297, in_queue=875, util=90.81% 00:09:50.142 nvme0n3: ios=902/1024, merge=0/0, ticks=601/275, in_queue=876, util=95.03% 00:09:50.142 nvme0n4: ios=67/512, merge=0/0, ticks=530/288, in_queue=818, util=97.43% 00:09:50.142 11:46:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:50.142 [global] 00:09:50.142 thread=1 00:09:50.142 invalidate=1 00:09:50.142 rw=randwrite 00:09:50.142 time_based=1 00:09:50.142 runtime=1 00:09:50.142 ioengine=libaio 00:09:50.142 direct=1 00:09:50.142 bs=4096 00:09:50.142 iodepth=1 00:09:50.142 norandommap=0 00:09:50.142 numjobs=1 00:09:50.142 00:09:50.142 verify_dump=1 00:09:50.142 verify_backlog=512 00:09:50.142 verify_state_save=0 00:09:50.142 do_verify=1 00:09:50.142 verify=crc32c-intel 00:09:50.142 [job0] 00:09:50.142 filename=/dev/nvme0n1 00:09:50.142 [job1] 00:09:50.142 filename=/dev/nvme0n2 00:09:50.142 [job2] 00:09:50.142 filename=/dev/nvme0n3 00:09:50.142 [job3] 00:09:50.142 filename=/dev/nvme0n4 00:09:50.142 Could not set queue depth (nvme0n1) 00:09:50.142 Could not set queue depth (nvme0n2) 00:09:50.142 Could not set queue depth (nvme0n3) 00:09:50.142 Could not set queue depth (nvme0n4) 00:09:50.403 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:50.403 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:50.403 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:50.403 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:50.403 fio-3.35 00:09:50.403 Starting 4 threads 00:09:51.790 00:09:51.790 
job0: (groupid=0, jobs=1): err= 0: pid=1788159: Fri Oct 11 11:46:54 2024 00:09:51.790 read: IOPS=18, BW=73.5KiB/s (75.3kB/s)(76.0KiB/1034msec) 00:09:51.790 slat (nsec): min=27247, max=28346, avg=27612.21, stdev=332.26 00:09:51.790 clat (usec): min=902, max=42048, avg=39363.60, stdev=9323.72 00:09:51.790 lat (usec): min=930, max=42076, avg=39391.21, stdev=9323.61 00:09:51.790 clat percentiles (usec): 00:09:51.790 | 1.00th=[ 906], 5.00th=[ 906], 10.00th=[41157], 20.00th=[41157], 00:09:51.790 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:09:51.790 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:51.790 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:51.790 | 99.99th=[42206] 00:09:51.790 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:09:51.790 slat (nsec): min=9377, max=54600, avg=27763.44, stdev=10874.07 00:09:51.790 clat (usec): min=194, max=1542, avg=521.81, stdev=160.84 00:09:51.790 lat (usec): min=228, max=1581, avg=549.58, stdev=163.51 00:09:51.790 clat percentiles (usec): 00:09:51.790 | 1.00th=[ 208], 5.00th=[ 273], 10.00th=[ 306], 20.00th=[ 375], 00:09:51.790 | 30.00th=[ 441], 40.00th=[ 490], 50.00th=[ 523], 60.00th=[ 562], 00:09:51.790 | 70.00th=[ 611], 80.00th=[ 660], 90.00th=[ 709], 95.00th=[ 750], 00:09:51.790 | 99.00th=[ 832], 99.50th=[ 963], 99.90th=[ 1549], 99.95th=[ 1549], 00:09:51.790 | 99.99th=[ 1549] 00:09:51.790 bw ( KiB/s): min= 4096, max= 4096, per=41.36%, avg=4096.00, stdev= 0.00, samples=1 00:09:51.790 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:51.790 lat (usec) : 250=2.82%, 500=38.98%, 750=50.28%, 1000=4.14% 00:09:51.790 lat (msec) : 2=0.38%, 50=3.39% 00:09:51.790 cpu : usr=0.68%, sys=1.84%, ctx=533, majf=0, minf=1 00:09:51.790 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:51.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.790 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.790 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:51.790 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:51.790 job1: (groupid=0, jobs=1): err= 0: pid=1788176: Fri Oct 11 11:46:54 2024 00:09:51.790 read: IOPS=474, BW=1898KiB/s (1944kB/s)(1900KiB/1001msec) 00:09:51.790 slat (nsec): min=6908, max=47311, avg=24452.62, stdev=3451.76 00:09:51.790 clat (usec): min=600, max=41246, avg=1412.46, stdev=4486.41 00:09:51.790 lat (usec): min=625, max=41270, avg=1436.92, stdev=4486.43 00:09:51.790 clat percentiles (usec): 00:09:51.790 | 1.00th=[ 652], 5.00th=[ 717], 10.00th=[ 758], 20.00th=[ 824], 00:09:51.790 | 30.00th=[ 857], 40.00th=[ 889], 50.00th=[ 922], 60.00th=[ 947], 00:09:51.790 | 70.00th=[ 971], 80.00th=[ 988], 90.00th=[ 1020], 95.00th=[ 1090], 00:09:51.790 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:51.790 | 99.99th=[41157] 00:09:51.790 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:09:51.790 slat (nsec): min=9255, max=75915, avg=29180.21, stdev=8012.20 00:09:51.790 clat (usec): min=133, max=1029, avg=575.42, stdev=146.20 00:09:51.790 lat (usec): min=144, max=1060, avg=604.60, stdev=148.12 00:09:51.790 clat percentiles (usec): 00:09:51.790 | 1.00th=[ 258], 5.00th=[ 338], 10.00th=[ 400], 20.00th=[ 449], 00:09:51.790 | 30.00th=[ 502], 40.00th=[ 537], 50.00th=[ 570], 60.00th=[ 603], 00:09:51.790 | 70.00th=[ 652], 80.00th=[ 693], 90.00th=[ 758], 95.00th=[ 824], 
00:09:51.790 | 99.00th=[ 930], 99.50th=[ 1012], 99.90th=[ 1029], 99.95th=[ 1029], 00:09:51.790 | 99.99th=[ 1029] 00:09:51.790 bw ( KiB/s): min= 4096, max= 4096, per=41.36%, avg=4096.00, stdev= 0.00, samples=1 00:09:51.790 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:51.790 lat (usec) : 250=0.51%, 500=14.69%, 750=35.06%, 1000=41.84% 00:09:51.790 lat (msec) : 2=7.29%, 50=0.61% 00:09:51.790 cpu : usr=1.20%, sys=3.10%, ctx=988, majf=0, minf=1 00:09:51.790 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:51.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.790 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.790 issued rwts: total=475,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:51.790 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:51.790 job2: (groupid=0, jobs=1): err= 0: pid=1788200: Fri Oct 11 11:46:54 2024 00:09:51.790 read: IOPS=637, BW=2549KiB/s (2611kB/s)(2552KiB/1001msec) 00:09:51.790 slat (nsec): min=4450, max=43986, avg=10076.53, stdev=8372.26 00:09:51.790 clat (usec): min=152, max=41001, avg=883.61, stdev=3555.47 00:09:51.790 lat (usec): min=158, max=41012, avg=893.69, stdev=3555.33 00:09:51.790 clat percentiles (usec): 00:09:51.790 | 1.00th=[ 180], 5.00th=[ 241], 10.00th=[ 347], 20.00th=[ 416], 00:09:51.790 | 30.00th=[ 523], 40.00th=[ 553], 50.00th=[ 578], 60.00th=[ 603], 00:09:51.790 | 70.00th=[ 644], 80.00th=[ 734], 90.00th=[ 791], 95.00th=[ 832], 00:09:51.790 | 99.00th=[ 930], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:09:51.790 | 99.99th=[41157] 00:09:51.790 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:09:51.790 slat (nsec): min=5038, max=52179, avg=21841.03, stdev=12177.48 00:09:51.790 clat (usec): min=100, max=1041, avg=390.89, stdev=159.52 00:09:51.790 lat (usec): min=107, max=1054, avg=412.73, stdev=163.75 00:09:51.790 clat percentiles (usec): 00:09:51.790 | 1.00th=[ 105], 5.00th=[ 119], 10.00th=[ 137], 20.00th=[ 258], 00:09:51.790 | 30.00th=[ 338], 40.00th=[ 371], 50.00th=[ 383], 60.00th=[ 404], 00:09:51.790 | 70.00th=[ 482], 80.00th=[ 545], 90.00th=[ 586], 95.00th=[ 619], 00:09:51.790 | 99.00th=[ 766], 99.50th=[ 857], 99.90th=[ 955], 99.95th=[ 1045], 00:09:51.790 | 99.99th=[ 1045] 00:09:51.790 bw ( KiB/s): min= 4096, max= 4096, per=41.36%, avg=4096.00, stdev= 0.00, samples=1 00:09:51.790 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:51.790 lat (usec) : 250=14.56%, 500=40.07%, 750=38.15%, 1000=6.86% 00:09:51.790 lat (msec) : 2=0.06%, 50=0.30% 00:09:51.790 cpu : usr=1.50%, sys=3.00%, ctx=1666, majf=0, minf=1 00:09:51.790 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:51.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.790 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.790 issued rwts: total=638,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:51.790 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:51.790 job3: (groupid=0, jobs=1): err= 0: pid=1788209: Fri Oct 11 11:46:54 2024 00:09:51.790 read: IOPS=16, BW=66.3KiB/s (67.9kB/s)(68.0KiB/1026msec) 00:09:51.790 slat (nsec): min=24758, max=25407, avg=25061.06, stdev=205.20 00:09:51.790 clat (usec): min=41033, max=42075, avg=41793.41, stdev=360.77 00:09:51.790 lat (usec): min=41058, max=42100, avg=41818.47, stdev=360.70 00:09:51.790 clat percentiles (usec): 00:09:51.790 | 1.00th=[41157], 5.00th=[41157], 
10.00th=[41157], 20.00th=[41681], 00:09:51.790 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:09:51.790 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:51.790 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:51.790 | 99.99th=[42206] 00:09:51.790 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:09:51.790 slat (nsec): min=9131, max=85164, avg=28568.89, stdev=8892.70 00:09:51.790 clat (usec): min=128, max=905, avg=578.39, stdev=121.51 00:09:51.790 lat (usec): min=139, max=952, avg=606.96, stdev=124.59 00:09:51.790 clat percentiles (usec): 00:09:51.790 | 1.00th=[ 293], 5.00th=[ 371], 10.00th=[ 420], 20.00th=[ 474], 00:09:51.790 | 30.00th=[ 519], 40.00th=[ 562], 50.00th=[ 594], 60.00th=[ 619], 00:09:51.790 | 70.00th=[ 644], 80.00th=[ 685], 90.00th=[ 717], 95.00th=[ 775], 00:09:51.790 | 99.00th=[ 840], 99.50th=[ 889], 99.90th=[ 906], 99.95th=[ 906], 00:09:51.790 | 99.99th=[ 906] 00:09:51.790 bw ( KiB/s): min= 4096, max= 4096, per=41.36%, avg=4096.00, stdev= 0.00, samples=1 00:09:51.790 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:51.790 lat (usec) : 250=0.19%, 500=24.01%, 750=65.97%, 1000=6.62% 00:09:51.790 lat (msec) : 50=3.21% 00:09:51.790 cpu : usr=0.49%, sys=1.66%, ctx=530, majf=0, minf=1 00:09:51.790 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:51.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.790 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.790 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:51.790 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:51.790 00:09:51.790 Run status group 0 (all jobs): 00:09:51.790 READ: bw=4445KiB/s (4552kB/s), 66.3KiB/s-2549KiB/s (67.9kB/s-2611kB/s), io=4596KiB (4706kB), run=1001-1034msec 00:09:51.790 WRITE: bw=9903KiB/s (10.1MB/s), 1981KiB/s-4092KiB/s (2028kB/s-4190kB/s), io=10.0MiB (10.5MB), run=1001-1034msec 00:09:51.790 00:09:51.790 Disk stats (read/write): 00:09:51.790 nvme0n1: ios=57/512, merge=0/0, ticks=627/235, in_queue=862, util=85.47% 00:09:51.790 nvme0n2: ios=464/512, merge=0/0, ticks=583/276, in_queue=859, util=91.11% 00:09:51.790 nvme0n3: ios=569/828, merge=0/0, ticks=609/303, in_queue=912, util=95.65% 00:09:51.790 nvme0n4: ios=69/512, merge=0/0, ticks=607/273, in_queue=880, util=97.53% 00:09:51.790 11:46:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:51.790 [global] 00:09:51.790 thread=1 00:09:51.790 invalidate=1 00:09:51.790 rw=write 00:09:51.790 time_based=1 00:09:51.790 runtime=1 00:09:51.790 ioengine=libaio 00:09:51.790 direct=1 00:09:51.790 bs=4096 00:09:51.791 iodepth=128 00:09:51.791 norandommap=0 00:09:51.791 numjobs=1 00:09:51.791 00:09:51.791 verify_dump=1 00:09:51.791 verify_backlog=512 00:09:51.791 verify_state_save=0 00:09:51.791 do_verify=1 00:09:51.791 verify=crc32c-intel 00:09:51.791 [job0] 00:09:51.791 filename=/dev/nvme0n1 00:09:51.791 [job1] 00:09:51.791 filename=/dev/nvme0n2 00:09:51.791 [job2] 00:09:51.791 filename=/dev/nvme0n3 00:09:51.791 [job3] 00:09:51.791 filename=/dev/nvme0n4 00:09:51.791 Could not set queue depth (nvme0n1) 00:09:51.791 Could not set queue depth (nvme0n2) 00:09:51.791 Could not set queue depth (nvme0n3) 00:09:51.791 Could not set queue depth (nvme0n4) 00:09:52.051 job0: (g=0): rw=write, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:52.051 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:52.051 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:52.051 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:52.051 fio-3.35 00:09:52.051 Starting 4 threads 00:09:53.440 00:09:53.440 job0: (groupid=0, jobs=1): err= 0: pid=1788666: Fri Oct 11 11:46:55 2024 00:09:53.440 read: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec) 00:09:53.440 slat (nsec): min=946, max=15020k, avg=107980.75, stdev=676391.58 00:09:53.440 clat (usec): min=2785, max=41958, avg=13814.97, stdev=5602.55 00:09:53.440 lat (usec): min=2788, max=41985, avg=13922.95, stdev=5664.44 00:09:53.440 clat percentiles (usec): 00:09:53.440 | 1.00th=[ 3589], 5.00th=[ 4883], 10.00th=[ 6063], 20.00th=[ 8979], 00:09:53.440 | 30.00th=[12256], 40.00th=[13304], 50.00th=[14484], 60.00th=[15270], 00:09:53.440 | 70.00th=[15533], 80.00th=[16712], 90.00th=[19006], 95.00th=[23987], 00:09:53.440 | 99.00th=[33817], 99.50th=[34866], 99.90th=[35390], 99.95th=[39060], 00:09:53.440 | 99.99th=[42206] 00:09:53.440 write: IOPS=5024, BW=19.6MiB/s (20.6MB/s)(19.7MiB/1005msec); 0 zone resets 00:09:53.440 slat (nsec): min=1626, max=8657.9k, avg=93125.62, stdev=496670.05 00:09:53.440 clat (usec): min=1328, max=35519, avg=12640.58, stdev=5800.59 00:09:53.440 lat (usec): min=1339, max=35552, avg=12733.70, stdev=5845.47 00:09:53.440 clat percentiles (usec): 00:09:53.440 | 1.00th=[ 3294], 5.00th=[ 4293], 10.00th=[ 5735], 20.00th=[ 6063], 00:09:53.440 | 30.00th=[ 8848], 40.00th=[11207], 50.00th=[12387], 60.00th=[14615], 00:09:53.440 | 70.00th=[15401], 80.00th=[17433], 90.00th=[20055], 95.00th=[22938], 00:09:53.440 | 99.00th=[27132], 99.50th=[27132], 99.90th=[27395], 99.95th=[30278], 00:09:53.440 | 99.99th=[35390] 00:09:53.440 bw ( KiB/s): min=16512, max=22864, per=20.76%, avg=19688.00, stdev=4491.54, samples=2 00:09:53.440 iops : min= 4128, max= 5716, avg=4922.00, stdev=1122.89, samples=2 00:09:53.440 lat (msec) : 2=0.14%, 4=2.37%, 10=25.18%, 20=63.54%, 50=8.76% 00:09:53.440 cpu : usr=3.78%, sys=4.88%, ctx=548, majf=0, minf=2 00:09:53.440 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:53.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.440 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:53.440 issued rwts: total=4608,5050,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.440 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:53.440 job1: (groupid=0, jobs=1): err= 0: pid=1788680: Fri Oct 11 11:46:55 2024 00:09:53.440 read: IOPS=9151, BW=35.7MiB/s (37.5MB/s)(36.0MiB/1007msec) 00:09:53.440 slat (nsec): min=916, max=18009k, avg=54187.51, stdev=479543.08 00:09:53.440 clat (usec): min=2758, max=53610, avg=7693.54, stdev=5002.41 00:09:53.440 lat (usec): min=2760, max=53636, avg=7747.73, stdev=5039.52 00:09:53.440 clat percentiles (usec): 00:09:53.440 | 1.00th=[ 3261], 5.00th=[ 4228], 10.00th=[ 4817], 20.00th=[ 5342], 00:09:53.440 | 30.00th=[ 5866], 40.00th=[ 6521], 50.00th=[ 6783], 60.00th=[ 7177], 00:09:53.440 | 70.00th=[ 7439], 80.00th=[ 8455], 90.00th=[ 9896], 95.00th=[13960], 00:09:53.440 | 99.00th=[34866], 99.50th=[43254], 99.90th=[46400], 99.95th=[46400], 00:09:53.440 | 99.99th=[53740] 00:09:53.440 write: IOPS=9308, 
BW=36.4MiB/s (38.1MB/s)(36.6MiB/1007msec); 0 zone resets 00:09:53.440 slat (nsec): min=1613, max=6978.7k, avg=43520.48, stdev=307074.15 00:09:53.440 clat (usec): min=665, max=18443, avg=6071.09, stdev=1905.59 00:09:53.440 lat (usec): min=705, max=18477, avg=6114.61, stdev=1922.53 00:09:53.440 clat percentiles (usec): 00:09:53.440 | 1.00th=[ 2311], 5.00th=[ 3425], 10.00th=[ 3720], 20.00th=[ 4752], 00:09:53.440 | 30.00th=[ 5407], 40.00th=[ 5604], 50.00th=[ 5932], 60.00th=[ 6325], 00:09:53.440 | 70.00th=[ 6718], 80.00th=[ 7046], 90.00th=[ 7701], 95.00th=[ 9241], 00:09:53.440 | 99.00th=[12649], 99.50th=[14615], 99.90th=[15139], 99.95th=[15270], 00:09:53.440 | 99.99th=[18482] 00:09:53.440 bw ( KiB/s): min=36864, max=37112, per=39.01%, avg=36988.00, stdev=175.36, samples=2 00:09:53.440 iops : min= 9216, max= 9278, avg=9247.00, stdev=43.84, samples=2 00:09:53.440 lat (usec) : 750=0.02% 00:09:53.440 lat (msec) : 2=0.33%, 4=8.08%, 10=84.94%, 20=5.49%, 50=1.13% 00:09:53.440 lat (msec) : 100=0.01% 00:09:53.440 cpu : usr=5.47%, sys=9.24%, ctx=676, majf=0, minf=2 00:09:53.440 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:53.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.440 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:53.440 issued rwts: total=9216,9374,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.440 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:53.440 job2: (groupid=0, jobs=1): err= 0: pid=1788699: Fri Oct 11 11:46:55 2024 00:09:53.440 read: IOPS=5592, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1007msec) 00:09:53.440 slat (nsec): min=986, max=11022k, avg=79333.44, stdev=552123.51 00:09:53.440 clat (usec): min=1679, max=38473, avg=9852.67, stdev=4844.77 00:09:53.440 lat (usec): min=1691, max=38476, avg=9932.00, stdev=4892.68 00:09:53.440 clat percentiles (usec): 00:09:53.440 | 1.00th=[ 2573], 5.00th=[ 4817], 10.00th=[ 6194], 20.00th=[ 7308], 00:09:53.440 | 30.00th=[ 7570], 40.00th=[ 7832], 50.00th=[ 8848], 60.00th=[ 9503], 00:09:53.440 | 70.00th=[10290], 80.00th=[11731], 90.00th=[13304], 95.00th=[19792], 00:09:53.440 | 99.00th=[31327], 99.50th=[33424], 99.90th=[36439], 99.95th=[38536], 00:09:53.440 | 99.99th=[38536] 00:09:53.440 write: IOPS=5823, BW=22.7MiB/s (23.9MB/s)(22.9MiB/1007msec); 0 zone resets 00:09:53.440 slat (nsec): min=1696, max=6286.9k, avg=86201.09, stdev=430166.80 00:09:53.440 clat (usec): min=1245, max=38477, avg=12279.53, stdev=8048.12 00:09:53.440 lat (usec): min=1535, max=38490, avg=12365.73, stdev=8101.66 00:09:53.440 clat percentiles (usec): 00:09:53.440 | 1.00th=[ 2409], 5.00th=[ 4686], 10.00th=[ 5211], 20.00th=[ 6521], 00:09:53.440 | 30.00th=[ 7373], 40.00th=[ 7767], 50.00th=[ 8029], 60.00th=[ 8848], 00:09:53.440 | 70.00th=[15795], 80.00th=[20579], 90.00th=[25297], 95.00th=[28967], 00:09:53.440 | 99.00th=[32900], 99.50th=[34341], 99.90th=[35914], 99.95th=[35914], 00:09:53.440 | 99.99th=[38536] 00:09:53.440 bw ( KiB/s): min=16384, max=29504, per=24.20%, avg=22944.00, stdev=9277.24, samples=2 00:09:53.440 iops : min= 4096, max= 7376, avg=5736.00, stdev=2319.31, samples=2 00:09:53.440 lat (msec) : 2=0.56%, 4=3.12%, 10=61.51%, 20=19.17%, 50=15.64% 00:09:53.440 cpu : usr=3.88%, sys=6.16%, ctx=540, majf=0, minf=1 00:09:53.440 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:53.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.440 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:09:53.440 issued rwts: total=5632,5864,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.440 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:53.440 job3: (groupid=0, jobs=1): err= 0: pid=1788708: Fri Oct 11 11:46:55 2024 00:09:53.440 read: IOPS=3141, BW=12.3MiB/s (12.9MB/s)(12.3MiB/1005msec) 00:09:53.440 slat (nsec): min=972, max=10932k, avg=151810.10, stdev=791774.97 00:09:53.440 clat (usec): min=1592, max=51171, avg=18417.08, stdev=6385.82 00:09:53.440 lat (usec): min=5716, max=51200, avg=18568.89, stdev=6448.42 00:09:53.440 clat percentiles (usec): 00:09:53.440 | 1.00th=[ 6259], 5.00th=[13304], 10.00th=[13566], 20.00th=[14353], 00:09:53.440 | 30.00th=[15139], 40.00th=[15401], 50.00th=[16712], 60.00th=[17957], 00:09:53.440 | 70.00th=[19530], 80.00th=[21103], 90.00th=[24511], 95.00th=[32113], 00:09:53.440 | 99.00th=[46924], 99.50th=[46924], 99.90th=[47449], 99.95th=[51119], 00:09:53.440 | 99.99th=[51119] 00:09:53.440 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:09:53.440 slat (nsec): min=1702, max=24752k, avg=140573.33, stdev=700735.12 00:09:53.440 clat (usec): min=9222, max=50896, avg=18445.74, stdev=7130.16 00:09:53.440 lat (usec): min=9232, max=50916, avg=18586.31, stdev=7176.29 00:09:53.440 clat percentiles (usec): 00:09:53.440 | 1.00th=[ 9372], 5.00th=[10814], 10.00th=[11994], 20.00th=[14353], 00:09:53.440 | 30.00th=[14877], 40.00th=[15533], 50.00th=[16712], 60.00th=[17695], 00:09:53.440 | 70.00th=[20055], 80.00th=[21103], 90.00th=[24511], 95.00th=[30278], 00:09:53.440 | 99.00th=[50070], 99.50th=[50594], 99.90th=[51119], 99.95th=[51119], 00:09:53.440 | 99.99th=[51119] 00:09:53.440 bw ( KiB/s): min=12288, max=16040, per=14.94%, avg=14164.00, stdev=2653.06, samples=2 00:09:53.440 iops : min= 3072, max= 4010, avg=3541.00, stdev=663.27, samples=2 00:09:53.440 lat (msec) : 2=0.01%, 10=1.71%, 20=69.77%, 50=27.95%, 100=0.56% 00:09:53.440 cpu : usr=3.09%, sys=3.78%, ctx=469, majf=0, minf=1 00:09:53.440 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:09:53.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.440 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:53.440 issued rwts: total=3157,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.440 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:53.440 00:09:53.440 Run status group 0 (all jobs): 00:09:53.440 READ: bw=87.7MiB/s (92.0MB/s), 12.3MiB/s-35.7MiB/s (12.9MB/s-37.5MB/s), io=88.3MiB (92.6MB), run=1005-1007msec 00:09:53.440 WRITE: bw=92.6MiB/s (97.1MB/s), 13.9MiB/s-36.4MiB/s (14.6MB/s-38.1MB/s), io=93.2MiB (97.8MB), run=1005-1007msec 00:09:53.440 00:09:53.440 Disk stats (read/write): 00:09:53.440 nvme0n1: ios=4145/4183, merge=0/0, ticks=24894/21486, in_queue=46380, util=84.27% 00:09:53.440 nvme0n2: ios=7733/7871, merge=0/0, ticks=49344/41779, in_queue=91123, util=88.58% 00:09:53.440 nvme0n3: ios=4145/4557, merge=0/0, ticks=39545/59339, in_queue=98884, util=95.25% 00:09:53.440 nvme0n4: ios=2611/2815, merge=0/0, ticks=16407/16534, in_queue=32941, util=95.41% 00:09:53.440 11:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:53.440 [global] 00:09:53.440 thread=1 00:09:53.440 invalidate=1 00:09:53.440 rw=randwrite 00:09:53.440 time_based=1 00:09:53.440 runtime=1 00:09:53.440 ioengine=libaio 00:09:53.440 direct=1 00:09:53.440 bs=4096 00:09:53.440 
iodepth=128 00:09:53.440 norandommap=0 00:09:53.440 numjobs=1 00:09:53.440 00:09:53.440 verify_dump=1 00:09:53.440 verify_backlog=512 00:09:53.440 verify_state_save=0 00:09:53.440 do_verify=1 00:09:53.440 verify=crc32c-intel 00:09:53.440 [job0] 00:09:53.440 filename=/dev/nvme0n1 00:09:53.440 [job1] 00:09:53.440 filename=/dev/nvme0n2 00:09:53.440 [job2] 00:09:53.440 filename=/dev/nvme0n3 00:09:53.440 [job3] 00:09:53.440 filename=/dev/nvme0n4 00:09:53.440 Could not set queue depth (nvme0n1) 00:09:53.440 Could not set queue depth (nvme0n2) 00:09:53.440 Could not set queue depth (nvme0n3) 00:09:53.440 Could not set queue depth (nvme0n4) 00:09:53.701 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:53.701 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:53.701 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:53.701 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:53.701 fio-3.35 00:09:53.701 Starting 4 threads 00:09:55.089 00:09:55.089 job0: (groupid=0, jobs=1): err= 0: pid=1789153: Fri Oct 11 11:46:57 2024 00:09:55.089 read: IOPS=6354, BW=24.8MiB/s (26.0MB/s)(24.9MiB/1002msec) 00:09:55.089 slat (nsec): min=918, max=44578k, avg=79983.93, stdev=738421.70 00:09:55.089 clat (usec): min=1115, max=51286, avg=10247.12, stdev=6676.61 00:09:55.089 lat (usec): min=1459, max=51288, avg=10327.10, stdev=6721.10 00:09:55.089 clat percentiles (usec): 00:09:55.089 | 1.00th=[ 3982], 5.00th=[ 5604], 10.00th=[ 6718], 20.00th=[ 7373], 00:09:55.090 | 30.00th=[ 7701], 40.00th=[ 7963], 50.00th=[ 8291], 60.00th=[ 8717], 00:09:55.090 | 70.00th=[ 9241], 80.00th=[10683], 90.00th=[15139], 95.00th=[23725], 00:09:55.090 | 99.00th=[49546], 99.50th=[50070], 99.90th=[51119], 99.95th=[51119], 00:09:55.090 | 99.99th=[51119] 00:09:55.090 write: IOPS=6642, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1002msec); 0 zone resets 00:09:55.090 slat (nsec): min=1565, max=45832k, avg=67387.97, stdev=677295.87 00:09:55.090 clat (usec): min=1482, max=61961, avg=9268.61, stdev=7679.33 00:09:55.090 lat (usec): min=1485, max=61987, avg=9336.00, stdev=7710.88 00:09:55.090 clat percentiles (usec): 00:09:55.090 | 1.00th=[ 3621], 5.00th=[ 4490], 10.00th=[ 5538], 20.00th=[ 6718], 00:09:55.090 | 30.00th=[ 7177], 40.00th=[ 7439], 50.00th=[ 7767], 60.00th=[ 8029], 00:09:55.090 | 70.00th=[ 8586], 80.00th=[ 9110], 90.00th=[11469], 95.00th=[16712], 00:09:55.090 | 99.00th=[56361], 99.50th=[59507], 99.90th=[60031], 99.95th=[60031], 00:09:55.090 | 99.99th=[62129] 00:09:55.090 bw ( KiB/s): min=20528, max=32720, per=25.24%, avg=26624.00, stdev=8621.05, samples=2 00:09:55.090 iops : min= 5132, max= 8180, avg=6656.00, stdev=2155.26, samples=2 00:09:55.090 lat (msec) : 2=0.22%, 4=1.31%, 10=78.94%, 20=14.26%, 50=4.08% 00:09:55.090 lat (msec) : 100=1.20% 00:09:55.090 cpu : usr=3.30%, sys=6.09%, ctx=586, majf=0, minf=1 00:09:55.090 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:09:55.090 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.090 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:55.090 issued rwts: total=6367,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:55.090 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:55.090 job1: (groupid=0, jobs=1): err= 0: pid=1789172: Fri Oct 11 11:46:57 2024 
00:09:55.090 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.1MiB/1007msec) 00:09:55.090 slat (nsec): min=956, max=25077k, avg=116281.30, stdev=848576.98 00:09:55.090 clat (usec): min=3573, max=66484, avg=14192.23, stdev=10974.04 00:09:55.090 lat (usec): min=3582, max=66513, avg=14308.52, stdev=11069.93 00:09:55.090 clat percentiles (usec): 00:09:55.090 | 1.00th=[ 4817], 5.00th=[ 6915], 10.00th=[ 7570], 20.00th=[ 8455], 00:09:55.090 | 30.00th=[ 9110], 40.00th=[ 9634], 50.00th=[10290], 60.00th=[11076], 00:09:55.090 | 70.00th=[11994], 80.00th=[14353], 90.00th=[30540], 95.00th=[43779], 00:09:55.090 | 99.00th=[55313], 99.50th=[55837], 99.90th=[55837], 99.95th=[58459], 00:09:55.090 | 99.99th=[66323] 00:09:55.090 write: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec); 0 zone resets 00:09:55.090 slat (nsec): min=1574, max=11821k, avg=84173.99, stdev=467968.93 00:09:55.090 clat (usec): min=2619, max=49943, avg=12107.73, stdev=6496.28 00:09:55.090 lat (usec): min=2628, max=49945, avg=12191.91, stdev=6535.65 00:09:55.090 clat percentiles (usec): 00:09:55.090 | 1.00th=[ 4015], 5.00th=[ 6325], 10.00th=[ 7308], 20.00th=[ 7898], 00:09:55.090 | 30.00th=[ 8094], 40.00th=[ 8356], 50.00th=[ 9241], 60.00th=[10683], 00:09:55.090 | 70.00th=[13435], 80.00th=[15008], 90.00th=[22152], 95.00th=[26084], 00:09:55.090 | 99.00th=[31065], 99.50th=[42730], 99.90th=[45351], 99.95th=[45351], 00:09:55.090 | 99.99th=[50070] 00:09:55.090 bw ( KiB/s): min=19576, max=20480, per=18.98%, avg=20028.00, stdev=639.22, samples=2 00:09:55.090 iops : min= 4894, max= 5120, avg=5007.00, stdev=159.81, samples=2 00:09:55.090 lat (msec) : 4=0.52%, 10=50.71%, 20=34.55%, 50=13.31%, 100=0.90% 00:09:55.090 cpu : usr=4.27%, sys=4.08%, ctx=568, majf=0, minf=1 00:09:55.090 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:55.090 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.090 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:55.090 issued rwts: total=4622,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:55.090 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:55.090 job2: (groupid=0, jobs=1): err= 0: pid=1789192: Fri Oct 11 11:46:57 2024 00:09:55.090 read: IOPS=6609, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1007msec) 00:09:55.090 slat (nsec): min=992, max=13067k, avg=76448.20, stdev=580467.53 00:09:55.090 clat (usec): min=2713, max=34696, avg=10275.99, stdev=3911.73 00:09:55.090 lat (usec): min=2735, max=34721, avg=10352.44, stdev=3951.32 00:09:55.090 clat percentiles (usec): 00:09:55.090 | 1.00th=[ 4015], 5.00th=[ 6128], 10.00th=[ 7111], 20.00th=[ 7898], 00:09:55.090 | 30.00th=[ 8160], 40.00th=[ 8717], 50.00th=[ 8848], 60.00th=[ 9896], 00:09:55.090 | 70.00th=[10945], 80.00th=[11863], 90.00th=[15401], 95.00th=[18482], 00:09:55.090 | 99.00th=[24773], 99.50th=[25297], 99.90th=[27132], 99.95th=[27132], 00:09:55.090 | 99.99th=[34866] 00:09:55.090 write: IOPS=7031, BW=27.5MiB/s (28.8MB/s)(27.7MiB/1007msec); 0 zone resets 00:09:55.090 slat (nsec): min=1609, max=8311.4k, avg=59844.63, stdev=428839.44 00:09:55.090 clat (usec): min=683, max=23197, avg=8361.29, stdev=3062.42 00:09:55.090 lat (usec): min=690, max=23207, avg=8421.13, stdev=3082.77 00:09:55.090 clat percentiles (usec): 00:09:55.090 | 1.00th=[ 1565], 5.00th=[ 3589], 10.00th=[ 4686], 20.00th=[ 5800], 00:09:55.090 | 30.00th=[ 6915], 40.00th=[ 7701], 50.00th=[ 8225], 60.00th=[ 8979], 00:09:55.090 | 70.00th=[ 9634], 80.00th=[10421], 90.00th=[11994], 95.00th=[14353], 00:09:55.090 | 99.00th=[17433], 
99.50th=[18744], 99.90th=[22152], 99.95th=[23200], 00:09:55.090 | 99.99th=[23200] 00:09:55.090 bw ( KiB/s): min=26896, max=28728, per=26.36%, avg=27812.00, stdev=1295.42, samples=2 00:09:55.090 iops : min= 6724, max= 7182, avg=6953.00, stdev=323.85, samples=2 00:09:55.090 lat (usec) : 750=0.05%, 1000=0.01% 00:09:55.090 lat (msec) : 2=0.67%, 4=3.06%, 10=63.50%, 20=30.71%, 50=2.00% 00:09:55.090 cpu : usr=4.77%, sys=8.45%, ctx=427, majf=0, minf=1 00:09:55.090 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:09:55.090 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.090 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:55.090 issued rwts: total=6656,7081,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:55.090 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:55.090 job3: (groupid=0, jobs=1): err= 0: pid=1789199: Fri Oct 11 11:46:57 2024 00:09:55.090 read: IOPS=7649, BW=29.9MiB/s (31.3MB/s)(30.0MiB/1004msec) 00:09:55.090 slat (nsec): min=964, max=8061.6k, avg=67855.94, stdev=492955.15 00:09:55.090 clat (usec): min=3342, max=16806, avg=8900.45, stdev=2128.19 00:09:55.090 lat (usec): min=3745, max=19819, avg=8968.30, stdev=2157.60 00:09:55.090 clat percentiles (usec): 00:09:55.090 | 1.00th=[ 4113], 5.00th=[ 6259], 10.00th=[ 6783], 20.00th=[ 7373], 00:09:55.090 | 30.00th=[ 7701], 40.00th=[ 7963], 50.00th=[ 8455], 60.00th=[ 8848], 00:09:55.090 | 70.00th=[ 9634], 80.00th=[10552], 90.00th=[11731], 95.00th=[12911], 00:09:55.090 | 99.00th=[15664], 99.50th=[16057], 99.90th=[16712], 99.95th=[16712], 00:09:55.090 | 99.99th=[16909] 00:09:55.090 write: IOPS=7672, BW=30.0MiB/s (31.4MB/s)(30.1MiB/1004msec); 0 zone resets 00:09:55.090 slat (nsec): min=1547, max=7417.7k, avg=56991.82, stdev=374253.22 00:09:55.090 clat (usec): min=1106, max=21048, avg=7661.33, stdev=2301.02 00:09:55.090 lat (usec): min=1115, max=21057, avg=7718.33, stdev=2318.80 00:09:55.090 clat percentiles (usec): 00:09:55.090 | 1.00th=[ 3097], 5.00th=[ 4047], 10.00th=[ 4817], 20.00th=[ 5997], 00:09:55.090 | 30.00th=[ 7308], 40.00th=[ 7570], 50.00th=[ 7767], 60.00th=[ 7963], 00:09:55.090 | 70.00th=[ 8291], 80.00th=[ 8848], 90.00th=[ 9372], 95.00th=[10290], 00:09:55.090 | 99.00th=[18220], 99.50th=[20055], 99.90th=[20579], 99.95th=[20579], 00:09:55.090 | 99.99th=[21103] 00:09:55.090 bw ( KiB/s): min=28776, max=32664, per=29.12%, avg=30720.00, stdev=2749.23, samples=2 00:09:55.090 iops : min= 7194, max= 8166, avg=7680.00, stdev=687.31, samples=2 00:09:55.090 lat (msec) : 2=0.16%, 4=2.78%, 10=81.31%, 20=15.50%, 50=0.25% 00:09:55.090 cpu : usr=4.79%, sys=8.37%, ctx=684, majf=0, minf=2 00:09:55.090 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:55.090 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.090 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:55.090 issued rwts: total=7680,7703,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:55.090 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:55.090 00:09:55.090 Run status group 0 (all jobs): 00:09:55.090 READ: bw=98.2MiB/s (103MB/s), 17.9MiB/s-29.9MiB/s (18.8MB/s-31.3MB/s), io=98.9MiB (104MB), run=1002-1007msec 00:09:55.090 WRITE: bw=103MiB/s (108MB/s), 19.9MiB/s-30.0MiB/s (20.8MB/s-31.4MB/s), io=104MiB (109MB), run=1002-1007msec 00:09:55.090 00:09:55.090 Disk stats (read/write): 00:09:55.090 nvme0n1: ios=5140/5304, merge=0/0, ticks=31661/26496, in_queue=58157, util=84.07% 00:09:55.090 nvme0n2: 
ios=3628/4023, merge=0/0, ticks=26981/28844, in_queue=55825, util=89.81% 00:09:55.090 nvme0n3: ios=5818/6144, merge=0/0, ticks=42296/35237, in_queue=77533, util=92.72% 00:09:55.090 nvme0n4: ios=6201/6655, merge=0/0, ticks=52351/48863, in_queue=101214, util=96.90% 00:09:55.090 11:46:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:55.090 11:46:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1789447 00:09:55.090 11:46:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:55.090 11:46:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:55.090 [global] 00:09:55.090 thread=1 00:09:55.090 invalidate=1 00:09:55.090 rw=read 00:09:55.090 time_based=1 00:09:55.090 runtime=10 00:09:55.090 ioengine=libaio 00:09:55.090 direct=1 00:09:55.090 bs=4096 00:09:55.090 iodepth=1 00:09:55.090 norandommap=1 00:09:55.090 numjobs=1 00:09:55.090 00:09:55.090 [job0] 00:09:55.090 filename=/dev/nvme0n1 00:09:55.090 [job1] 00:09:55.090 filename=/dev/nvme0n2 00:09:55.090 [job2] 00:09:55.090 filename=/dev/nvme0n3 00:09:55.090 [job3] 00:09:55.090 filename=/dev/nvme0n4 00:09:55.090 Could not set queue depth (nvme0n1) 00:09:55.090 Could not set queue depth (nvme0n2) 00:09:55.090 Could not set queue depth (nvme0n3) 00:09:55.090 Could not set queue depth (nvme0n4) 00:09:55.352 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:55.352 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:55.352 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:55.352 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:55.352 fio-3.35 00:09:55.352 Starting 4 threads 00:09:57.900 11:47:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:58.160 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=10559488, buflen=4096 00:09:58.160 fio: pid=1789699, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:58.160 11:47:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:58.421 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=4538368, buflen=4096 00:09:58.421 fio: pid=1789691, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:58.421 11:47:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:58.421 11:47:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:58.682 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=11706368, buflen=4096 00:09:58.682 fio: pid=1789657, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:58.682 11:47:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:58.682 11:47:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:58.682 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=7127040, buflen=4096 00:09:58.682 fio: pid=1789671, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:58.682 11:47:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:58.682 11:47:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:58.682 00:09:58.682 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1789657: Fri Oct 11 11:47:01 2024 00:09:58.682 read: IOPS=963, BW=3854KiB/s (3947kB/s)(11.2MiB/2966msec) 00:09:58.682 slat (usec): min=6, max=27468, avg=45.08, stdev=610.13 00:09:58.682 clat (usec): min=207, max=2616, avg=978.58, stdev=109.57 00:09:58.682 lat (usec): min=229, max=28464, avg=1020.36, stdev=594.94 00:09:58.682 clat percentiles (usec): 00:09:58.682 | 1.00th=[ 635], 5.00th=[ 791], 10.00th=[ 857], 20.00th=[ 922], 00:09:58.682 | 30.00th=[ 955], 40.00th=[ 979], 50.00th=[ 996], 60.00th=[ 1012], 00:09:58.682 | 70.00th=[ 1029], 80.00th=[ 1045], 90.00th=[ 1074], 95.00th=[ 1106], 00:09:58.682 | 99.00th=[ 1172], 99.50th=[ 1205], 99.90th=[ 1696], 99.95th=[ 1893], 00:09:58.682 | 99.99th=[ 2606] 00:09:58.682 bw ( KiB/s): min= 3856, max= 4120, per=37.47%, avg=3940.80, stdev=103.63, samples=5 00:09:58.682 iops : min= 964, max= 1030, avg=985.20, stdev=25.91, samples=5 00:09:58.682 lat (usec) : 250=0.07%, 500=0.24%, 750=2.73%, 1000=51.56% 00:09:58.682 lat (msec) : 2=45.33%, 4=0.03% 00:09:58.682 cpu : usr=1.59%, sys=4.05%, ctx=2863, majf=0, minf=1 00:09:58.682 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:58.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.682 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.682 issued rwts: total=2859,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:58.682 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:58.682 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1789671: Fri Oct 11 11:47:01 2024 00:09:58.682 read: IOPS=552, BW=2209KiB/s (2262kB/s)(6960KiB/3151msec) 00:09:58.682 slat (usec): min=5, max=16193, avg=57.31, stdev=674.32 00:09:58.682 clat (usec): min=220, max=41886, avg=1733.55, stdev=5562.71 00:09:58.682 lat (usec): min=230, max=41913, avg=1781.58, stdev=5586.72 00:09:58.682 clat percentiles (usec): 00:09:58.682 | 1.00th=[ 289], 5.00th=[ 529], 10.00th=[ 693], 20.00th=[ 783], 00:09:58.682 | 30.00th=[ 881], 40.00th=[ 955], 50.00th=[ 996], 60.00th=[ 1029], 00:09:58.682 | 70.00th=[ 1057], 80.00th=[ 1090], 90.00th=[ 1139], 95.00th=[ 1188], 00:09:58.682 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:09:58.682 | 99.99th=[41681] 00:09:58.682 bw ( KiB/s): min= 368, max= 3848, per=21.19%, avg=2228.50, stdev=1528.45, samples=6 00:09:58.682 iops : min= 92, max= 962, avg=557.00, stdev=382.09, samples=6 00:09:58.682 lat (usec) : 250=0.23%, 500=4.37%, 750=9.94%, 1000=37.22% 00:09:58.682 lat (msec) : 2=45.95%, 4=0.17%, 10=0.06%, 50=2.01% 00:09:58.682 cpu : usr=0.67%, sys=2.35%, ctx=1746, majf=0, minf=2 00:09:58.682 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:09:58.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.682 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.682 issued rwts: total=1741,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:58.682 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:58.682 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1789691: Fri Oct 11 11:47:01 2024 00:09:58.682 read: IOPS=399, BW=1595KiB/s (1633kB/s)(4432KiB/2779msec) 00:09:58.682 slat (usec): min=6, max=16344, avg=49.15, stdev=621.50 00:09:58.682 clat (usec): min=182, max=42084, avg=2433.72, stdev=7878.12 00:09:58.682 lat (usec): min=209, max=42111, avg=2482.89, stdev=7897.96 00:09:58.682 clat percentiles (usec): 00:09:58.682 | 1.00th=[ 343], 5.00th=[ 519], 10.00th=[ 603], 20.00th=[ 693], 00:09:58.682 | 30.00th=[ 750], 40.00th=[ 816], 50.00th=[ 873], 60.00th=[ 930], 00:09:58.682 | 70.00th=[ 988], 80.00th=[ 1057], 90.00th=[ 1123], 95.00th=[ 1237], 00:09:58.682 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:58.682 | 99.99th=[42206] 00:09:58.682 bw ( KiB/s): min= 96, max= 2848, per=14.19%, avg=1492.80, stdev=1189.67, samples=5 00:09:58.682 iops : min= 24, max= 712, avg=373.20, stdev=297.42, samples=5 00:09:58.682 lat (usec) : 250=0.54%, 500=3.70%, 750=25.43%, 1000=41.75% 00:09:58.682 lat (msec) : 2=24.62%, 50=3.88% 00:09:58.682 cpu : usr=0.68%, sys=1.30%, ctx=1112, majf=0, minf=2 00:09:58.682 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:58.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.682 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.682 issued rwts: total=1109,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:58.682 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:58.682 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1789699: Fri Oct 11 11:47:01 2024 00:09:58.682 read: IOPS=989, BW=3957KiB/s (4052kB/s)(10.1MiB/2606msec) 00:09:58.682 slat (nsec): min=6394, max=57477, avg=27545.76, stdev=4041.02 00:09:58.683 clat (usec): min=261, max=2349, avg=966.64, stdev=146.71 00:09:58.683 lat (usec): min=288, max=2376, avg=994.18, stdev=147.19 00:09:58.683 clat percentiles (usec): 00:09:58.683 | 1.00th=[ 457], 5.00th=[ 668], 10.00th=[ 807], 20.00th=[ 898], 00:09:58.683 | 30.00th=[ 938], 40.00th=[ 971], 50.00th=[ 996], 60.00th=[ 1012], 00:09:58.683 | 70.00th=[ 1037], 80.00th=[ 1057], 90.00th=[ 1106], 95.00th=[ 1139], 00:09:58.683 | 99.00th=[ 1205], 99.50th=[ 1254], 99.90th=[ 1827], 99.95th=[ 2212], 00:09:58.683 | 99.99th=[ 2343] 00:09:58.683 bw ( KiB/s): min= 3840, max= 4288, per=38.11%, avg=4008.00, stdev=174.45, samples=5 00:09:58.683 iops : min= 960, max= 1072, avg=1002.00, stdev=43.61, samples=5 00:09:58.683 lat (usec) : 500=1.78%, 750=5.31%, 1000=46.45% 00:09:58.683 lat (msec) : 2=46.34%, 4=0.08% 00:09:58.683 cpu : usr=1.77%, sys=4.11%, ctx=2579, majf=0, minf=2 00:09:58.683 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:58.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.683 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.683 issued rwts: total=2579,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:58.683 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:58.683 00:09:58.683 Run status group 0 (all jobs): 00:09:58.683 READ: 
bw=10.3MiB/s (10.8MB/s), 1595KiB/s-3957KiB/s (1633kB/s-4052kB/s), io=32.4MiB (33.9MB), run=2606-3151msec 00:09:58.683 00:09:58.683 Disk stats (read/write): 00:09:58.683 nvme0n1: ios=2760/0, merge=0/0, ticks=2494/0, in_queue=2494, util=93.36% 00:09:58.683 nvme0n2: ios=1722/0, merge=0/0, ticks=2802/0, in_queue=2802, util=94.80% 00:09:58.683 nvme0n3: ios=960/0, merge=0/0, ticks=2523/0, in_queue=2523, util=96.03% 00:09:58.683 nvme0n4: ios=2578/0, merge=0/0, ticks=2343/0, in_queue=2343, util=96.42% 00:09:58.944 11:47:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:58.944 11:47:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:59.205 11:47:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:59.205 11:47:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:59.205 11:47:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:59.205 11:47:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:59.465 11:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:59.465 11:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:59.726 11:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:59.726 11:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1789447 00:09:59.726 11:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:59.726 11:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:59.726 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.726 11:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:59.726 11:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:09:59.726 11:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:59.726 11:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:59.726 11:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:59.726 11:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:59.726 11:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:09:59.726 11:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:59.726 11:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:59.726 nvmf hotplug test: fio failed as 
expected 00:09:59.726 11:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:59.986 11:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:59.986 11:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:59.986 11:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:59.986 11:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:59.986 11:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:59.986 11:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:59.986 11:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:59.986 11:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:59.986 11:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:59.986 11:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:59.986 11:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:59.986 rmmod nvme_tcp 00:09:59.986 rmmod nvme_fabrics 00:09:59.986 rmmod nvme_keyring 00:09:59.986 11:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:59.986 11:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:59.986 11:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:59.986 11:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 1785921 ']' 00:09:59.986 11:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 1785921 00:09:59.986 11:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 1785921 ']' 00:09:59.986 11:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 1785921 00:09:59.986 11:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:09:59.986 11:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:59.986 11:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1785921 00:10:00.247 11:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:00.247 11:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:00.247 11:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1785921' 00:10:00.247 killing process with pid 1785921 00:10:00.247 11:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 1785921 00:10:00.247 11:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 1785921 00:10:00.247 11:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:00.247 11:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 
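The fio job the wrapper printed earlier in this test (4 KiB sequential reads, libaio, queue depth 1, 10-second time-based run against the four exported namespaces) can be reproduced standalone. The command below is an illustrative reconstruction from those printed job-file parameters, not part of fio-wrapper itself; the /dev/nvme0nX paths are the ones shown in the trace:

    # global options first so they apply to all four jobs
    fio --thread --rw=read --bs=4096 --iodepth=1 --numjobs=1 \
        --ioengine=libaio --direct=1 --invalidate=1 --norandommap \
        --time_based --runtime=10 \
        --name=job0 --filename=/dev/nvme0n1 \
        --name=job1 --filename=/dev/nvme0n2 \
        --name=job2 --filename=/dev/nvme0n3 \
        --name=job3 --filename=/dev/nvme0n4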
00:10:00.247 11:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:00.247 11:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:00.247 11:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:10:00.247 11:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:00.247 11:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:10:00.247 11:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:00.247 11:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:00.247 11:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:00.247 11:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:00.247 11:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:02.795 11:47:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:02.795 00:10:02.795 real 0m29.598s 00:10:02.795 user 2m37.017s 00:10:02.795 sys 0m9.916s 00:10:02.795 11:47:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:02.795 11:47:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:02.795 ************************************ 00:10:02.795 END TEST nvmf_fio_target 00:10:02.795 ************************************ 00:10:02.795 11:47:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:02.795 11:47:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:02.795 11:47:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:02.795 11:47:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:02.795 ************************************ 00:10:02.795 START TEST nvmf_bdevio 00:10:02.795 ************************************ 00:10:02.795 11:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:02.795 * Looking for test storage... 
00:10:02.795 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:02.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.795 --rc genhtml_branch_coverage=1 00:10:02.795 --rc genhtml_function_coverage=1 00:10:02.795 --rc genhtml_legend=1 00:10:02.795 --rc geninfo_all_blocks=1 00:10:02.795 --rc geninfo_unexecuted_blocks=1 00:10:02.795 00:10:02.795 ' 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:02.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.795 --rc genhtml_branch_coverage=1 00:10:02.795 --rc genhtml_function_coverage=1 00:10:02.795 --rc genhtml_legend=1 00:10:02.795 --rc geninfo_all_blocks=1 00:10:02.795 --rc geninfo_unexecuted_blocks=1 00:10:02.795 00:10:02.795 ' 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:02.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.795 --rc genhtml_branch_coverage=1 00:10:02.795 --rc genhtml_function_coverage=1 00:10:02.795 --rc genhtml_legend=1 00:10:02.795 --rc geninfo_all_blocks=1 00:10:02.795 --rc geninfo_unexecuted_blocks=1 00:10:02.795 00:10:02.795 ' 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:02.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.795 --rc genhtml_branch_coverage=1 00:10:02.795 --rc genhtml_function_coverage=1 00:10:02.795 --rc genhtml_legend=1 00:10:02.795 --rc geninfo_all_blocks=1 00:10:02.795 --rc geninfo_unexecuted_blocks=1 00:10:02.795 00:10:02.795 ' 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:02.795 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.796 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.796 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.796 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:02.796 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.796 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:02.796 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:02.796 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:02.796 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:02.796 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:02.796 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:02.796 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:02.796 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:02.796 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:02.796 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:02.796 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:02.796 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:02.796 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:02.796 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:10:02.796 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:02.796 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:02.796 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:02.796 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:02.796 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:02.796 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:02.796 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:02.796 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:02.796 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:02.796 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:02.796 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:02.796 11:47:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:10.940 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:10.940 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:10.940 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:10.940 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:10.940 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:10.940 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:10.940 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:10.940 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:10.940 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:10.940 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:10.940 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:10.940 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:10.940 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:10.940 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:10.940 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:10.940 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:10.940 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:10.940 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:10.941 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:10.941 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:10.941 11:47:12 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:10.941 Found net devices under 0000:31:00.0: cvl_0_0 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:10.941 Found net devices under 0000:31:00.1: cvl_0_1 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:10.941 
11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:10.941 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:10.941 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.678 ms 00:10:10.941 00:10:10.941 --- 10.0.0.2 ping statistics --- 00:10:10.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:10.941 rtt min/avg/max/mdev = 0.678/0.678/0.678/0.000 ms 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:10.941 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:10.941 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:10:10.941 00:10:10.941 --- 10.0.0.1 ping statistics --- 00:10:10.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:10.941 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=1795067 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 1795067 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 1795067 ']' 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:10.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:10.941 11:47:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:10.941 [2024-10-11 11:47:12.940819] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:10:10.941 [2024-10-11 11:47:12.940884] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:10.941 [2024-10-11 11:47:13.030866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:10.942 [2024-10-11 11:47:13.081377] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:10.942 [2024-10-11 11:47:13.081420] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:10.942 [2024-10-11 11:47:13.081429] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:10.942 [2024-10-11 11:47:13.081437] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:10.942 [2024-10-11 11:47:13.081443] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:10.942 [2024-10-11 11:47:13.083843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:10.942 [2024-10-11 11:47:13.084003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:10.942 [2024-10-11 11:47:13.084114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:10.942 [2024-10-11 11:47:13.084116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:11.203 11:47:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:11.203 11:47:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:10:11.203 11:47:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:11.203 11:47:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:11.203 11:47:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:11.203 11:47:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:11.203 11:47:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:11.203 11:47:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.203 11:47:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:11.203 [2024-10-11 11:47:13.810228] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:11.203 11:47:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.203 11:47:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:11.203 11:47:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.203 11:47:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:11.203 Malloc0 00:10:11.203 11:47:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.203 11:47:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:11.203 11:47:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.203 11:47:13 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:11.203 11:47:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.203 11:47:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:11.203 11:47:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.203 11:47:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:11.203 11:47:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.203 11:47:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:11.203 11:47:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.203 11:47:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:11.203 [2024-10-11 11:47:13.892655] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:11.203 11:47:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.203 11:47:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:11.203 11:47:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:11.203 11:47:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:10:11.203 11:47:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:10:11.203 11:47:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:10:11.203 11:47:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:10:11.203 { 00:10:11.203 "params": { 00:10:11.203 "name": "Nvme$subsystem", 00:10:11.203 "trtype": "$TEST_TRANSPORT", 00:10:11.203 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:11.203 "adrfam": "ipv4", 00:10:11.203 "trsvcid": "$NVMF_PORT", 00:10:11.203 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:11.203 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:11.203 "hdgst": ${hdgst:-false}, 00:10:11.203 "ddgst": ${ddgst:-false} 00:10:11.203 }, 00:10:11.203 "method": "bdev_nvme_attach_controller" 00:10:11.203 } 00:10:11.203 EOF 00:10:11.203 )") 00:10:11.203 11:47:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:10:11.465 11:47:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 00:10:11.465 11:47:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:10:11.465 11:47:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:10:11.465 "params": { 00:10:11.465 "name": "Nvme1", 00:10:11.465 "trtype": "tcp", 00:10:11.465 "traddr": "10.0.0.2", 00:10:11.465 "adrfam": "ipv4", 00:10:11.465 "trsvcid": "4420", 00:10:11.465 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:11.465 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:11.465 "hdgst": false, 00:10:11.465 "ddgst": false 00:10:11.465 }, 00:10:11.465 "method": "bdev_nvme_attach_controller" 00:10:11.465 }' 00:10:11.465 [2024-10-11 11:47:13.952108] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:10:11.465 [2024-10-11 11:47:13.952171] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1795211 ] 00:10:11.465 [2024-10-11 11:47:14.034811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:11.465 [2024-10-11 11:47:14.092173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:11.465 [2024-10-11 11:47:14.092225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.465 [2024-10-11 11:47:14.092225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:11.726 I/O targets: 00:10:11.726 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:11.726 00:10:11.726 00:10:11.726 CUnit - A unit testing framework for C - Version 2.1-3 00:10:11.726 http://cunit.sourceforge.net/ 00:10:11.726 00:10:11.726 00:10:11.726 Suite: bdevio tests on: Nvme1n1 00:10:11.987 Test: blockdev write read block ...passed 00:10:11.987 Test: blockdev write zeroes read block ...passed 00:10:11.987 Test: blockdev write zeroes read no split ...passed 00:10:11.987 Test: blockdev write zeroes read split ...passed 00:10:11.987 Test: blockdev write zeroes read split partial ...passed 00:10:11.987 Test: blockdev reset ...[2024-10-11 11:47:14.508230] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:11.987 [2024-10-11 11:47:14.508317] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f0440 (9): Bad file descriptor 00:10:11.987 [2024-10-11 11:47:14.603263] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:11.987 passed 00:10:11.987 Test: blockdev write read 8 blocks ...passed 00:10:11.987 Test: blockdev write read size > 128k ...passed 00:10:11.987 Test: blockdev write read invalid size ...passed 00:10:11.987 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:11.987 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:11.987 Test: blockdev write read max offset ...passed 00:10:12.248 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:12.248 Test: blockdev writev readv 8 blocks ...passed 00:10:12.248 Test: blockdev writev readv 30 x 1block ...passed 00:10:12.248 Test: blockdev writev readv block ...passed 00:10:12.248 Test: blockdev writev readv size > 128k ...passed 00:10:12.248 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:12.248 Test: blockdev comparev and writev ...[2024-10-11 11:47:14.786605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:12.248 [2024-10-11 11:47:14.786640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:12.248 [2024-10-11 11:47:14.786657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:12.248 [2024-10-11 11:47:14.786666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:12.248 [2024-10-11 11:47:14.787113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:12.248 [2024-10-11 11:47:14.787125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:12.248 [2024-10-11 11:47:14.787139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:12.248 [2024-10-11 11:47:14.787148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:12.248 [2024-10-11 11:47:14.787648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:12.248 [2024-10-11 11:47:14.787659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:12.248 [2024-10-11 11:47:14.787673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:12.248 [2024-10-11 11:47:14.787685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:12.248 [2024-10-11 11:47:14.788134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:12.248 [2024-10-11 11:47:14.788146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:12.248 [2024-10-11 11:47:14.788160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:12.248 [2024-10-11 11:47:14.788168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:12.248 passed 00:10:12.248 Test: blockdev nvme passthru rw ...passed 00:10:12.248 Test: blockdev nvme passthru vendor specific ...[2024-10-11 11:47:14.872942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:12.248 [2024-10-11 11:47:14.872957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:12.248 [2024-10-11 11:47:14.873277] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:12.248 [2024-10-11 11:47:14.873289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:12.248 [2024-10-11 11:47:14.873627] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:12.248 [2024-10-11 11:47:14.873637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:12.248 [2024-10-11 11:47:14.873965] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:12.248 [2024-10-11 11:47:14.873975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:12.248 passed 00:10:12.248 Test: blockdev nvme admin passthru ...passed 00:10:12.248 Test: blockdev copy ...passed 00:10:12.248 00:10:12.248 Run Summary: Type Total Ran Passed Failed Inactive 00:10:12.248 suites 1 1 n/a 0 0 00:10:12.248 tests 23 23 23 0 0 00:10:12.248 asserts 152 152 152 0 n/a 00:10:12.248 00:10:12.248 Elapsed time = 1.101 seconds 00:10:12.509 11:47:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:12.509 11:47:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.509 11:47:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:12.509 11:47:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.509 11:47:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:12.509 11:47:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:12.509 11:47:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:12.509 11:47:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:12.509 11:47:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:12.509 11:47:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:12.509 11:47:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:12.509 11:47:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:12.509 rmmod nvme_tcp 00:10:12.509 rmmod nvme_fabrics 00:10:12.509 rmmod nvme_keyring 00:10:12.509 11:47:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:12.509 11:47:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:12.509 11:47:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
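The bdevio pass above rests on a short RPC sequence that is spread through the trace; collecting the calls bdevio.sh issued (through its rpc_cmd helper) into one place gives the following sketch, with the rpc.py path, NQN, serial number and listener address exactly as they appear in the trace:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    # 64 MiB malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE in bdevio.sh)
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # the bdevio binary then attaches over NVMe/TCP using the JSON printed by gen_nvmf_target_json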
00:10:12.509 11:47:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 1795067 ']' 00:10:12.509 11:47:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 1795067 00:10:12.509 11:47:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 1795067 ']' 00:10:12.509 11:47:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 1795067 00:10:12.509 11:47:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:10:12.509 11:47:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:12.509 11:47:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1795067 00:10:12.509 11:47:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:10:12.509 11:47:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:10:12.509 11:47:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1795067' 00:10:12.509 killing process with pid 1795067 00:10:12.509 11:47:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 1795067 00:10:12.509 11:47:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 1795067 00:10:12.769 11:47:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:12.769 11:47:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:12.769 11:47:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:12.769 11:47:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:12.769 11:47:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:10:12.769 11:47:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:12.770 11:47:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:10:12.770 11:47:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:12.770 11:47:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:12.770 11:47:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:12.770 11:47:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:12.770 11:47:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:15.316 00:10:15.316 real 0m12.435s 00:10:15.316 user 0m13.513s 00:10:15.316 sys 0m6.324s 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:15.316 ************************************ 00:10:15.316 END TEST nvmf_bdevio 00:10:15.316 ************************************ 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:15.316 00:10:15.316 real 5m7.437s 00:10:15.316 user 11m49.359s 00:10:15.316 sys 1m52.258s 
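The real/user/sys figures and the END TEST banner above come from the harness's run_test wrapper in autotest_common.sh, which times each suite and reports it under its dotted name (here nvmf_tcp.nvmf_target_core.nvmf_bdevio). Purely as an illustration of that pattern, not SPDK's actual implementation, such a wrapper reduces to:

  # Illustrative sketch only; the real run_test lives in autotest_common.sh.
  run_test_sketch() {
      local name=$1; shift
      echo "************ START TEST $name ************"
      local start=$SECONDS rc=0
      "$@" || rc=$?
      echo "************ END TEST $name ************"
      echo "elapsed: $((SECONDS - start))s, exit: $rc"
      return $rc
  }
  # e.g. run_test_sketch nvmf_bdevio ./test/nvmf/target/bdevio.sh --transport=tcp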
00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:15.316 ************************************ 00:10:15.316 END TEST nvmf_target_core 00:10:15.316 ************************************ 00:10:15.316 11:47:17 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:15.316 11:47:17 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:15.316 11:47:17 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:15.316 11:47:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:15.316 ************************************ 00:10:15.316 START TEST nvmf_target_extra 00:10:15.316 ************************************ 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:15.316 * Looking for test storage... 00:10:15.316 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:15.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.316 --rc genhtml_branch_coverage=1 00:10:15.316 --rc genhtml_function_coverage=1 00:10:15.316 --rc genhtml_legend=1 00:10:15.316 --rc geninfo_all_blocks=1 00:10:15.316 --rc geninfo_unexecuted_blocks=1 00:10:15.316 00:10:15.316 ' 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:15.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.316 --rc genhtml_branch_coverage=1 00:10:15.316 --rc genhtml_function_coverage=1 00:10:15.316 --rc genhtml_legend=1 00:10:15.316 --rc geninfo_all_blocks=1 00:10:15.316 --rc geninfo_unexecuted_blocks=1 00:10:15.316 00:10:15.316 ' 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:15.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.316 --rc genhtml_branch_coverage=1 00:10:15.316 --rc genhtml_function_coverage=1 00:10:15.316 --rc genhtml_legend=1 00:10:15.316 --rc geninfo_all_blocks=1 00:10:15.316 --rc geninfo_unexecuted_blocks=1 00:10:15.316 00:10:15.316 ' 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:15.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.316 --rc genhtml_branch_coverage=1 00:10:15.316 --rc genhtml_function_coverage=1 00:10:15.316 --rc genhtml_legend=1 00:10:15.316 --rc geninfo_all_blocks=1 00:10:15.316 --rc geninfo_unexecuted_blocks=1 00:10:15.316 00:10:15.316 ' 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
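The arithmetic traced above is scripts/common.sh deciding whether the installed lcov predates version 2 (lt 1.15 2): cmp_versions splits both dotted versions on '.' and compares the fields numerically, left to right. As a standalone illustration of the idea (not the script's exact code; missing fields are treated as 0 here):

  # Return success if dotted version $1 is strictly less than $2, comparing
  # numeric fields left to right. Illustrative sketch only.
  version_lt() {
      local IFS=.
      local -a a=($1) b=($2)
      local i x y
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          x=${a[i]:-0} y=${b[i]:-0}
          (( 10#$x < 10#$y )) && return 0
          (( 10#$x > 10#$y )) && return 1
      done
      return 1   # equal is not less-than
  }
  version_lt 1.15 2 && echo 'lcov < 2: keep the legacy --rc lcov_* options'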
00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.316 11:47:17 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.317 11:47:17 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.317 11:47:17 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:15.317 11:47:17 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.317 11:47:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:15.317 11:47:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:15.317 11:47:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:15.317 11:47:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:15.317 11:47:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:15.317 11:47:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:15.317 11:47:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:15.317 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:15.317 11:47:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:15.317 11:47:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:15.317 11:47:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:15.317 11:47:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:15.317 11:47:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:15.317 11:47:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:15.317 11:47:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:15.317 11:47:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:15.317 11:47:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:15.317 11:47:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:15.317 ************************************ 00:10:15.317 START TEST nvmf_example 00:10:15.317 ************************************ 00:10:15.317 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:15.317 * Looking for test storage... 
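The "line 33: [: : integer expression expected" message from nvmf/common.sh above is a shell warning, not a test failure: that line ends up running '[' '' -eq 1 ']', a numeric test against a flag that is unset in this run, so test complains and the branch is simply not taken. The usual way to make such a check quiet is to give the variable a numeric default; a minimal sketch (SOME_FLAG is a placeholder name, not the actual variable):

  # '[ "" -eq 1 ]' prints "integer expression expected"; defaulting avoids it.
  SOME_FLAG=${SOME_FLAG:-0}          # placeholder flag name
  if [ "$SOME_FLAG" -eq 1 ]; then
      echo "flag enabled"
  fi
  # Equivalent inline form:
  if [ "${SOME_FLAG:-0}" -eq 1 ]; then echo "flag enabled"; fi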
00:10:15.317 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:15.317 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:15.317 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:10:15.317 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:15.317 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:15.317 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:15.317 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:15.317 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:15.317 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:15.317 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:15.317 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:15.317 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:15.317 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:15.317 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:15.317 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:15.317 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:15.317 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:15.317 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:15.317 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:15.317 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:15.317 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:15.317 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:15.317 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:15.317 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:15.317 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:15.317 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:15.317 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:15.317 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:15.317 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:15.317 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:15.317 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:15.317 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:15.317 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:15.317 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:15.317 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:15.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.317 --rc genhtml_branch_coverage=1 00:10:15.317 --rc genhtml_function_coverage=1 00:10:15.317 --rc genhtml_legend=1 00:10:15.317 --rc geninfo_all_blocks=1 00:10:15.317 --rc geninfo_unexecuted_blocks=1 00:10:15.317 00:10:15.317 ' 00:10:15.578 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:15.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.578 --rc genhtml_branch_coverage=1 00:10:15.578 --rc genhtml_function_coverage=1 00:10:15.578 --rc genhtml_legend=1 00:10:15.578 --rc geninfo_all_blocks=1 00:10:15.578 --rc geninfo_unexecuted_blocks=1 00:10:15.578 00:10:15.578 ' 00:10:15.578 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:15.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.578 --rc genhtml_branch_coverage=1 00:10:15.578 --rc genhtml_function_coverage=1 00:10:15.578 --rc genhtml_legend=1 00:10:15.578 --rc geninfo_all_blocks=1 00:10:15.578 --rc geninfo_unexecuted_blocks=1 00:10:15.578 00:10:15.578 ' 00:10:15.578 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:15.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.578 --rc genhtml_branch_coverage=1 00:10:15.578 --rc genhtml_function_coverage=1 00:10:15.578 --rc genhtml_legend=1 00:10:15.578 --rc geninfo_all_blocks=1 00:10:15.578 --rc geninfo_unexecuted_blocks=1 00:10:15.578 00:10:15.578 ' 00:10:15.578 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:15.578 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:15.578 11:47:18 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:15.578 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:15.578 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:15.579 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:15.579 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:15.579 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:15.579 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:15.579 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:15.579 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:15.579 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:15.579 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:15.579 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:15.579 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:15.579 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:15.579 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:15.579 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:15.579 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:15.579 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:15.579 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:15.579 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:15.579 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:15.579 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.579 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.579 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.579 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:15.579 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.579 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:15.579 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:15.579 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:15.579 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:15.579 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:15.579 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:15.579 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:15.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:15.579 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:15.579 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:15.579 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:15.579 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:15.579 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:15.579 11:47:18 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:15.579 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:15.579 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:15.579 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:15.579 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:15.579 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:15.579 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:15.579 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:15.579 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:15.579 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:15.579 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:15.579 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:15.579 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:15.579 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:15.579 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:15.579 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:15.579 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:15.579 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:15.579 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:15.579 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:15.579 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:23.718 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:23.718 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:23.719 11:47:25 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:23.719 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:23.719 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:23.719 Found net devices under 0000:31:00.0: cvl_0_0 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:23.719 Found net devices under 0000:31:00.1: cvl_0_1 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:23.719 11:47:25 
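Device discovery above matched both ports of an Intel E810 NIC (vendor 0x8086, device 0x159b, driven by ice) and resolved their net device names, cvl_0_0 and cvl_0_1, via /sys/bus/pci/devices/<addr>/net. The same lookup can be reproduced by hand with standard tools:

  # List E810 ports (8086:159b) and the net device each one exposes.
  for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
      echo "$pci -> $(ls /sys/bus/pci/devices/"$pci"/net 2>/dev/null)"
  done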
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # is_hw=yes 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:23.719 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:23.719 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.706 ms 00:10:23.719 00:10:23.719 --- 10.0.0.2 ping statistics --- 00:10:23.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:23.719 rtt min/avg/max/mdev = 0.706/0.706/0.706/0.000 ms 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:23.719 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:23.719 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:10:23.719 00:10:23.719 --- 10.0.0.1 ping statistics --- 00:10:23.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:23.719 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # return 0 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:23.719 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:23.720 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:23.720 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:23.720 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:23.720 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:23.720 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:23.720 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:23.720 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:23.720 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:23.720 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:23.720 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:23.720 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1799893 00:10:23.720 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:23.720 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:23.720 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1799893 00:10:23.720 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 1799893 ']' 00:10:23.720 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.720 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:23.720 11:47:25 
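The two successful pings above complete nvmf_tcp_init: one E810 port (cvl_0_0) has been moved into the cvl_0_0_ns_spdk network namespace as the target side, 10.0.0.2, while the other port (cvl_0_1) stays in the root namespace as the initiator side, 10.0.0.1, and an iptables rule admits NVMe/TCP traffic on port 4420. Condensed from the commands traced above (run as root), the topology is:

  # Target NIC in its own netns; initiator NIC in the root namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # The harness tags its rule with an SPDK_NVMF comment so it can strip it later.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  # Verify both directions before starting the target.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1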
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:23.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:23.720 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:23.720 11:47:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:24.292 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:24.292 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:10:24.292 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:24.292 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:24.292 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:24.292 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:24.292 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.292 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:24.292 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.292 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:24.292 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.292 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:24.292 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.292 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:24.292 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:24.292 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.292 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:24.292 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.292 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:24.292 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:24.292 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.292 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:24.292 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.292 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:24.292 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # 
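With networking in place, nvmfexamplestart launches build/examples/nvmf inside the namespace (pid 1799893 here) and waitforlisten polls until the RPC server answers on /var/tmp/spdk.sock; the rpc_cmd calls that follow then assemble the target: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as namespace 1, and a listener on 10.0.0.2:4420. Assuming rpc_cmd forwards its arguments unchanged to scripts/rpc.py, which is how the harness drives the RPC socket, the same setup done by hand looks roughly like this (the SPDK path is a placeholder):

  SPDK=/path/to/spdk                      # placeholder for the SPDK checkout
  NS=(ip netns exec cvl_0_0_ns_spdk)

  # Start the example NVMe-oF target in the namespace and wait for its RPC socket.
  "${NS[@]}" "$SPDK/build/examples/nvmf" -i 0 -g 10000 -m 0xF &
  until "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

  # Same RPC sequence as the rpc_cmd calls traced above.
  "$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
  "$SPDK/scripts/rpc.py" bdev_malloc_create 64 512                     # returns Malloc0
  "$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420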
xtrace_disable 00:10:24.292 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:24.292 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.292 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:24.292 11:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:36.524 Initializing NVMe Controllers 00:10:36.524 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:36.524 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:36.524 Initialization complete. Launching workers. 00:10:36.524 ======================================================== 00:10:36.524 Latency(us) 00:10:36.524 Device Information : IOPS MiB/s Average min max 00:10:36.524 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18714.42 73.10 3420.76 640.10 15627.95 00:10:36.524 ======================================================== 00:10:36.524 Total : 18714.42 73.10 3420.76 640.10 15627.95 00:10:36.524 00:10:36.524 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:36.524 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:36.524 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:36.524 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:36.524 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:36.524 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:36.524 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:36.524 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:36.524 rmmod nvme_tcp 00:10:36.524 rmmod nvme_fabrics 00:10:36.524 rmmod nvme_keyring 00:10:36.524 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:36.524 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:10:36.524 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:36.524 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@515 -- # '[' -n 1799893 ']' 00:10:36.524 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # killprocess 1799893 00:10:36.524 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 1799893 ']' 00:10:36.524 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 1799893 00:10:36.524 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:10:36.524 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:36.524 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1799893 00:10:36.524 11:47:37 
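The spdk_nvme_perf run above drives the new listener with 64 outstanding 4 KiB I/Os, 30% reads / 70% writes, for 10 seconds, and lands at 18714.42 IOPS, 73.10 MiB/s and ~3.42 ms average latency; the two throughput columns agree, since 18714.42 IOPS * 4096 bytes is about 76.7 MB/s, i.e. 73.1 MiB/s. The same workload can be repeated against the target by hand (path abbreviated to a placeholder):

  # Flags as used by the test: queue depth, I/O size, workload, read percentage,
  # run time, and the transport ID of the subsystem to attach to.
  /path/to/spdk/build/bin/spdk_nvme_perf \
      -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'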
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:10:36.524 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:10:36.524 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1799893' 00:10:36.524 killing process with pid 1799893 00:10:36.524 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 1799893 00:10:36.524 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 1799893 00:10:36.524 nvmf threads initialize successfully 00:10:36.524 bdev subsystem init successfully 00:10:36.524 created a nvmf target service 00:10:36.524 create targets's poll groups done 00:10:36.524 all subsystems of target started 00:10:36.524 nvmf target is running 00:10:36.524 all subsystems of target stopped 00:10:36.524 destroy targets's poll groups done 00:10:36.524 destroyed the nvmf target service 00:10:36.524 bdev subsystem finish successfully 00:10:36.524 nvmf threads destroy successfully 00:10:36.524 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:36.524 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:36.524 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:36.524 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:36.524 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-save 00:10:36.524 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:36.524 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-restore 00:10:36.524 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:36.524 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:36.524 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:36.524 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:36.524 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:36.785 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:36.785 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:36.785 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:36.785 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:36.785 00:10:36.785 real 0m21.617s 00:10:36.785 user 0m46.588s 00:10:36.785 sys 0m7.129s 00:10:36.785 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:36.785 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:36.785 ************************************ 00:10:36.785 END TEST nvmf_example 00:10:36.785 ************************************ 00:10:36.785 11:47:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
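nvmftestfini then unwinds the setup: nvmfcleanup unloads the initiator modules, killprocess stops the example target (pid 1799893; its orderly shutdown messages appear above), iptr strips the SPDK_NVMF-tagged iptables rule by round-tripping iptables-save through grep -v, and remove_spdk_ns plus the address flush in the next entries retire the namespace. Outside the harness the equivalent cleanup is roughly the following; the namespace deletion is an assumed stand-in for _remove_spdk_ns, whose body is not shown in this log:

  modprobe -r nvme-tcp nvme-fabrics              # what nvmfcleanup's modprobe -v -r loop does
  kill 1799893                                   # stop the example target; killprocess also waits on it
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the tagged ACCEPT rule
  ip -4 addr flush cvl_0_1                       # clear the initiator-side address
  ip netns delete cvl_0_0_ns_spdk                # assumed equivalent of _remove_spdk_ns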
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:36.785 11:47:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:36.785 11:47:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:36.785 11:47:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:37.048 ************************************ 00:10:37.048 START TEST nvmf_filesystem 00:10:37.048 ************************************ 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:37.048 * Looking for test storage... 00:10:37.048 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:37.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.048 --rc genhtml_branch_coverage=1 00:10:37.048 --rc genhtml_function_coverage=1 00:10:37.048 --rc genhtml_legend=1 00:10:37.048 --rc geninfo_all_blocks=1 00:10:37.048 --rc geninfo_unexecuted_blocks=1 00:10:37.048 00:10:37.048 ' 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:37.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.048 --rc genhtml_branch_coverage=1 00:10:37.048 --rc genhtml_function_coverage=1 00:10:37.048 --rc genhtml_legend=1 00:10:37.048 --rc geninfo_all_blocks=1 00:10:37.048 --rc geninfo_unexecuted_blocks=1 00:10:37.048 00:10:37.048 ' 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:37.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.048 --rc genhtml_branch_coverage=1 00:10:37.048 --rc genhtml_function_coverage=1 00:10:37.048 --rc genhtml_legend=1 00:10:37.048 --rc geninfo_all_blocks=1 00:10:37.048 --rc geninfo_unexecuted_blocks=1 00:10:37.048 00:10:37.048 ' 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:37.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.048 --rc genhtml_branch_coverage=1 00:10:37.048 --rc genhtml_function_coverage=1 00:10:37.048 --rc genhtml_legend=1 00:10:37.048 --rc geninfo_all_blocks=1 00:10:37.048 --rc geninfo_unexecuted_blocks=1 00:10:37.048 00:10:37.048 ' 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:37.048 11:47:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:10:37.048 11:47:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # 
CONFIG_RDMA=y 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:10:37.048 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:10:37.049 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:37.049 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:10:37.049 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:10:37.049 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:10:37.049 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:10:37.049 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:10:37.049 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:10:37.049 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:10:37.049 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:10:37.049 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:10:37.049 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:10:37.049 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:37.049 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_FC=n 00:10:37.049 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:10:37.049 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:10:37.049 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:10:37.049 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:10:37.049 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_TESTS=y 00:10:37.049 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:10:37.049 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:10:37.049 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:10:37.049 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:10:37.049 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:10:37.049 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:37.049 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:10:37.049 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:10:37.049 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_URING=n 00:10:37.049 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:37.049 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:37.049 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:37.049 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:37.049 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:37.049 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:37.049 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:37.049 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:37.049 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:37.049 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:37.049 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:37.049 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:37.049 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:37.049 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:37.049 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:37.049 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:37.049 #define SPDK_CONFIG_H 00:10:37.049 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:37.049 #define SPDK_CONFIG_APPS 1 00:10:37.049 #define SPDK_CONFIG_ARCH native 00:10:37.049 #undef SPDK_CONFIG_ASAN 00:10:37.049 #undef SPDK_CONFIG_AVAHI 00:10:37.049 #undef SPDK_CONFIG_CET 00:10:37.049 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:37.049 #define SPDK_CONFIG_COVERAGE 1 00:10:37.049 #define SPDK_CONFIG_CROSS_PREFIX 00:10:37.049 #undef SPDK_CONFIG_CRYPTO 00:10:37.049 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:37.049 #undef SPDK_CONFIG_CUSTOMOCF 00:10:37.049 #undef SPDK_CONFIG_DAOS 00:10:37.049 #define SPDK_CONFIG_DAOS_DIR 00:10:37.049 #define SPDK_CONFIG_DEBUG 1 00:10:37.049 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:37.049 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:37.049 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:37.049 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:37.049 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:37.049 #undef SPDK_CONFIG_DPDK_UADK 00:10:37.049 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:37.049 #define SPDK_CONFIG_EXAMPLES 1 00:10:37.049 #undef SPDK_CONFIG_FC 00:10:37.049 #define SPDK_CONFIG_FC_PATH 00:10:37.049 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:37.049 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:37.049 #define SPDK_CONFIG_FSDEV 1 00:10:37.049 #undef SPDK_CONFIG_FUSE 00:10:37.049 #undef SPDK_CONFIG_FUZZER 00:10:37.049 #define SPDK_CONFIG_FUZZER_LIB 00:10:37.049 #undef SPDK_CONFIG_GOLANG 00:10:37.049 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:37.049 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:37.049 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:37.049 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:37.049 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:37.049 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:37.049 #undef SPDK_CONFIG_HAVE_LZ4 00:10:37.049 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:37.049 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:37.049 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:37.049 #define SPDK_CONFIG_IDXD 1 00:10:37.049 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:37.049 #undef SPDK_CONFIG_IPSEC_MB 00:10:37.049 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:37.049 #define SPDK_CONFIG_ISAL 1 00:10:37.049 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:37.049 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:37.049 #define SPDK_CONFIG_LIBDIR 00:10:37.049 #undef SPDK_CONFIG_LTO 00:10:37.049 #define SPDK_CONFIG_MAX_LCORES 128 00:10:37.049 #define SPDK_CONFIG_NVME_CUSE 1 00:10:37.049 #undef SPDK_CONFIG_OCF 00:10:37.049 #define SPDK_CONFIG_OCF_PATH 00:10:37.049 #define SPDK_CONFIG_OPENSSL_PATH 00:10:37.049 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:37.049 #define SPDK_CONFIG_PGO_DIR 00:10:37.049 #undef SPDK_CONFIG_PGO_USE 00:10:37.049 #define SPDK_CONFIG_PREFIX /usr/local 00:10:37.049 #undef SPDK_CONFIG_RAID5F 00:10:37.049 #undef SPDK_CONFIG_RBD 00:10:37.049 #define SPDK_CONFIG_RDMA 1 00:10:37.049 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:37.049 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:37.049 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:37.049 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:37.049 #define SPDK_CONFIG_SHARED 1 00:10:37.049 #undef SPDK_CONFIG_SMA 00:10:37.049 #define SPDK_CONFIG_TESTS 1 00:10:37.049 #undef SPDK_CONFIG_TSAN 00:10:37.049 #define SPDK_CONFIG_UBLK 1 00:10:37.049 #define SPDK_CONFIG_UBSAN 1 00:10:37.049 #undef SPDK_CONFIG_UNIT_TESTS 00:10:37.049 #undef SPDK_CONFIG_URING 00:10:37.049 #define 
SPDK_CONFIG_URING_PATH 00:10:37.049 #undef SPDK_CONFIG_URING_ZNS 00:10:37.049 #undef SPDK_CONFIG_USDT 00:10:37.049 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:37.049 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:37.049 #define SPDK_CONFIG_VFIO_USER 1 00:10:37.049 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:37.049 #define SPDK_CONFIG_VHOST 1 00:10:37.049 #define SPDK_CONFIG_VIRTIO 1 00:10:37.049 #undef SPDK_CONFIG_VTUNE 00:10:37.049 #define SPDK_CONFIG_VTUNE_DIR 00:10:37.049 #define SPDK_CONFIG_WERROR 1 00:10:37.049 #define SPDK_CONFIG_WPDK_DIR 00:10:37.049 #undef SPDK_CONFIG_XNVME 00:10:37.049 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:37.049 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:37.049 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:37.049 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:37.049 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:37.049 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:37.049 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:37.049 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.049 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.049 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.049 11:47:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:37.050 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.050 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:37.050 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- 
# export SPDK_TEST_IOAT 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:37.314 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:37.315 
11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:37.315 11:47:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
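The LD_LIBRARY_PATH and PYTHONPATH values traced above repeat the same directories several times, apparently because autotest_common.sh is sourced again for each nested run_test. A condensed sketch of what ends up in the environment is shown below; the paths are taken verbatim from the trace, while the $rootdir shorthand and the grouping of the exports are assumptions made for readability.

    # condensed sketch of the traced environment exports (not the authoritative script)
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumed shorthand for the repo root seen in the trace
    export SPDK_LIB_DIR=$rootdir/build/lib
    export DPDK_LIB_DIR=$rootdir/dpdk/build/lib
    export VFIO_LIB_DIR=$rootdir/build/libvfio-user/usr/local/lib
    # each re-source appends the same three directories again, hence the repetition in the trace
    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$SPDK_LIB_DIR:$DPDK_LIB_DIR:$VFIO_LIB_DIR
    export PYTHONPATH=$PYTHONPATH:$rootdir/python:$rootdir/test/rpc_plugins
    export PYTHONDONTWRITEBYTECODE=1   # keep the workspace free of .pyc files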
00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:37.315 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
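The sanitizer-related settings traced above fit together as sketched below. This is a minimal sketch: the option strings and file path are copied from the trace, but the way the suppression file is written (a plain redirect) is assumed, since the trace only shows the cat/echo pair.

    # minimal sketch of the traced sanitizer setup, under the assumptions noted above
    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"                      # start from a clean suppression list
    echo "leak:libfuse3.so" > "$asan_suppression_file"   # ignore known libfuse3 leak reports
    export LSAN_OPTIONS=suppressions=$asan_suppression_file
    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
    export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock           # default RPC socket path exported for the test helpers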
00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j144 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 1802690 ]] 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 1802690 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 
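set_test_storage, entered at the end of the trace above, picks a directory with enough free space for the test and is what the df/mktemp calls that follow belong to. The sketch below mirrors the traced steps only; the loop shape and the final selection comment are assumptions, and $testdir is the test directory already set by the harness.

    # rough sketch of the traced storage selection, not the authoritative implementation
    requested_size=$((2147483648 + 64 * 1024 * 1024))     # traced value 2214592512: the 2 GiB request plus a 64 MiB margin
    storage_fallback=$(mktemp -udt spdk.XXXXXX)           # e.g. /tmp/spdk.17TM6D in this run
    storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
    declare -A mounts fss sizes avails uses
    while read -r source fs size use avail _ mount; do    # df -T columns: filesystem, type, size, used, available, use%, mountpoint
        mounts["$mount"]=$source
        fss["$mount"]=$fs
        sizes["$mount"]=$size
        avails["$mount"]=$avail
        uses["$mount"]=$use
    done < <(df -T | grep -v Filesystem)
    # a candidate directory whose backing mount reports enough available space is then used for the test files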
00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.17TM6D 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.17TM6D/tests/target /tmp/spdk.17TM6D 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=156295168 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:10:37.316 11:47:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5128134656 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=123389046784 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=129356537856 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5967491072 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64668237824 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678268928 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10031104 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=25847894016 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=25871310848 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23416832 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=efivarfs 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=efivarfs 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=175104 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=507904 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=328704 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:37.316 11:47:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64677924864 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678268928 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=344064 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12935639040 00:10:37.316 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12935651328 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:10:37.317 * Looking for test storage... 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=123389046784 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=8182083584 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:37.317 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:37.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.317 --rc genhtml_branch_coverage=1 00:10:37.317 --rc genhtml_function_coverage=1 00:10:37.317 --rc genhtml_legend=1 00:10:37.317 --rc geninfo_all_blocks=1 00:10:37.317 --rc geninfo_unexecuted_blocks=1 00:10:37.317 00:10:37.317 ' 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:37.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.317 --rc genhtml_branch_coverage=1 00:10:37.317 --rc genhtml_function_coverage=1 00:10:37.317 --rc genhtml_legend=1 00:10:37.317 --rc geninfo_all_blocks=1 00:10:37.317 --rc geninfo_unexecuted_blocks=1 00:10:37.317 00:10:37.317 ' 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:37.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.317 --rc genhtml_branch_coverage=1 00:10:37.317 --rc genhtml_function_coverage=1 00:10:37.317 --rc genhtml_legend=1 00:10:37.317 --rc geninfo_all_blocks=1 00:10:37.317 --rc geninfo_unexecuted_blocks=1 00:10:37.317 00:10:37.317 ' 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:37.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.317 --rc genhtml_branch_coverage=1 00:10:37.317 --rc genhtml_function_coverage=1 00:10:37.317 --rc genhtml_legend=1 00:10:37.317 --rc geninfo_all_blocks=1 00:10:37.317 --rc geninfo_unexecuted_blocks=1 00:10:37.317 00:10:37.317 ' 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:37.317 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:37.318 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:37.318 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:37.318 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:37.318 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:37.318 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:37.318 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:37.318 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:37.318 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:37.318 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:37.318 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.318 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.318 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.318 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:37.318 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.318 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:37.318 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:37.318 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:37.318 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:37.318 11:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:37.318 11:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:37.318 11:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:37.318 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:37.318 11:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:37.318 11:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:37.318 11:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:37.318 11:47:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:37.318 11:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:37.318 11:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:37.318 11:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:37.318 11:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:37.318 11:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:37.318 11:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:37.318 11:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:37.318 11:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.318 11:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:37.318 11:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.579 11:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:37.579 11:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:37.579 11:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:37.579 11:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:45.717 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:45.717 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:45.717 11:47:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:45.717 Found net devices under 0000:31:00.0: cvl_0_0 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:45.717 Found net devices under 0000:31:00.1: cvl_0_1 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # is_hw=yes 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:45.717 11:47:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:45.717 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:45.718 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:45.718 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:45.718 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:45.718 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:45.718 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:45.718 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.683 ms 00:10:45.718 00:10:45.718 --- 10.0.0.2 ping statistics --- 00:10:45.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.718 rtt min/avg/max/mdev = 0.683/0.683/0.683/0.000 ms 00:10:45.718 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:45.718 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:45.718 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:10:45.718 00:10:45.718 --- 10.0.0.1 ping statistics --- 00:10:45.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.718 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:10:45.718 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:45.718 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # return 0 00:10:45.718 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:45.718 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:45.718 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:45.718 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:45.718 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:45.718 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:45.718 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:45.718 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:45.718 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:45.718 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:45.718 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:45.718 ************************************ 00:10:45.718 START TEST nvmf_filesystem_no_in_capsule 00:10:45.718 ************************************ 00:10:45.718 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:10:45.718 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:45.718 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:45.718 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:45.718 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:45.718 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:45.718 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=1806706 00:10:45.718 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 1806706 00:10:45.718 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:45.718 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 1806706 ']' 00:10:45.718 
11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:45.718 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:45.718 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:45.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:45.718 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:45.718 11:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:45.718 [2024-10-11 11:47:47.911076] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:10:45.718 [2024-10-11 11:47:47.911136] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:45.718 [2024-10-11 11:47:48.002676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:45.718 [2024-10-11 11:47:48.058764] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:45.718 [2024-10-11 11:47:48.058817] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:45.718 [2024-10-11 11:47:48.058826] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:45.718 [2024-10-11 11:47:48.058833] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:45.718 [2024-10-11 11:47:48.058839] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
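By this point nvmfappstart has launched the target inside the cvl_0_0_ns_spdk namespace and waitforlisten is polling its RPC socket; the rpc_cmd calls that follow build the TCP subsystem under test. Condensed into plain commands (rpc_cmd in the trace is a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock; paths are relative to the spdk checkout, and every name, size, and address below is copied from the trace):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!                                                        # 1806706 in this run
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0     # -c 0: no in-capsule data
  ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1            # 512 MiB bdev, 512 B blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420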
00:10:45.718 [2024-10-11 11:47:48.061268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:45.718 [2024-10-11 11:47:48.061428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:45.718 [2024-10-11 11:47:48.061589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.718 [2024-10-11 11:47:48.061589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:46.289 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:46.289 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:10:46.289 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:46.289 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:46.289 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.289 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:46.289 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:46.289 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:46.289 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.289 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.289 [2024-10-11 11:47:48.788148] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:46.289 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.289 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:46.289 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.289 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.289 Malloc1 00:10:46.289 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.289 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:46.289 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.289 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.289 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.289 11:47:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:46.289 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.289 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.289 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.289 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:46.289 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.289 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.289 [2024-10-11 11:47:48.941737] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:46.289 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.289 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:46.289 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:10:46.289 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:10:46.289 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:10:46.289 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:10:46.289 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:46.289 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.289 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.289 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.289 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:10:46.289 { 00:10:46.289 "name": "Malloc1", 00:10:46.289 "aliases": [ 00:10:46.289 "0e640407-4cb3-4c17-9ab7-5ec9b551abc9" 00:10:46.289 ], 00:10:46.289 "product_name": "Malloc disk", 00:10:46.289 "block_size": 512, 00:10:46.289 "num_blocks": 1048576, 00:10:46.289 "uuid": "0e640407-4cb3-4c17-9ab7-5ec9b551abc9", 00:10:46.289 "assigned_rate_limits": { 00:10:46.289 "rw_ios_per_sec": 0, 00:10:46.289 "rw_mbytes_per_sec": 0, 00:10:46.289 "r_mbytes_per_sec": 0, 00:10:46.289 "w_mbytes_per_sec": 0 00:10:46.289 }, 00:10:46.289 "claimed": true, 00:10:46.289 "claim_type": "exclusive_write", 00:10:46.289 "zoned": false, 00:10:46.289 "supported_io_types": { 00:10:46.289 "read": 
true, 00:10:46.289 "write": true, 00:10:46.289 "unmap": true, 00:10:46.289 "flush": true, 00:10:46.289 "reset": true, 00:10:46.289 "nvme_admin": false, 00:10:46.289 "nvme_io": false, 00:10:46.289 "nvme_io_md": false, 00:10:46.289 "write_zeroes": true, 00:10:46.289 "zcopy": true, 00:10:46.289 "get_zone_info": false, 00:10:46.289 "zone_management": false, 00:10:46.289 "zone_append": false, 00:10:46.289 "compare": false, 00:10:46.289 "compare_and_write": false, 00:10:46.289 "abort": true, 00:10:46.289 "seek_hole": false, 00:10:46.289 "seek_data": false, 00:10:46.289 "copy": true, 00:10:46.289 "nvme_iov_md": false 00:10:46.289 }, 00:10:46.289 "memory_domains": [ 00:10:46.289 { 00:10:46.289 "dma_device_id": "system", 00:10:46.289 "dma_device_type": 1 00:10:46.289 }, 00:10:46.289 { 00:10:46.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.289 "dma_device_type": 2 00:10:46.289 } 00:10:46.289 ], 00:10:46.289 "driver_specific": {} 00:10:46.289 } 00:10:46.289 ]' 00:10:46.289 11:47:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:10:46.551 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:10:46.551 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:10:46.551 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:10:46.551 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:10:46.551 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:10:46.551 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:46.551 11:47:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:47.934 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:48.195 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:10:48.195 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:48.195 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:48.195 11:47:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:10:50.108 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:50.108 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:50.108 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:10:50.108 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:50.108 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:50.108 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:10:50.108 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:50.108 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:50.108 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:50.108 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:50.108 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:50.108 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:50.108 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:50.108 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:50.108 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:50.108 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:50.108 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:50.108 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:50.368 11:47:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:51.309 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:51.309 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:51.309 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:51.309 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:51.309 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:51.309 ************************************ 00:10:51.309 START TEST filesystem_ext4 00:10:51.309 ************************************ 00:10:51.309 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
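filesystem_ext4 then exercises the exported namespace end to end: the initiator has already connected with nvme connect and carved a GPT partition with parted, and each per-filesystem case simply formats, mounts, writes, and unmounts while the target stays up. The ext4 body reduces to roughly the following (device names as reported by lsblk in this run):

  mkfs.ext4 -F /dev/nvme0n1p1          # -F: the freshly created partition has no filesystem to protect
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa                # small write over NVMe/TCP
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device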
00:10:51.310 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:51.310 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:51.310 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:51.310 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:10:51.310 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:51.310 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:10:51.310 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:10:51.310 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:10:51.310 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:10:51.310 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:51.310 mke2fs 1.47.0 (5-Feb-2023) 00:10:51.310 Discarding device blocks: 0/522240 done 00:10:51.310 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:51.310 Filesystem UUID: 67deb54e-f791-47a3-8674-1801355088f1 00:10:51.310 Superblock backups stored on blocks: 00:10:51.310 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:51.310 00:10:51.310 Allocating group tables: 0/64 done 00:10:51.310 Writing inode tables: 0/64 done 00:10:51.570 Creating journal (8192 blocks): done 00:10:51.570 Writing superblocks and filesystem accounting information: 0/64 done 00:10:51.570 00:10:51.570 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:10:51.570 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:58.253 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:58.253 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:58.253 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:58.253 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:58.253 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:58.253 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:58.253 
11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1806706 00:10:58.253 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:58.253 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:58.253 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:58.253 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:58.253 00:10:58.253 real 0m6.327s 00:10:58.253 user 0m0.026s 00:10:58.253 sys 0m0.081s 00:10:58.253 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:58.253 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:58.253 ************************************ 00:10:58.253 END TEST filesystem_ext4 00:10:58.253 ************************************ 00:10:58.253 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:58.253 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:58.253 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:58.253 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:58.253 ************************************ 00:10:58.253 START TEST filesystem_btrfs 00:10:58.253 ************************************ 00:10:58.253 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:58.253 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:58.253 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:58.253 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:58.253 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:10:58.253 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:58.253 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:10:58.254 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:10:58.254 11:48:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:10:58.254 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:10:58.254 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:58.254 btrfs-progs v6.8.1 00:10:58.254 See https://btrfs.readthedocs.io for more information. 00:10:58.254 00:10:58.254 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:58.254 NOTE: several default settings have changed in version 5.15, please make sure 00:10:58.254 this does not affect your deployments: 00:10:58.254 - DUP for metadata (-m dup) 00:10:58.254 - enabled no-holes (-O no-holes) 00:10:58.254 - enabled free-space-tree (-R free-space-tree) 00:10:58.254 00:10:58.254 Label: (null) 00:10:58.254 UUID: 80d5c793-dced-4185-88dc-530081ec9387 00:10:58.254 Node size: 16384 00:10:58.254 Sector size: 4096 (CPU page size: 4096) 00:10:58.254 Filesystem size: 510.00MiB 00:10:58.254 Block group profiles: 00:10:58.254 Data: single 8.00MiB 00:10:58.254 Metadata: DUP 32.00MiB 00:10:58.254 System: DUP 8.00MiB 00:10:58.254 SSD detected: yes 00:10:58.254 Zoned device: no 00:10:58.254 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:58.254 Checksum: crc32c 00:10:58.254 Number of devices: 1 00:10:58.254 Devices: 00:10:58.254 ID SIZE PATH 00:10:58.254 1 510.00MiB /dev/nvme0n1p1 00:10:58.254 00:10:58.254 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:10:58.254 11:48:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:58.515 11:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:58.515 11:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:58.515 11:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:58.515 11:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:58.775 11:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:58.775 11:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:58.775 11:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1806706 00:10:58.775 11:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:58.775 11:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:58.775 11:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:58.775 
11:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:58.775 00:10:58.775 real 0m0.978s 00:10:58.775 user 0m0.034s 00:10:58.775 sys 0m0.118s 00:10:58.775 11:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:58.775 11:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:58.775 ************************************ 00:10:58.775 END TEST filesystem_btrfs 00:10:58.775 ************************************ 00:10:58.775 11:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:58.775 11:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:58.775 11:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:58.775 11:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:58.775 ************************************ 00:10:58.775 START TEST filesystem_xfs 00:10:58.775 ************************************ 00:10:58.775 11:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:10:58.775 11:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:58.775 11:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:58.775 11:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:58.775 11:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:10:58.775 11:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:58.775 11:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:10:58.775 11:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:10:58.775 11:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:10:58.775 11:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:10:58.775 11:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:58.775 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:58.775 = sectsz=512 attr=2, projid32bit=1 00:10:58.775 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:58.775 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:58.775 data 
= bsize=4096 blocks=130560, imaxpct=25 00:10:58.775 = sunit=0 swidth=0 blks 00:10:58.775 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:58.775 log =internal log bsize=4096 blocks=16384, version=2 00:10:58.775 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:58.775 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:00.156 Discarding blocks...Done. 00:11:00.156 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:00.156 11:48:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:02.082 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:02.082 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:02.082 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:02.082 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:02.082 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:02.082 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:02.082 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1806706 00:11:02.082 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:02.082 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:02.082 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:02.082 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:02.082 00:11:02.082 real 0m3.085s 00:11:02.082 user 0m0.027s 00:11:02.082 sys 0m0.079s 00:11:02.082 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:02.082 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:02.082 ************************************ 00:11:02.082 END TEST filesystem_xfs 00:11:02.082 ************************************ 00:11:02.082 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:02.083 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:02.083 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:02.083 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.083 11:48:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:02.083 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:11:02.083 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:02.083 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:02.083 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:02.083 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:02.083 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:02.083 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:02.083 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.083 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.083 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.083 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:02.083 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1806706 00:11:02.083 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1806706 ']' 00:11:02.083 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1806706 00:11:02.083 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:02.083 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:02.083 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1806706 00:11:02.344 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:02.344 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:02.344 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1806706' 00:11:02.344 killing process with pid 1806706 00:11:02.344 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 1806706 00:11:02.344 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@974 -- # wait 1806706 00:11:02.344 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:02.344 00:11:02.344 real 0m17.151s 00:11:02.344 user 1m7.663s 00:11:02.344 sys 0m1.479s 00:11:02.344 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:02.344 11:48:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.344 ************************************ 00:11:02.344 END TEST nvmf_filesystem_no_in_capsule 00:11:02.344 ************************************ 00:11:02.344 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:02.344 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:02.344 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:02.344 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:02.604 ************************************ 00:11:02.604 START TEST nvmf_filesystem_in_capsule 00:11:02.604 ************************************ 00:11:02.604 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:11:02.604 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:02.604 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:02.604 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:02.604 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:02.604 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.604 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=1810399 00:11:02.604 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 1810399 00:11:02.604 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:02.604 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 1810399 ']' 00:11:02.604 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.604 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:02.604 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
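For the in-capsule variant the job starts a fresh nvmf_tgt inside the test network namespace and waits for its RPC socket before configuring anything. A rough, hand-written equivalent of that startup handshake is sketched below; the namespace name, shared-memory id, event mask and core mask are taken from the trace, the repository-relative paths and the polling loop are assumptions standing in for the test's waitforlisten helper.

    # Start the target in the test namespace (arguments copied from the trace) and
    # wait until its JSON-RPC server answers before issuing any configuration RPCs.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "nvmf_tgt (pid ${nvmfpid}) is ready"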
00:11:02.604 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:02.604 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.604 [2024-10-11 11:48:05.150086] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:11:02.604 [2024-10-11 11:48:05.150160] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:02.604 [2024-10-11 11:48:05.238058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:02.604 [2024-10-11 11:48:05.271048] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:02.604 [2024-10-11 11:48:05.271079] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:02.604 [2024-10-11 11:48:05.271085] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:02.604 [2024-10-11 11:48:05.271090] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:02.604 [2024-10-11 11:48:05.271094] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:02.604 [2024-10-11 11:48:05.272441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:02.604 [2024-10-11 11:48:05.272593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:02.604 [2024-10-11 11:48:05.272720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.604 [2024-10-11 11:48:05.272722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:03.547 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:03.547 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:03.547 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:03.547 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:03.547 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:03.547 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:03.547 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:03.547 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:03.547 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.547 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:03.547 [2024-10-11 11:48:05.990560] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:03.547 11:48:05 
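The RPC sequence that continues in the trace provisions the target end to end: a TCP transport with a 4096-byte in-capsule data size (the point of this test half), a 512 MiB malloc bdev, and a subsystem exposing it on 10.0.0.2:4420. Collected here in one place, with the test's rpc_cmd wrapper replaced by a direct scripts/rpc.py call and the flags copied verbatim from the trace:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096    # -c 4096: in-capsule data size under test
    ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1              # 512 MiB ramdisk with 512-byte blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The jq checks that follow (block_size 512, num_blocks 1048576) simply confirm that Malloc1 really is the 536870912-byte device the test expects before the host connects.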
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.547 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:03.547 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.547 11:48:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:03.547 Malloc1 00:11:03.547 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.547 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:03.547 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.547 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:03.547 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.547 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:03.547 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.547 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:03.547 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.547 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:03.547 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.547 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:03.547 [2024-10-11 11:48:06.113258] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:03.547 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.547 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:03.547 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:03.547 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:03.547 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:03.547 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:03.547 11:48:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:03.547 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.547 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:03.547 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.547 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:03.547 { 00:11:03.547 "name": "Malloc1", 00:11:03.547 "aliases": [ 00:11:03.547 "cd0f5ac8-8dd3-4ff2-8dc8-28e9e66b3758" 00:11:03.547 ], 00:11:03.547 "product_name": "Malloc disk", 00:11:03.547 "block_size": 512, 00:11:03.547 "num_blocks": 1048576, 00:11:03.547 "uuid": "cd0f5ac8-8dd3-4ff2-8dc8-28e9e66b3758", 00:11:03.547 "assigned_rate_limits": { 00:11:03.547 "rw_ios_per_sec": 0, 00:11:03.547 "rw_mbytes_per_sec": 0, 00:11:03.547 "r_mbytes_per_sec": 0, 00:11:03.547 "w_mbytes_per_sec": 0 00:11:03.547 }, 00:11:03.547 "claimed": true, 00:11:03.547 "claim_type": "exclusive_write", 00:11:03.547 "zoned": false, 00:11:03.547 "supported_io_types": { 00:11:03.547 "read": true, 00:11:03.547 "write": true, 00:11:03.547 "unmap": true, 00:11:03.547 "flush": true, 00:11:03.547 "reset": true, 00:11:03.547 "nvme_admin": false, 00:11:03.547 "nvme_io": false, 00:11:03.547 "nvme_io_md": false, 00:11:03.547 "write_zeroes": true, 00:11:03.547 "zcopy": true, 00:11:03.547 "get_zone_info": false, 00:11:03.547 "zone_management": false, 00:11:03.547 "zone_append": false, 00:11:03.547 "compare": false, 00:11:03.547 "compare_and_write": false, 00:11:03.547 "abort": true, 00:11:03.547 "seek_hole": false, 00:11:03.547 "seek_data": false, 00:11:03.547 "copy": true, 00:11:03.547 "nvme_iov_md": false 00:11:03.547 }, 00:11:03.547 "memory_domains": [ 00:11:03.547 { 00:11:03.547 "dma_device_id": "system", 00:11:03.547 "dma_device_type": 1 00:11:03.547 }, 00:11:03.547 { 00:11:03.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.547 "dma_device_type": 2 00:11:03.547 } 00:11:03.547 ], 00:11:03.547 "driver_specific": {} 00:11:03.547 } 00:11:03.547 ]' 00:11:03.547 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:03.547 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:03.547 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:03.547 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:03.547 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:03.547 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:03.547 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:03.547 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:05.457 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:05.458 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:05.458 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:05.458 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:05.458 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:07.367 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:07.367 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:07.367 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:07.367 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:07.367 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:07.367 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:07.367 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:07.367 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:07.367 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:07.367 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:07.367 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:07.367 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:07.367 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:07.367 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:07.367 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:07.367 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:07.367 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:07.367 11:48:09 
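The size check in the middle of the trace (sec_size_to_bytes) is there to make sure the connected namespace really is the 536870912-byte malloc bdev exported above before anything gets formatted. A condensed sketch of what that comparison amounts to, assuming the usual sysfs convention that /sys/block/<dev>/size is reported in 512-byte sectors; the helper's actual internals are not shown in the trace beyond the existence check and the echoed byte count:

    dev=nvme0n1
    [[ -e /sys/block/$dev ]] || exit 1
    nvme_size=$(( $(cat /sys/block/$dev/size) * 512 ))    # sectors -> bytes
    (( nvme_size == 536870912 )) && echo "namespace size matches the malloc bdev"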
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:07.627 11:48:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:08.569 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:08.569 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:08.569 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:08.569 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:08.569 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:08.569 ************************************ 00:11:08.569 START TEST filesystem_in_capsule_ext4 00:11:08.569 ************************************ 00:11:08.569 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:08.569 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:08.569 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:08.569 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:08.569 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:08.569 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:08.569 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:08.569 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:08.569 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:08.569 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:08.569 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:08.569 mke2fs 1.47.0 (5-Feb-2023) 00:11:08.569 Discarding device blocks: 0/522240 done 00:11:08.569 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:08.569 Filesystem UUID: 430b935d-974a-49a0-9264-2cf10164cbde 00:11:08.569 Superblock backups stored on blocks: 00:11:08.569 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:08.569 00:11:08.569 Allocating group tables: 0/64 done 00:11:08.569 Writing inode tables: 
0/64 done 00:11:09.512 Creating journal (8192 blocks): done 00:11:11.726 Writing superblocks and filesystem accounting information: 0/6410/64 done 00:11:11.726 00:11:11.726 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:11.726 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:18.307 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:18.307 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:18.307 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:18.307 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:18.307 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:18.307 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:18.307 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1810399 00:11:18.307 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:18.307 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:18.307 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:18.307 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:18.307 00:11:18.307 real 0m9.493s 00:11:18.307 user 0m0.024s 00:11:18.307 sys 0m0.084s 00:11:18.307 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:18.307 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:18.307 ************************************ 00:11:18.307 END TEST filesystem_in_capsule_ext4 00:11:18.307 ************************************ 00:11:18.307 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:18.307 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:18.307 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:18.307 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:18.307 
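Each filesystem_* subtest runs the same small smoke test once the filesystem exists: mount it, create and delete a file with syncs in between, unmount, then confirm that both the target process and the block devices survived. Pulled out of the trace into one readable block; the pid is the one killprocess is later handed in this run (1810399), held here in a variable for clarity:

    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device

    kill -0 "$nvmfpid"                           # target process must still be running
    lsblk -l -o NAME | grep -q -w nvme0n1        # controller still present
    lsblk -l -o NAME | grep -q -w nvme0n1p1      # partition still present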
************************************ 00:11:18.307 START TEST filesystem_in_capsule_btrfs 00:11:18.307 ************************************ 00:11:18.307 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:18.307 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:18.307 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:18.307 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:18.307 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:18.307 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:18.307 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:18.307 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:18.307 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:18.307 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:18.307 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:18.567 btrfs-progs v6.8.1 00:11:18.567 See https://btrfs.readthedocs.io for more information. 00:11:18.567 00:11:18.567 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:18.567 NOTE: several default settings have changed in version 5.15, please make sure 00:11:18.567 this does not affect your deployments: 00:11:18.567 - DUP for metadata (-m dup) 00:11:18.567 - enabled no-holes (-O no-holes) 00:11:18.567 - enabled free-space-tree (-R free-space-tree) 00:11:18.567 00:11:18.567 Label: (null) 00:11:18.567 UUID: 3c51cd11-96f0-4dcb-a302-6bb99effda05 00:11:18.567 Node size: 16384 00:11:18.567 Sector size: 4096 (CPU page size: 4096) 00:11:18.567 Filesystem size: 510.00MiB 00:11:18.567 Block group profiles: 00:11:18.567 Data: single 8.00MiB 00:11:18.567 Metadata: DUP 32.00MiB 00:11:18.567 System: DUP 8.00MiB 00:11:18.567 SSD detected: yes 00:11:18.567 Zoned device: no 00:11:18.567 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:18.567 Checksum: crc32c 00:11:18.567 Number of devices: 1 00:11:18.567 Devices: 00:11:18.567 ID SIZE PATH 00:11:18.567 1 510.00MiB /dev/nvme0n1p1 00:11:18.567 00:11:18.567 11:48:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:18.567 11:48:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:19.508 11:48:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:19.508 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:19.508 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:19.508 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:19.508 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:19.508 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:19.508 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1810399 00:11:19.508 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:19.508 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:19.508 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:19.508 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:19.508 00:11:19.508 real 0m1.350s 00:11:19.508 user 0m0.027s 00:11:19.508 sys 0m0.122s 00:11:19.508 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:19.508 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:11:19.508 ************************************ 00:11:19.508 END TEST filesystem_in_capsule_btrfs 00:11:19.508 ************************************ 00:11:19.508 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:19.508 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:19.508 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:19.508 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:19.508 ************************************ 00:11:19.508 START TEST filesystem_in_capsule_xfs 00:11:19.508 ************************************ 00:11:19.508 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:19.508 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:19.508 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:19.508 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:19.508 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:19.508 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:19.508 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:19.508 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:11:19.508 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:19.508 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:19.508 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:19.768 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:19.768 = sectsz=512 attr=2, projid32bit=1 00:11:19.768 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:19.768 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:19.768 data = bsize=4096 blocks=130560, imaxpct=25 00:11:19.768 = sunit=0 swidth=0 blks 00:11:19.768 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:19.768 log =internal log bsize=4096 blocks=16384, version=2 00:11:19.768 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:19.768 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:20.708 Discarding blocks...Done. 
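One detail worth noticing across the three mkfs runs: the helper picks the force flag per filesystem, -F for mkfs.ext4 and -f for mkfs.btrfs and mkfs.xfs, which is exactly the branch visible in the xtrace ('[' xfs = ext4 ']' followed by force=-f). A condensed sketch of that selection is below; it is not the real make_filesystem from common/autotest_common.sh, whose retry counter (the local i=0 in the trace) is omitted here:

    make_filesystem_sketch() {
        local fstype=$1 dev_name=$2 force
        if [[ $fstype == ext4 ]]; then
            force=-F          # mkfs.ext4 spells "force" differently
        else
            force=-f          # mkfs.btrfs and mkfs.xfs
        fi
        mkfs."$fstype" "$force" "$dev_name"
    }
    make_filesystem_sketch xfs /dev/nvme0n1p1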
00:11:20.708 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:20.709 11:48:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:22.619 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:22.619 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:22.619 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:22.619 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:22.619 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:22.619 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:22.619 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1810399 00:11:22.619 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:22.619 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:22.619 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:22.619 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:22.619 00:11:22.619 real 0m2.920s 00:11:22.619 user 0m0.033s 00:11:22.619 sys 0m0.075s 00:11:22.619 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:22.619 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:22.619 ************************************ 00:11:22.619 END TEST filesystem_in_capsule_xfs 00:11:22.619 ************************************ 00:11:22.619 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:22.880 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:22.880 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:22.880 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.880 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:22.880 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1219 -- # local i=0 00:11:22.880 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:22.880 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:22.880 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:22.880 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:22.880 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:22.880 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:22.880 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.880 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:22.880 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.880 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:22.880 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1810399 00:11:22.880 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1810399 ']' 00:11:22.880 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1810399 00:11:22.880 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:22.880 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:22.880 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1810399 00:11:23.140 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:23.140 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:23.140 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1810399' 00:11:23.140 killing process with pid 1810399 00:11:23.140 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 1810399 00:11:23.140 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 1810399 00:11:23.140 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:23.140 00:11:23.140 real 0m20.751s 00:11:23.140 user 1m22.126s 00:11:23.140 sys 0m1.450s 00:11:23.140 11:48:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:23.140 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:23.140 ************************************ 00:11:23.140 END TEST nvmf_filesystem_in_capsule 00:11:23.140 ************************************ 00:11:23.410 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:23.410 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:23.410 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:23.410 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:23.410 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:23.410 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:23.410 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:23.410 rmmod nvme_tcp 00:11:23.410 rmmod nvme_fabrics 00:11:23.410 rmmod nvme_keyring 00:11:23.410 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:23.410 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:23.410 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:23.410 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:11:23.411 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:23.411 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:23.411 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:23.411 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:23.411 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-save 00:11:23.411 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:23.411 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-restore 00:11:23.411 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:23.411 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:23.411 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:23.411 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:23.411 11:48:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:25.330 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:25.330 00:11:25.330 real 0m48.511s 00:11:25.330 user 2m32.231s 00:11:25.330 sys 0m9.059s 00:11:25.330 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:25.330 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:25.330 
************************************ 00:11:25.330 END TEST nvmf_filesystem 00:11:25.330 ************************************ 00:11:25.591 11:48:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:25.591 11:48:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:25.591 11:48:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:25.591 11:48:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:25.591 ************************************ 00:11:25.591 START TEST nvmf_target_discovery 00:11:25.591 ************************************ 00:11:25.591 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:25.591 * Looking for test storage... 00:11:25.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:25.591 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:25.591 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:11:25.591 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:25.591 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:25.591 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:25.591 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:25.591 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:25.591 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:25.591 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:25.591 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:25.591 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:25.591 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:25.591 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:25.591 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:25.591 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:25.591 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:25.591 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:25.591 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:25.591 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:25.591 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:25.591 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:25.591 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:25.591 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:25.591 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:25.591 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:25.591 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:25.591 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:25.591 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:25.591 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:25.591 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:25.591 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:25.591 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:25.591 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:25.591 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:25.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.591 --rc genhtml_branch_coverage=1 00:11:25.591 --rc genhtml_function_coverage=1 00:11:25.591 --rc genhtml_legend=1 00:11:25.591 --rc geninfo_all_blocks=1 00:11:25.591 --rc geninfo_unexecuted_blocks=1 00:11:25.591 00:11:25.591 ' 00:11:25.591 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:25.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.591 --rc genhtml_branch_coverage=1 00:11:25.591 --rc genhtml_function_coverage=1 00:11:25.591 --rc genhtml_legend=1 00:11:25.591 --rc geninfo_all_blocks=1 00:11:25.591 --rc geninfo_unexecuted_blocks=1 00:11:25.591 00:11:25.591 ' 00:11:25.591 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:25.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.591 --rc genhtml_branch_coverage=1 00:11:25.591 --rc genhtml_function_coverage=1 00:11:25.591 --rc genhtml_legend=1 00:11:25.591 --rc geninfo_all_blocks=1 00:11:25.591 --rc geninfo_unexecuted_blocks=1 00:11:25.591 00:11:25.591 ' 00:11:25.591 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:25.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.591 --rc genhtml_branch_coverage=1 00:11:25.591 --rc genhtml_function_coverage=1 00:11:25.591 --rc genhtml_legend=1 00:11:25.591 --rc geninfo_all_blocks=1 00:11:25.591 --rc geninfo_unexecuted_blocks=1 00:11:25.591 00:11:25.592 ' 00:11:25.592 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:25.853 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:25.853 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:25.853 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:25.853 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:25.853 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:25.853 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:25.853 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:25.853 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:25.853 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:25.853 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:25.853 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:25.853 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:25.853 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:25.853 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:25.853 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:25.853 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:25.853 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:25.853 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:25.853 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:25.853 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:25.853 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:25.853 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:25.853 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.853 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.853 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.853 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:25.853 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.853 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:25.853 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:25.853 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:25.853 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:25.853 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:25.853 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:25.853 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:25.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:25.853 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:25.853 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:25.853 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:25.853 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:25.853 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:25.853 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:25.853 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:25.853 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:25.853 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:25.853 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:25.853 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:25.853 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:25.853 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:25.853 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:25.853 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:25.853 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:25.853 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:25.853 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:25.853 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:25.853 11:48:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:33.992 11:48:35 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:33.992 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:33.992 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:33.992 Found net devices under 0000:31:00.0: cvl_0_0 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 
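The enumeration above resolves each supported NIC's PCI address to its kernel interface name by globbing /sys/bus/pci/devices/<pci>/net/* and keeping the basename (0000:31:00.0 maps to cvl_0_0 here, and 0000:31:00.1 to cvl_0_1 below). A small illustrative sketch of that mapping follows; the function name list_pci_netdevs is hypothetical and not part of the test scripts.

    # Illustrative sketch: map PCI addresses to net device names via sysfs,
    # the same lookup the gather_supported_nvmf_pci_devs loop above performs.
    list_pci_netdevs() {
        local pci netdev
        for pci in "$@"; do                      # e.g. 0000:31:00.0 0000:31:00.1
            for netdev in /sys/bus/pci/devices/$pci/net/*; do
                [ -e "$netdev" ] || continue     # skip devices with no bound netdev
                echo "Found net devices under $pci: ${netdev##*/}"
            done
        done
    }

    # Usage example: list_pci_netdevs 0000:31:00.0 0000:31:00.1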
00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:33.992 Found net devices under 0000:31:00.1: cvl_0_1 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:33.992 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:33.993 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:33.993 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:33.993 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:33.993 11:48:35 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:33.993 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:33.993 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:33.993 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:33.993 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:33.993 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:33.993 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:33.993 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:11:33.993 00:11:33.993 --- 10.0.0.2 ping statistics --- 00:11:33.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.993 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:11:33.993 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:33.993 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:33.993 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:11:33.993 00:11:33.993 --- 10.0.0.1 ping statistics --- 00:11:33.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.993 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:11:33.993 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:33.993 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # return 0 00:11:33.993 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:33.993 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:33.993 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:33.993 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:33.993 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:33.993 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:33.993 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:33.993 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:33.993 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:33.993 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:33.993 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.993 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # nvmfpid=1819427 00:11:33.993 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # waitforlisten 1819427 00:11:33.993 11:48:36 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:33.993 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 1819427 ']' 00:11:33.993 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:33.993 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:33.993 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:33.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:33.993 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:33.993 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:33.993 [2024-10-11 11:48:36.078587] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:11:33.993 [2024-10-11 11:48:36.078652] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:33.993 [2024-10-11 11:48:36.172193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:33.993 [2024-10-11 11:48:36.227057] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:33.993 [2024-10-11 11:48:36.227116] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:33.993 [2024-10-11 11:48:36.227125] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:33.993 [2024-10-11 11:48:36.227132] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:33.993 [2024-10-11 11:48:36.227138] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
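The startup captured above launches nvmf_tgt inside the cvl_0_0_ns_spdk network namespace and then blocks until the target's RPC socket answers. The condensed sketch below shows that sequence under stated assumptions: the relative ./build/bin and ./scripts paths stand in for the full workspace paths in the log, and the polling loop is a stand-in for the real waitforlisten helper, whose body is not shown in this trace.

    # Condensed sketch of the nvmfappstart/waitforlisten sequence above.
    NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
    "${NVMF_TARGET_NS_CMD[@]}" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Poll the RPC socket until the target is ready (assumed polling logic;
    # rpc_get_methods is a standard SPDK RPC that succeeds once the app listens).
    for _ in $(seq 1 100); do
        if ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
            break
        fi
        sleep 0.1
    done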
00:11:33.993 [2024-10-11 11:48:36.229147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:33.993 [2024-10-11 11:48:36.229281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:33.993 [2024-10-11 11:48:36.229553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:33.993 [2024-10-11 11:48:36.229557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.254 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:34.254 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:11:34.254 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:34.254 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:34.254 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.254 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:34.254 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:34.254 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.254 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.515 [2024-10-11 11:48:36.961584] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:34.515 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.515 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:34.515 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:34.515 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:34.515 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.515 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.515 Null1 00:11:34.515 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.515 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:34.515 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.515 11:48:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.515 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.515 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:34.515 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.515 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.515 11:48:37 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.515 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:34.515 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.515 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.515 [2024-10-11 11:48:37.022024] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.516 Null2 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:11:34.516 Null3 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.516 Null4 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.516 11:48:37 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.516 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:11:34.777 00:11:34.777 Discovery Log Number of Records 6, Generation counter 6 00:11:34.778 =====Discovery Log Entry 0====== 00:11:34.778 trtype: tcp 00:11:34.778 adrfam: ipv4 00:11:34.778 subtype: current discovery subsystem 00:11:34.778 treq: not required 00:11:34.778 portid: 0 00:11:34.778 trsvcid: 4420 00:11:34.778 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:34.778 traddr: 10.0.0.2 00:11:34.778 eflags: explicit discovery connections, duplicate discovery information 00:11:34.778 sectype: none 00:11:34.778 =====Discovery Log Entry 1====== 00:11:34.778 trtype: tcp 00:11:34.778 adrfam: ipv4 00:11:34.778 subtype: nvme subsystem 00:11:34.778 treq: not required 00:11:34.778 portid: 0 00:11:34.778 trsvcid: 4420 00:11:34.778 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:34.778 traddr: 10.0.0.2 00:11:34.778 eflags: none 00:11:34.778 sectype: none 00:11:34.778 =====Discovery Log Entry 2====== 00:11:34.778 trtype: tcp 00:11:34.778 adrfam: ipv4 00:11:34.778 subtype: nvme subsystem 00:11:34.778 treq: not required 00:11:34.778 portid: 0 00:11:34.778 trsvcid: 4420 00:11:34.778 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:34.778 traddr: 10.0.0.2 00:11:34.778 eflags: none 00:11:34.778 sectype: none 00:11:34.778 =====Discovery Log Entry 3====== 00:11:34.778 trtype: tcp 00:11:34.778 adrfam: ipv4 00:11:34.778 subtype: nvme subsystem 00:11:34.778 treq: not required 00:11:34.778 portid: 0 00:11:34.778 trsvcid: 4420 00:11:34.778 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:34.778 traddr: 10.0.0.2 00:11:34.778 eflags: none 00:11:34.778 sectype: none 00:11:34.778 =====Discovery Log Entry 4====== 00:11:34.778 trtype: tcp 00:11:34.778 adrfam: ipv4 00:11:34.778 subtype: nvme subsystem 
00:11:34.778 treq: not required 00:11:34.778 portid: 0 00:11:34.778 trsvcid: 4420 00:11:34.778 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:34.778 traddr: 10.0.0.2 00:11:34.778 eflags: none 00:11:34.778 sectype: none 00:11:34.778 =====Discovery Log Entry 5====== 00:11:34.778 trtype: tcp 00:11:34.778 adrfam: ipv4 00:11:34.778 subtype: discovery subsystem referral 00:11:34.778 treq: not required 00:11:34.778 portid: 0 00:11:34.778 trsvcid: 4430 00:11:34.778 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:34.778 traddr: 10.0.0.2 00:11:34.778 eflags: none 00:11:34.778 sectype: none 00:11:34.778 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:34.778 Perform nvmf subsystem discovery via RPC 00:11:34.778 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:34.778 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.778 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.778 [ 00:11:34.778 { 00:11:34.778 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:34.778 "subtype": "Discovery", 00:11:34.778 "listen_addresses": [ 00:11:34.778 { 00:11:34.778 "trtype": "TCP", 00:11:34.778 "adrfam": "IPv4", 00:11:34.778 "traddr": "10.0.0.2", 00:11:34.778 "trsvcid": "4420" 00:11:34.778 } 00:11:34.778 ], 00:11:34.778 "allow_any_host": true, 00:11:34.778 "hosts": [] 00:11:34.778 }, 00:11:34.778 { 00:11:34.778 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:34.778 "subtype": "NVMe", 00:11:34.778 "listen_addresses": [ 00:11:34.778 { 00:11:34.778 "trtype": "TCP", 00:11:34.778 "adrfam": "IPv4", 00:11:34.778 "traddr": "10.0.0.2", 00:11:34.778 "trsvcid": "4420" 00:11:34.778 } 00:11:34.778 ], 00:11:34.778 "allow_any_host": true, 00:11:34.778 "hosts": [], 00:11:34.778 "serial_number": "SPDK00000000000001", 00:11:34.778 "model_number": "SPDK bdev Controller", 00:11:34.778 "max_namespaces": 32, 00:11:34.778 "min_cntlid": 1, 00:11:34.778 "max_cntlid": 65519, 00:11:34.778 "namespaces": [ 00:11:34.778 { 00:11:34.778 "nsid": 1, 00:11:34.778 "bdev_name": "Null1", 00:11:34.778 "name": "Null1", 00:11:34.778 "nguid": "70FBEC3039514999A0849440A54768A3", 00:11:34.778 "uuid": "70fbec30-3951-4999-a084-9440a54768a3" 00:11:34.778 } 00:11:34.778 ] 00:11:34.778 }, 00:11:34.778 { 00:11:34.778 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:34.778 "subtype": "NVMe", 00:11:34.778 "listen_addresses": [ 00:11:34.778 { 00:11:34.778 "trtype": "TCP", 00:11:34.778 "adrfam": "IPv4", 00:11:34.778 "traddr": "10.0.0.2", 00:11:34.778 "trsvcid": "4420" 00:11:34.778 } 00:11:34.778 ], 00:11:34.778 "allow_any_host": true, 00:11:34.778 "hosts": [], 00:11:34.778 "serial_number": "SPDK00000000000002", 00:11:34.778 "model_number": "SPDK bdev Controller", 00:11:34.778 "max_namespaces": 32, 00:11:34.778 "min_cntlid": 1, 00:11:34.778 "max_cntlid": 65519, 00:11:34.778 "namespaces": [ 00:11:34.778 { 00:11:34.778 "nsid": 1, 00:11:34.778 "bdev_name": "Null2", 00:11:34.778 "name": "Null2", 00:11:34.778 "nguid": "F2920B5879A74BEAB1BC49778B8E8B09", 00:11:34.778 "uuid": "f2920b58-79a7-4bea-b1bc-49778b8e8b09" 00:11:34.778 } 00:11:34.778 ] 00:11:34.778 }, 00:11:34.778 { 00:11:34.778 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:34.778 "subtype": "NVMe", 00:11:34.778 "listen_addresses": [ 00:11:34.778 { 00:11:34.778 "trtype": "TCP", 00:11:34.778 "adrfam": "IPv4", 00:11:34.778 "traddr": "10.0.0.2", 
00:11:34.778 "trsvcid": "4420" 00:11:34.778 } 00:11:34.778 ], 00:11:34.778 "allow_any_host": true, 00:11:34.778 "hosts": [], 00:11:34.778 "serial_number": "SPDK00000000000003", 00:11:34.778 "model_number": "SPDK bdev Controller", 00:11:34.778 "max_namespaces": 32, 00:11:34.778 "min_cntlid": 1, 00:11:34.778 "max_cntlid": 65519, 00:11:34.778 "namespaces": [ 00:11:34.778 { 00:11:34.778 "nsid": 1, 00:11:34.778 "bdev_name": "Null3", 00:11:34.778 "name": "Null3", 00:11:34.778 "nguid": "26CC0F0E95D44BD68960823127E92990", 00:11:34.778 "uuid": "26cc0f0e-95d4-4bd6-8960-823127e92990" 00:11:34.778 } 00:11:34.778 ] 00:11:34.778 }, 00:11:34.778 { 00:11:34.778 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:34.778 "subtype": "NVMe", 00:11:34.778 "listen_addresses": [ 00:11:34.778 { 00:11:34.778 "trtype": "TCP", 00:11:34.778 "adrfam": "IPv4", 00:11:34.778 "traddr": "10.0.0.2", 00:11:34.778 "trsvcid": "4420" 00:11:34.778 } 00:11:34.778 ], 00:11:34.778 "allow_any_host": true, 00:11:34.778 "hosts": [], 00:11:34.778 "serial_number": "SPDK00000000000004", 00:11:34.778 "model_number": "SPDK bdev Controller", 00:11:34.778 "max_namespaces": 32, 00:11:34.778 "min_cntlid": 1, 00:11:34.778 "max_cntlid": 65519, 00:11:34.778 "namespaces": [ 00:11:34.778 { 00:11:34.778 "nsid": 1, 00:11:34.778 "bdev_name": "Null4", 00:11:34.778 "name": "Null4", 00:11:34.778 "nguid": "6F6D8D0F73B34382BFE57CC3D11FEA6F", 00:11:34.778 "uuid": "6f6d8d0f-73b3-4382-bfe5-7cc3d11fea6f" 00:11:34.778 } 00:11:34.778 ] 00:11:34.778 } 00:11:34.778 ] 00:11:34.778 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.778 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:34.778 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:34.778 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:34.778 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.778 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.778 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.778 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:34.778 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.778 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:34.778 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.778 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:34.778 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:34.778 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.778 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.039 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.039 11:48:37 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:35.039 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.039 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.039 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.039 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:35.039 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:35.039 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.039 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.039 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.039 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:35.039 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.039 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.039 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.039 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:35.039 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:35.039 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.039 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.039 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.039 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:35.039 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.039 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.039 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.039 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:35.039 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.039 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.039 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.039 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:35.039 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.039 11:48:37 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:35.039 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:35.039 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.039 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:35.040 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:35.040 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:35.040 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:35.040 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:35.040 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:35.040 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:35.040 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:35.040 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:35.040 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:35.040 rmmod nvme_tcp 00:11:35.040 rmmod nvme_fabrics 00:11:35.040 rmmod nvme_keyring 00:11:35.040 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:35.040 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:35.040 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:35.040 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@515 -- # '[' -n 1819427 ']' 00:11:35.040 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # killprocess 1819427 00:11:35.040 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 1819427 ']' 00:11:35.040 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 1819427 00:11:35.040 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:11:35.040 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:35.040 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1819427 00:11:35.301 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:35.301 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:35.301 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1819427' 00:11:35.301 killing process with pid 1819427 00:11:35.301 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 1819427 00:11:35.301 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 1819427 00:11:35.301 11:48:37 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:35.301 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:35.301 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:35.301 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:35.301 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:35.301 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-save 00:11:35.301 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:11:35.301 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:35.301 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:35.301 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:35.301 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:35.301 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:37.849 11:48:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:37.849 00:11:37.849 real 0m11.849s 00:11:37.849 user 0m8.985s 00:11:37.849 sys 0m6.216s 00:11:37.849 11:48:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:37.849 11:48:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:37.849 ************************************ 00:11:37.849 END TEST nvmf_target_discovery 00:11:37.849 ************************************ 00:11:37.849 11:48:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:37.849 11:48:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:37.850 11:48:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:37.850 11:48:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:37.850 ************************************ 00:11:37.850 START TEST nvmf_referrals 00:11:37.850 ************************************ 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:37.850 * Looking for test storage... 
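Condensed from the trace above, the nvmf_target_discovery run that finishes here boils down to the following sketch (rpc.py stands in for the repo's rpc_cmd wrapper; the Null bdevs and cnode1-4 subsystems are created earlier in the log, outside this excerpt, and the listener call is repeated for each cnode):

    # expose the subsystems and the discovery service, plus one referral on port 4430
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430

    # initiator-side view: 6 discovery-log records (discovery, cnode1-4, referral)
    nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -a 10.0.0.2 -s 4420

    # target-side view of the same subsystems
    rpc.py nvmf_get_subsystems

    # teardown: each subsystem, its backing Null bdev, then the referral
    for i in 1 2 3 4; do
        rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
        rpc.py bdev_null_delete Null$i
    done
    rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430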
00:11:37.850 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:37.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.850 --rc genhtml_branch_coverage=1 00:11:37.850 --rc genhtml_function_coverage=1 00:11:37.850 --rc genhtml_legend=1 00:11:37.850 --rc geninfo_all_blocks=1 00:11:37.850 --rc geninfo_unexecuted_blocks=1 00:11:37.850 00:11:37.850 ' 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:37.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.850 --rc genhtml_branch_coverage=1 00:11:37.850 --rc genhtml_function_coverage=1 00:11:37.850 --rc genhtml_legend=1 00:11:37.850 --rc geninfo_all_blocks=1 00:11:37.850 --rc geninfo_unexecuted_blocks=1 00:11:37.850 00:11:37.850 ' 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:37.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.850 --rc genhtml_branch_coverage=1 00:11:37.850 --rc genhtml_function_coverage=1 00:11:37.850 --rc genhtml_legend=1 00:11:37.850 --rc geninfo_all_blocks=1 00:11:37.850 --rc geninfo_unexecuted_blocks=1 00:11:37.850 00:11:37.850 ' 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:37.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.850 --rc genhtml_branch_coverage=1 00:11:37.850 --rc genhtml_function_coverage=1 00:11:37.850 --rc genhtml_legend=1 00:11:37.850 --rc geninfo_all_blocks=1 00:11:37.850 --rc geninfo_unexecuted_blocks=1 00:11:37.850 00:11:37.850 ' 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:37.850 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:37.850 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:37.851 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:37.851 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:37.851 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:37.851 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:37.851 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
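The host identity that every initiator-side query in this log passes to nvme comes from nvmf/common.sh as traced above; referrals.sh only layers the referral constants on top. A minimal sketch of how those pieces fit together (the exact derivation of NVME_HOSTID inside common.sh is not visible here, so that line is an assumption):

    NVME_HOSTNQN=$(nvme gen-hostnqn)      # traced value: nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}   # assumption: the uuid portion of the generated NQN
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

    # every kernel-initiator query later in this test is of this shape
    nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 8009 -o json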
00:11:37.851 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:37.851 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:37.851 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:37.851 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:37.851 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:37.851 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:37.851 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:37.851 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:37.851 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:37.851 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:37.851 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:37.851 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:37.851 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:37.851 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:37.851 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:37.851 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:37.851 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:45.998 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:45.998 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:45.998 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:45.998 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:45.998 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:45.998 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:45.998 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:45.998 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:45.998 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:45.998 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:11:45.998 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:45.998 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:45.998 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:45.998 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:45.998 11:48:47 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:45.998 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:45.998 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:45.998 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:45.998 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:45.998 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:45.998 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:45.998 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:45.999 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:45.999 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:45.999 
11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:45.999 Found net devices under 0000:31:00.0: cvl_0_0 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:45.999 Found net devices under 0000:31:00.1: cvl_0_1 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # is_hw=yes 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:45.999 11:48:47 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:45.999 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:45.999 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.523 ms 00:11:45.999 00:11:45.999 --- 10.0.0.2 ping statistics --- 00:11:45.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:45.999 rtt min/avg/max/mdev = 0.523/0.523/0.523/0.000 ms 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:45.999 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:45.999 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:11:45.999 00:11:45.999 --- 10.0.0.1 ping statistics --- 00:11:45.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:45.999 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # return 0 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:45.999 11:48:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:45.999 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:45.999 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:45.999 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:45.999 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:45.999 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # nvmfpid=1823967 00:11:45.999 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # waitforlisten 1823967 00:11:45.999 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:45.999 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 1823967 ']' 00:11:45.999 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:45.999 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:45.999 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:45.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
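Before the application output that follows, nvmf_tcp_init has already laid out the two-namespace TCP topology that the pings above verify: the target port (cvl_0_0) is moved into its own network namespace while the initiator port (cvl_0_1) stays in the root namespace, and the two talk over 10.0.0.0/24. Condensed from the ip/iptables calls in the trace (interface names and addresses exactly as logged; needs root on the same hardware):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target port into its own namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # the trace tags the rule with '-m comment' so it can be removed at cleanup
    ping -c 1 10.0.0.2                                               # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # target ns -> root ns

    # nvmf_tgt then runs inside the target namespace, as started just above:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF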
00:11:45.999 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:45.999 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:45.999 [2024-10-11 11:48:48.076027] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:11:45.999 [2024-10-11 11:48:48.076103] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:45.999 [2024-10-11 11:48:48.165454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:45.999 [2024-10-11 11:48:48.218726] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:45.999 [2024-10-11 11:48:48.218777] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:45.999 [2024-10-11 11:48:48.218786] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:46.000 [2024-10-11 11:48:48.218792] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:46.000 [2024-10-11 11:48:48.218799] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:46.000 [2024-10-11 11:48:48.220913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:46.000 [2024-10-11 11:48:48.221093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:46.000 [2024-10-11 11:48:48.221229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.000 [2024-10-11 11:48:48.221230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:46.261 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:46.261 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:11:46.261 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:46.261 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:46.261 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:46.261 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:46.261 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:46.261 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.261 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:46.261 [2024-10-11 11:48:48.961908] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:46.522 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.522 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:46.522 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.522 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
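From this point the referrals test follows a register/verify/deregister pattern against the discovery service listening on 10.0.0.2:8009. A condensed sketch of the calls traced below (rpc.py in place of the rpc_cmd wrapper; the trace issues each call individually rather than in loops):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery

    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done

    # target-side check: three referrals, with the expected addresses
    rpc.py nvmf_discovery_get_referrals | jq length                         # 3
    rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

    # initiator-side check: the same addresses appear as discovery-log records
    nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -a 10.0.0.2 -s 8009 -o json |
      jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        rpc.py nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
    done

    # the test then repeats the pattern with subsystem-qualified referrals
    # (-n discovery and -n nqn.2016-06.io.spdk:cnode1), as traced further below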
00:11:46.522 [2024-10-11 11:48:48.978223] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:46.522 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.522 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:46.522 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.522 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:46.522 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.522 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:46.522 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.522 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:46.522 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.522 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:46.522 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.522 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:46.522 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.522 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:46.522 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:46.522 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.522 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:46.522 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.522 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:46.522 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:46.522 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:46.522 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:46.522 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:46.522 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.522 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:46.522 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:46.522 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.522 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:46.522 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:46.522 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:46.522 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:46.522 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:46.522 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:46.522 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:46.522 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:46.783 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:46.783 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:46.783 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:46.783 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.783 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:46.783 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.783 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:46.783 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.783 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:46.783 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.783 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:46.783 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.783 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:46.783 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.783 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:46.783 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:46.783 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.783 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:46.783 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.783 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:46.783 11:48:49 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:46.783 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:46.783 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:46.783 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:46.783 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:46.783 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:47.044 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:47.044 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:47.044 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:47.044 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.044 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:47.044 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.044 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:47.044 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.044 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:47.044 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.044 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:47.044 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:47.044 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:47.044 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:47.044 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.044 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:47.044 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:47.044 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.044 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:47.044 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:47.044 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:47.044 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:11:47.044 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:47.044 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:47.044 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:47.044 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:47.305 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:47.305 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:47.305 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:47.305 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:47.305 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:47.305 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:47.305 11:48:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:47.566 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:47.566 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:47.566 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:47.566 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:47.566 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:47.566 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:47.827 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:47.827 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:47.827 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.827 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:47.827 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.827 11:48:50 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:47.827 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:47.827 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:47.827 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:47.827 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.827 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:47.827 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:47.827 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.827 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:47.827 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:47.827 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:47.827 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:47.827 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:47.827 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:47.827 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:47.827 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:48.087 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:48.087 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:48.087 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:48.087 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:48.087 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:48.087 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:48.087 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:48.087 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:48.087 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:48.087 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:48.087 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:11:48.087 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:48.087 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:48.347 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:48.347 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:48.347 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.347 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:48.347 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.347 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:48.348 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:48.348 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.348 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:48.348 11:48:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.348 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:48.348 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:48.348 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:48.348 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:48.348 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:48.348 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:48.348 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:48.608 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:48.608 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:48.608 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:48.608 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:48.608 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:48.608 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:48.608 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
00:11:48.608 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:11:48.608 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:48.608 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:48.608 rmmod nvme_tcp 00:11:48.608 rmmod nvme_fabrics 00:11:48.608 rmmod nvme_keyring 00:11:48.608 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:48.608 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:48.608 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:48.608 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@515 -- # '[' -n 1823967 ']' 00:11:48.608 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # killprocess 1823967 00:11:48.608 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 1823967 ']' 00:11:48.608 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 1823967 00:11:48.608 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:11:48.608 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:48.608 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1823967 00:11:48.869 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:48.869 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:48.869 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1823967' 00:11:48.869 killing process with pid 1823967 00:11:48.869 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 1823967 00:11:48.869 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 1823967 00:11:48.869 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:48.869 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:48.869 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:48.869 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:48.869 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:48.869 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-save 00:11:48.869 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-restore 00:11:48.869 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:48.869 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:48.869 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.869 11:48:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:48.869 11:48:51 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:51.413 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:51.413 00:11:51.413 real 0m13.531s 00:11:51.413 user 0m16.287s 00:11:51.413 sys 0m6.616s 00:11:51.413 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:51.413 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:51.413 ************************************ 00:11:51.413 END TEST nvmf_referrals 00:11:51.413 ************************************ 00:11:51.413 11:48:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:51.413 11:48:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:51.413 11:48:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:51.413 11:48:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:51.413 ************************************ 00:11:51.413 START TEST nvmf_connect_disconnect 00:11:51.413 ************************************ 00:11:51.413 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:51.413 * Looking for test storage... 00:11:51.413 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:51.413 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:51.413 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:11:51.413 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:51.413 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:51.413 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:51.413 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:51.413 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:51.413 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:51.413 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:51.413 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:51.413 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:51.413 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:51.413 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:51.413 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:51.413 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:51.413 11:48:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:11:51.413 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:51.413 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:51.413 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:51.413 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:51.413 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:51.413 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:51.413 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:51.413 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:51.413 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:51.413 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:51.413 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:51.413 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:51.413 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:51.413 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:51.413 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:51.414 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:51.414 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:51.414 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:51.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.414 --rc genhtml_branch_coverage=1 00:11:51.414 --rc genhtml_function_coverage=1 00:11:51.414 --rc genhtml_legend=1 00:11:51.414 --rc geninfo_all_blocks=1 00:11:51.414 --rc geninfo_unexecuted_blocks=1 00:11:51.414 00:11:51.414 ' 00:11:51.414 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:51.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.414 --rc genhtml_branch_coverage=1 00:11:51.414 --rc genhtml_function_coverage=1 00:11:51.414 --rc genhtml_legend=1 00:11:51.414 --rc geninfo_all_blocks=1 00:11:51.414 --rc geninfo_unexecuted_blocks=1 00:11:51.414 00:11:51.414 ' 00:11:51.414 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:51.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.414 --rc genhtml_branch_coverage=1 00:11:51.414 --rc genhtml_function_coverage=1 00:11:51.414 --rc genhtml_legend=1 00:11:51.414 --rc geninfo_all_blocks=1 00:11:51.414 --rc geninfo_unexecuted_blocks=1 00:11:51.414 00:11:51.414 ' 00:11:51.414 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:51.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.414 --rc genhtml_branch_coverage=1 00:11:51.414 --rc genhtml_function_coverage=1 00:11:51.414 --rc genhtml_legend=1 00:11:51.414 --rc geninfo_all_blocks=1 00:11:51.414 --rc geninfo_unexecuted_blocks=1 00:11:51.414 00:11:51.414 ' 00:11:51.414 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:51.414 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:51.414 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:51.414 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:51.414 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:51.414 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:51.414 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:51.414 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:51.414 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:51.414 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:51.414 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:51.414 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:51.414 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:51.414 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:51.414 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:51.414 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:51.414 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:51.414 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:51.414 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:51.414 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:51.414 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:51.414 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:51.414 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:51.414 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.414 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.414 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.414 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:51.414 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.414 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:51.414 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:51.414 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:51.414 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:51.414 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:51.414 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:51.414 11:48:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:51.414 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:51.414 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:51.414 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:51.414 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:51.414 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:51.414 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:51.414 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:51.414 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:51.414 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:51.414 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:51.414 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:51.414 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:51.414 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:51.414 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:51.414 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:51.414 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:51.414 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:51.414 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:51.414 11:48:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:59.565 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:59.565 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:59.565 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:59.565 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:59.565 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:59.565 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:59.565 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:59.566 
11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:59.566 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:59.566 
11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:59.566 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:59.566 Found net devices under 0000:31:00.0: cvl_0_0 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 
00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:59.566 Found net devices under 0000:31:00.1: cvl_0_1 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:59.566 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:59.566 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.666 ms 00:11:59.566 00:11:59.566 --- 10.0.0.2 ping statistics --- 00:11:59.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.566 rtt min/avg/max/mdev = 0.666/0.666/0.666/0.000 ms 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:59.566 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:59.566 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:11:59.566 00:11:59.566 --- 10.0.0.1 ping statistics --- 00:11:59.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.566 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # return 0 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:59.566 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:59.567 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:59.567 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:59.567 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:59.567 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:59.567 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:59.567 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # nvmfpid=1829095 00:11:59.567 11:49:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # waitforlisten 1829095 00:11:59.567 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:59.567 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 1829095 ']' 00:11:59.567 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.567 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:59.567 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:59.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:59.567 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:59.567 11:49:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:59.567 [2024-10-11 11:49:01.696802] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:11:59.567 [2024-10-11 11:49:01.696869] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:59.567 [2024-10-11 11:49:01.787771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:59.567 [2024-10-11 11:49:01.840903] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:59.567 [2024-10-11 11:49:01.840977] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:59.567 [2024-10-11 11:49:01.840986] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:59.567 [2024-10-11 11:49:01.840993] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:59.567 [2024-10-11 11:49:01.840999] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:59.567 [2024-10-11 11:49:01.843080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:59.567 [2024-10-11 11:49:01.843225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:59.567 [2024-10-11 11:49:01.843385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.567 [2024-10-11 11:49:01.843385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:59.828 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:59.828 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:11:59.828 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:59.828 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:59.828 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:00.089 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:00.089 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:00.089 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.089 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:00.089 [2024-10-11 11:49:02.575943] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:00.089 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.089 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:00.089 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.089 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:00.089 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.089 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:00.089 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:00.089 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.089 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:00.089 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.089 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:00.089 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.089 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:00.089 11:49:02 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.089 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:00.089 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.089 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:00.089 [2024-10-11 11:49:02.655853] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:00.089 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.089 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:00.089 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:00.089 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:04.382 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.679 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.977 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.181 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.481 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.481 11:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:18.481 11:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:18.481 11:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:18.481 11:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:12:18.481 11:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:18.481 11:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:12:18.481 11:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:18.481 11:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:18.481 rmmod nvme_tcp 00:12:18.481 rmmod nvme_fabrics 00:12:18.481 rmmod nvme_keyring 00:12:18.481 11:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:18.481 11:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:12:18.481 11:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:12:18.481 11:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@515 -- # '[' -n 1829095 ']' 00:12:18.481 11:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # killprocess 1829095 00:12:18.481 11:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1829095 ']' 00:12:18.481 11:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 1829095 00:12:18.481 11:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 
00:12:18.481 11:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:18.481 11:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1829095 00:12:18.481 11:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:18.481 11:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:18.481 11:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1829095' 00:12:18.481 killing process with pid 1829095 00:12:18.481 11:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 1829095 00:12:18.481 11:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 1829095 00:12:18.742 11:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:18.742 11:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:18.742 11:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:18.742 11:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:12:18.742 11:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:12:18.742 11:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:18.742 11:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:12:18.742 11:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:18.742 11:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:18.742 11:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:18.742 11:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:18.742 11:49:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:20.656 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:20.656 00:12:20.656 real 0m29.626s 00:12:20.656 user 1m19.248s 00:12:20.656 sys 0m7.375s 00:12:20.656 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:20.656 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:20.656 ************************************ 00:12:20.656 END TEST nvmf_connect_disconnect 00:12:20.656 ************************************ 00:12:20.656 11:49:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:20.656 11:49:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:20.656 11:49:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:20.656 11:49:23 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:20.917 ************************************ 00:12:20.917 START TEST nvmf_multitarget 00:12:20.917 ************************************ 00:12:20.917 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:20.917 * Looking for test storage... 00:12:20.917 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:20.917 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:20.917 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:12:20.917 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:20.917 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:20.917 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:20.917 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:20.917 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:20.917 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:20.917 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:20.917 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:20.917 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:20.917 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:20.917 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:20.917 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:20.917 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:20.917 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:20.917 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:12:20.917 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:20.917 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:20.917 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:20.917 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:20.917 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:20.917 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:20.917 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:20.917 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:20.917 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:20.917 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:20.917 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:20.917 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:20.917 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:20.917 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:20.917 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:20.917 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:20.917 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:20.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.917 --rc genhtml_branch_coverage=1 00:12:20.917 --rc genhtml_function_coverage=1 00:12:20.917 --rc genhtml_legend=1 00:12:20.917 --rc geninfo_all_blocks=1 00:12:20.917 --rc geninfo_unexecuted_blocks=1 00:12:20.917 00:12:20.917 ' 00:12:20.917 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:20.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.917 --rc genhtml_branch_coverage=1 00:12:20.917 --rc genhtml_function_coverage=1 00:12:20.917 --rc genhtml_legend=1 00:12:20.917 --rc geninfo_all_blocks=1 00:12:20.917 --rc geninfo_unexecuted_blocks=1 00:12:20.917 00:12:20.917 ' 00:12:20.917 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:20.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.918 --rc genhtml_branch_coverage=1 00:12:20.918 --rc genhtml_function_coverage=1 00:12:20.918 --rc genhtml_legend=1 00:12:20.918 --rc geninfo_all_blocks=1 00:12:20.918 --rc geninfo_unexecuted_blocks=1 00:12:20.918 00:12:20.918 ' 00:12:20.918 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:20.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.918 --rc genhtml_branch_coverage=1 00:12:20.918 --rc genhtml_function_coverage=1 00:12:20.918 --rc genhtml_legend=1 00:12:20.918 --rc geninfo_all_blocks=1 00:12:20.918 --rc geninfo_unexecuted_blocks=1 00:12:20.918 00:12:20.918 ' 00:12:20.918 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:20.918 11:49:23 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:20.918 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:20.918 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:20.918 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:20.918 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:20.918 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:20.918 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:20.918 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:20.918 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:20.918 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:20.918 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:20.918 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:20.918 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:20.918 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:20.918 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:20.918 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:20.918 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:20.918 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:20.918 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:20.918 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:20.918 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:20.918 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:20.918 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.918 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.918 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.918 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:20.918 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.918 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:20.918 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:20.918 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:20.918 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:20.918 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:20.918 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:20.918 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:20.918 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:20.918 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:20.918 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:20.918 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:20.918 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:20.918 11:49:23 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:20.918 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:20.918 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:20.918 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:20.918 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:20.918 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:20.918 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:20.918 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:20.918 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:20.918 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:20.918 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:20.918 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:12:20.918 11:49:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:29.064 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:29.064 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:29.064 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:29.064 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:29.064 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:29.064 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:29.064 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:29.064 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:29.064 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:29.064 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:29.064 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:29.064 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:29.064 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:29.064 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:29.064 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:29.064 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:29.064 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:29.064 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:12:29.064 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:29.064 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:29.064 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:29.064 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:29.064 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:29.064 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:29.064 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:29.064 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:29.064 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:29.064 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:29.064 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:29.064 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:29.064 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:29.064 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:29.064 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:29.064 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:29.064 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:29.064 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:29.064 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:29.064 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:29.064 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:29.064 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:29.064 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:29.064 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:29.064 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:29.064 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:29.064 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:29.064 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:29.064 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:29.064 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:12:29.064 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:29.064 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:29.065 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:29.065 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:29.065 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:29.065 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:29.065 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:29.065 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:29.065 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:29.065 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:29.065 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:29.065 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:29.065 Found net devices under 0000:31:00.0: cvl_0_0 00:12:29.065 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:29.065 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:29.065 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:29.065 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:29.065 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:29.065 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:29.065 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:29.065 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:29.065 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:29.065 Found net devices under 0000:31:00.1: cvl_0_1 00:12:29.065 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:29.065 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:29.065 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # is_hw=yes 00:12:29.065 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:29.065 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:29.065 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:29.065 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:29.065 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:29.065 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:29.065 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:29.065 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:29.065 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:29.065 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:29.065 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:29.065 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:29.065 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:29.065 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:29.065 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:29.065 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:29.065 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:29.065 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:29.065 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:29.065 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:29.065 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:29.065 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:29.065 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:29.065 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:29.065 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:29.065 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:29.065 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:29.065 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms 00:12:29.065 00:12:29.065 --- 10.0.0.2 ping statistics --- 00:12:29.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:29.065 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms 00:12:29.065 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:29.065 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:29.065 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:12:29.065 00:12:29.065 --- 10.0.0.1 ping statistics --- 00:12:29.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:29.065 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:12:29.065 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:29.065 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # return 0 00:12:29.065 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:29.065 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:29.065 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:29.065 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:29.065 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:29.065 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:29.065 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:29.065 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:29.065 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:29.065 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:29.065 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:29.065 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # nvmfpid=1837292 00:12:29.065 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # waitforlisten 1837292 00:12:29.065 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:29.065 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 1837292 ']' 00:12:29.065 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:29.065 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:29.065 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:29.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:29.065 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:29.065 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:29.065 [2024-10-11 11:49:31.366497] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
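For readability, the nvmf_tcp_init steps traced above (namespace creation, addressing, the iptables accept rule, and the two ping checks) condense to the plain shell below. Interface names and addresses are exactly the ones in the trace; the iptables comment marker the harness adds is omitted here.

# Condensed from the nvmf_tcp_init trace above: move one port into a namespace and verify reachability.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator-side port stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                              # root namespace -> namespaced target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # and the reverse direction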
00:12:29.065 [2024-10-11 11:49:31.366558] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:29.065 [2024-10-11 11:49:31.456686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:29.065 [2024-10-11 11:49:31.509584] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:29.065 [2024-10-11 11:49:31.509634] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:29.065 [2024-10-11 11:49:31.509643] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:29.065 [2024-10-11 11:49:31.509650] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:29.065 [2024-10-11 11:49:31.509657] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:29.065 [2024-10-11 11:49:31.511779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:29.065 [2024-10-11 11:49:31.511942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:29.065 [2024-10-11 11:49:31.512122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:29.065 [2024-10-11 11:49:31.512122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.638 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:29.638 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:12:29.638 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:29.638 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:29.638 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:29.638 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:29.638 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:29.638 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:29.638 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:29.900 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:29.900 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:29.900 "nvmf_tgt_1" 00:12:29.900 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:29.900 "nvmf_tgt_2" 00:12:29.900 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
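The multitarget checks traced around this point reduce to a short RPC sequence: count targets, add two, re-count, delete them, and count again. A recap in plain shell using the same script path, target names, and flags as the trace; the expected counts (1, 3, 1) are the values the '[ N != N ]' assertions above and below compare against.

# Recap of the multitarget RPC calls traced around this point (same script, target names, and flags).
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
$RPC nvmf_get_targets | jq length              # 1: only the default target
$RPC nvmf_create_target -n nvmf_tgt_1 -s 32    # prints the new target name
$RPC nvmf_create_target -n nvmf_tgt_2 -s 32
$RPC nvmf_get_targets | jq length              # 3: default plus the two new targets
$RPC nvmf_delete_target -n nvmf_tgt_1          # deletions are traced just below
$RPC nvmf_delete_target -n nvmf_tgt_2
$RPC nvmf_get_targets | jq length              # back to 1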
00:12:29.900 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:30.161 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:30.161 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:30.161 true 00:12:30.161 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:30.422 true 00:12:30.422 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:30.422 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:30.422 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:30.422 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:30.422 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:30.422 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:30.422 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:30.422 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:30.422 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:30.422 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:30.422 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:30.422 rmmod nvme_tcp 00:12:30.422 rmmod nvme_fabrics 00:12:30.422 rmmod nvme_keyring 00:12:30.422 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:30.422 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:30.422 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:30.422 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@515 -- # '[' -n 1837292 ']' 00:12:30.422 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # killprocess 1837292 00:12:30.422 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 1837292 ']' 00:12:30.422 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 1837292 00:12:30.422 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:12:30.684 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:30.684 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1837292 00:12:30.684 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:30.684 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:30.684 11:49:33 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1837292' 00:12:30.684 killing process with pid 1837292 00:12:30.684 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 1837292 00:12:30.684 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 1837292 00:12:30.684 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:30.684 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:30.684 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:30.684 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:12:30.684 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-save 00:12:30.684 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:30.684 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-restore 00:12:30.684 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:30.684 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:30.684 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.684 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:30.684 11:49:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:33.232 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:33.232 00:12:33.232 real 0m12.071s 00:12:33.232 user 0m10.360s 00:12:33.233 sys 0m6.372s 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:33.233 ************************************ 00:12:33.233 END TEST nvmf_multitarget 00:12:33.233 ************************************ 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:33.233 ************************************ 00:12:33.233 START TEST nvmf_rpc 00:12:33.233 ************************************ 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:33.233 * Looking for test storage... 
00:12:33.233 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:33.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.233 --rc genhtml_branch_coverage=1 00:12:33.233 --rc genhtml_function_coverage=1 00:12:33.233 --rc genhtml_legend=1 00:12:33.233 --rc geninfo_all_blocks=1 00:12:33.233 --rc geninfo_unexecuted_blocks=1 00:12:33.233 00:12:33.233 ' 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:33.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.233 --rc genhtml_branch_coverage=1 00:12:33.233 --rc genhtml_function_coverage=1 00:12:33.233 --rc genhtml_legend=1 00:12:33.233 --rc geninfo_all_blocks=1 00:12:33.233 --rc geninfo_unexecuted_blocks=1 00:12:33.233 00:12:33.233 ' 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:33.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.233 --rc genhtml_branch_coverage=1 00:12:33.233 --rc genhtml_function_coverage=1 00:12:33.233 --rc genhtml_legend=1 00:12:33.233 --rc geninfo_all_blocks=1 00:12:33.233 --rc geninfo_unexecuted_blocks=1 00:12:33.233 00:12:33.233 ' 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:33.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.233 --rc genhtml_branch_coverage=1 00:12:33.233 --rc genhtml_function_coverage=1 00:12:33.233 --rc genhtml_legend=1 00:12:33.233 --rc geninfo_all_blocks=1 00:12:33.233 --rc geninfo_unexecuted_blocks=1 00:12:33.233 00:12:33.233 ' 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:33.233 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:33.234 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.234 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.234 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.234 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:33.234 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.234 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:12:33.234 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:33.234 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:33.234 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:33.234 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:33.234 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:33.234 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:33.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:33.234 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:33.234 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:33.234 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:33.234 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:33.234 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:33.234 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:33.234 11:49:35 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:33.234 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:33.234 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:33.234 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:33.234 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:33.234 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:33.234 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:33.234 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:33.234 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:33.234 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:12:33.234 11:49:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:41.378 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:41.378 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:41.378 Found net devices under 0000:31:00.0: cvl_0_0 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:41.378 Found net devices under 0000:31:00.1: cvl_0_1 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # is_hw=yes 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:41.378 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:41.379 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:41.379 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:41.379 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:41.379 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:41.379 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:41.379 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:41.379 11:49:43 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:41.379 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:41.379 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:41.379 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:41.379 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:41.379 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:41.379 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:41.379 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:41.379 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:41.379 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:41.379 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:41.379 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:41.379 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:41.379 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:41.379 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:41.379 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.455 ms 00:12:41.379 00:12:41.379 --- 10.0.0.2 ping statistics --- 00:12:41.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:41.379 rtt min/avg/max/mdev = 0.455/0.455/0.455/0.000 ms 00:12:41.379 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:41.379 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:41.379 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:12:41.379 00:12:41.379 --- 10.0.0.1 ping statistics --- 00:12:41.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:41.379 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:12:41.379 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:41.379 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # return 0 00:12:41.379 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:41.379 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:41.379 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:41.379 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:41.379 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:41.379 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:41.379 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:41.379 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:41.379 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:41.379 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:41.379 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.379 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # nvmfpid=1842058 00:12:41.379 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # waitforlisten 1842058 00:12:41.379 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:41.379 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 1842058 ']' 00:12:41.379 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:41.379 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:41.379 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:41.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:41.379 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:41.379 11:49:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.379 [2024-10-11 11:49:43.596807] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:12:41.379 [2024-10-11 11:49:43.596871] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:41.379 [2024-10-11 11:49:43.686423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:41.379 [2024-10-11 11:49:43.739371] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:41.379 [2024-10-11 11:49:43.739420] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:41.379 [2024-10-11 11:49:43.739433] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:41.379 [2024-10-11 11:49:43.739440] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:41.379 [2024-10-11 11:49:43.739446] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:41.379 [2024-10-11 11:49:43.741525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:41.379 [2024-10-11 11:49:43.741687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:41.379 [2024-10-11 11:49:43.741845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.379 [2024-10-11 11:49:43.741845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:41.951 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:41.951 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:12:41.951 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:41.951 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:41.951 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.951 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:41.951 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:41.951 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.951 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.951 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.951 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:41.951 "tick_rate": 2400000000, 00:12:41.951 "poll_groups": [ 00:12:41.951 { 00:12:41.951 "name": "nvmf_tgt_poll_group_000", 00:12:41.951 "admin_qpairs": 0, 00:12:41.951 "io_qpairs": 0, 00:12:41.951 "current_admin_qpairs": 0, 00:12:41.951 "current_io_qpairs": 0, 00:12:41.951 "pending_bdev_io": 0, 00:12:41.951 "completed_nvme_io": 0, 00:12:41.951 "transports": [] 00:12:41.951 }, 00:12:41.951 { 00:12:41.951 "name": "nvmf_tgt_poll_group_001", 00:12:41.951 "admin_qpairs": 0, 00:12:41.951 "io_qpairs": 0, 00:12:41.952 "current_admin_qpairs": 0, 00:12:41.952 "current_io_qpairs": 0, 00:12:41.952 "pending_bdev_io": 0, 00:12:41.952 "completed_nvme_io": 0, 00:12:41.952 "transports": [] 00:12:41.952 }, 00:12:41.952 { 00:12:41.952 "name": "nvmf_tgt_poll_group_002", 00:12:41.952 "admin_qpairs": 0, 00:12:41.952 "io_qpairs": 0, 00:12:41.952 
"current_admin_qpairs": 0, 00:12:41.952 "current_io_qpairs": 0, 00:12:41.952 "pending_bdev_io": 0, 00:12:41.952 "completed_nvme_io": 0, 00:12:41.952 "transports": [] 00:12:41.952 }, 00:12:41.952 { 00:12:41.952 "name": "nvmf_tgt_poll_group_003", 00:12:41.952 "admin_qpairs": 0, 00:12:41.952 "io_qpairs": 0, 00:12:41.952 "current_admin_qpairs": 0, 00:12:41.952 "current_io_qpairs": 0, 00:12:41.952 "pending_bdev_io": 0, 00:12:41.952 "completed_nvme_io": 0, 00:12:41.952 "transports": [] 00:12:41.952 } 00:12:41.952 ] 00:12:41.952 }' 00:12:41.952 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:41.952 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:41.952 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:41.952 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:41.952 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:41.952 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:41.952 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:41.952 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:41.952 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.952 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.952 [2024-10-11 11:49:44.586379] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:41.952 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.952 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:41.952 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.952 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.952 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.952 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:41.952 "tick_rate": 2400000000, 00:12:41.952 "poll_groups": [ 00:12:41.952 { 00:12:41.952 "name": "nvmf_tgt_poll_group_000", 00:12:41.952 "admin_qpairs": 0, 00:12:41.952 "io_qpairs": 0, 00:12:41.952 "current_admin_qpairs": 0, 00:12:41.952 "current_io_qpairs": 0, 00:12:41.952 "pending_bdev_io": 0, 00:12:41.952 "completed_nvme_io": 0, 00:12:41.952 "transports": [ 00:12:41.952 { 00:12:41.952 "trtype": "TCP" 00:12:41.952 } 00:12:41.952 ] 00:12:41.952 }, 00:12:41.952 { 00:12:41.952 "name": "nvmf_tgt_poll_group_001", 00:12:41.952 "admin_qpairs": 0, 00:12:41.952 "io_qpairs": 0, 00:12:41.952 "current_admin_qpairs": 0, 00:12:41.952 "current_io_qpairs": 0, 00:12:41.952 "pending_bdev_io": 0, 00:12:41.952 "completed_nvme_io": 0, 00:12:41.952 "transports": [ 00:12:41.952 { 00:12:41.952 "trtype": "TCP" 00:12:41.952 } 00:12:41.952 ] 00:12:41.952 }, 00:12:41.952 { 00:12:41.952 "name": "nvmf_tgt_poll_group_002", 00:12:41.952 "admin_qpairs": 0, 00:12:41.952 "io_qpairs": 0, 00:12:41.952 "current_admin_qpairs": 0, 00:12:41.952 "current_io_qpairs": 0, 00:12:41.952 "pending_bdev_io": 0, 00:12:41.952 "completed_nvme_io": 0, 00:12:41.952 "transports": [ 00:12:41.952 { 00:12:41.952 "trtype": "TCP" 
00:12:41.952 } 00:12:41.952 ] 00:12:41.952 }, 00:12:41.952 { 00:12:41.952 "name": "nvmf_tgt_poll_group_003", 00:12:41.952 "admin_qpairs": 0, 00:12:41.952 "io_qpairs": 0, 00:12:41.952 "current_admin_qpairs": 0, 00:12:41.952 "current_io_qpairs": 0, 00:12:41.952 "pending_bdev_io": 0, 00:12:41.952 "completed_nvme_io": 0, 00:12:41.952 "transports": [ 00:12:41.952 { 00:12:41.952 "trtype": "TCP" 00:12:41.952 } 00:12:41.952 ] 00:12:41.952 } 00:12:41.952 ] 00:12:41.952 }' 00:12:41.952 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:41.952 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:41.952 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:41.952 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:42.214 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:42.214 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:42.214 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:42.214 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:42.214 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:42.214 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:42.214 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:42.214 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:42.214 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:42.214 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:42.214 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.214 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.214 Malloc1 00:12:42.214 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.214 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:42.214 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.214 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.214 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.214 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:42.214 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.214 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.214 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.214 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:42.214 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.214 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.214 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.214 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:42.214 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.214 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.214 [2024-10-11 11:49:44.799143] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:42.214 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.214 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:12:42.214 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:42.214 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:12:42.214 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:42.214 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:42.214 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:42.214 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:42.214 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:42.214 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:42.214 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:42.214 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:42.214 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:12:42.214 [2024-10-11 11:49:44.836120] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:12:42.214 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:42.214 could not add new controller: failed to write to nvme-fabrics device 00:12:42.214 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:42.214 11:49:44 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:42.214 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:42.214 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:42.214 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:42.214 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.214 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.214 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.214 11:49:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:44.127 11:49:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:44.127 11:49:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:44.127 11:49:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:44.127 11:49:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:44.127 11:49:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:46.037 11:49:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:46.037 11:49:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:46.037 11:49:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:46.037 11:49:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:46.037 11:49:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:46.037 11:49:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:46.037 11:49:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:46.037 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.037 11:49:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:46.037 11:49:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:46.037 11:49:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:46.037 11:49:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:46.037 11:49:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:46.037 11:49:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:46.037 11:49:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:46.037 11:49:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:46.037 11:49:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.037 11:49:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.037 11:49:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.037 11:49:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:46.037 11:49:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:46.037 11:49:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:46.037 11:49:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:46.037 11:49:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:46.037 11:49:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:46.037 11:49:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:46.037 11:49:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:46.037 11:49:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:46.037 11:49:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:46.037 11:49:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:46.037 11:49:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:46.037 [2024-10-11 11:49:48.622863] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:12:46.037 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:46.037 could not add new controller: failed to write to nvme-fabrics device 00:12:46.037 11:49:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:46.037 11:49:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:46.037 11:49:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:46.037 11:49:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:46.037 11:49:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:46.037 11:49:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.037 11:49:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.037 
11:49:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.037 11:49:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:47.948 11:49:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:47.948 11:49:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:47.948 11:49:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:47.948 11:49:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:47.948 11:49:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:49.860 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:49.860 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:49.860 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:49.860 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:49.860 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:49.860 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:49.860 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:49.860 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.860 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:49.860 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:49.860 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:49.860 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:49.860 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:49.860 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:49.860 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:49.860 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:49.860 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.860 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.860 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.860 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:49.860 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:49.860 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:49.860 
11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.860 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.860 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.860 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:49.860 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.860 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.860 [2024-10-11 11:49:52.389401] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:49.860 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.860 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:49.860 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.860 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.860 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.860 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:49.860 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.860 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.860 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.860 11:49:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:51.243 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:51.243 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:51.243 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:51.243 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:51.243 11:49:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:53.783 11:49:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:53.783 11:49:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:53.783 11:49:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:53.783 11:49:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:53.783 11:49:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:53.783 11:49:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:53.783 11:49:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:53.783 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.783 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:53.783 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:53.783 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:53.783 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:53.783 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:53.783 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:53.783 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:53.783 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:53.783 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.783 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.783 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.783 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:53.783 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.783 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.783 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.783 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:53.783 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:53.783 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.783 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.783 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.783 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:53.783 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.783 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.783 [2024-10-11 11:49:56.113186] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:53.783 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.783 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:53.783 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.783 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.783 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.783 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:53.783 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.783 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.783 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.784 11:49:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:55.168 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:55.168 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:55.168 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:55.168 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:55.168 11:49:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:57.078 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:57.078 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:57.078 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:57.078 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:57.078 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:57.078 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:57.078 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:57.339 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.339 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:57.339 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:57.339 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:57.339 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:57.339 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:57.339 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:57.339 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:57.339 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:57.339 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.339 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.339 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.339 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:57.339 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.339 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.339 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.339 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:57.339 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:57.339 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.339 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.339 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.339 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:57.339 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.339 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.339 [2024-10-11 11:49:59.885654] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:57.339 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.339 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:57.339 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.339 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.339 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.339 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:57.339 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.339 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.339 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.339 11:49:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:59.250 11:50:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:59.250 11:50:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:59.250 11:50:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:59.250 11:50:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:59.250 11:50:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:01.161 
11:50:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:01.161 11:50:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:01.161 11:50:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:01.161 11:50:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:01.161 11:50:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:01.161 11:50:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:01.161 11:50:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:01.161 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.161 11:50:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:01.161 11:50:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:01.161 11:50:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:01.161 11:50:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:01.161 11:50:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:01.161 11:50:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:01.161 11:50:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:01.161 11:50:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:01.161 11:50:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.161 11:50:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.161 11:50:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.161 11:50:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:01.161 11:50:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.161 11:50:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.161 11:50:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.161 11:50:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:01.161 11:50:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:01.161 11:50:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.161 11:50:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.161 11:50:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.161 11:50:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:01.161 11:50:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:01.161 11:50:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.161 [2024-10-11 11:50:03.749653] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:01.161 11:50:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.161 11:50:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:01.161 11:50:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.161 11:50:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.161 11:50:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.161 11:50:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:01.161 11:50:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.161 11:50:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.161 11:50:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.161 11:50:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:03.075 11:50:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:03.075 11:50:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:03.075 11:50:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:03.075 11:50:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:03.075 11:50:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:04.997 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:04.997 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:04.997 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:04.997 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:04.997 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:04.997 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:04.997 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:04.997 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.997 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:04.997 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:04.997 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:04.997 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 
00:13:04.997 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:04.997 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:04.997 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:04.997 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:04.997 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.997 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.997 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.997 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:04.997 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.997 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.997 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.997 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:04.997 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:04.997 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.997 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.997 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.997 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:04.997 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.997 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.997 [2024-10-11 11:50:07.430469] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.997 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.997 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:04.997 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.997 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.997 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.997 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:04.997 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.997 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.997 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.997 11:50:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:06.381 11:50:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:06.381 11:50:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:06.381 11:50:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:06.381 11:50:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:06.381 11:50:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:08.428 11:50:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:08.428 11:50:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:08.428 11:50:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:08.428 11:50:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:08.428 11:50:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:08.428 11:50:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:08.428 11:50:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:08.428 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.428 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:08.428 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:08.428 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:08.428 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:08.428 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:08.428 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:08.428 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:08.428 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:08.428 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.428 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.428 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.428 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:08.428 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.428 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:08.689 
11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.689 [2024-10-11 11:50:11.163522] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.689 [2024-10-11 11:50:11.235706] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.689 
11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.689 [2024-10-11 11:50:11.303879] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.689 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.690 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:08.690 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.690 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.690 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.690 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:08.690 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.690 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.690 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.690 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:08.690 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.690 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.690 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.690 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:08.690 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:08.690 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.690 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.690 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.690 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:08.690 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.690 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.690 [2024-10-11 11:50:11.372113] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:08.690 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.690 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:08.690 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.690 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.690 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.690 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:08.690 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.690 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.950 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.950 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:08.950 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.950 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.950 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.950 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:08.950 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.950 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.950 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.950 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:08.950 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:08.950 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.950 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.950 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.950 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:08.950 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.950 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.950 [2024-10-11 11:50:11.444330] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:08.950 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.950 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:08.950 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.950 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.951 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.951 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:08.951 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.951 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.951 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.951 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:08.951 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.951 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.951 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.951 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:08.951 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.951 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.951 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.951 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:08.951 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.951 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.951 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.951 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:08.951 "tick_rate": 2400000000, 00:13:08.951 "poll_groups": [ 00:13:08.951 { 00:13:08.951 "name": "nvmf_tgt_poll_group_000", 00:13:08.951 "admin_qpairs": 0, 00:13:08.951 "io_qpairs": 224, 00:13:08.951 "current_admin_qpairs": 0, 00:13:08.951 "current_io_qpairs": 0, 00:13:08.951 "pending_bdev_io": 0, 00:13:08.951 "completed_nvme_io": 521, 00:13:08.951 "transports": [ 00:13:08.951 { 00:13:08.951 "trtype": "TCP" 00:13:08.951 } 00:13:08.951 ] 00:13:08.951 }, 00:13:08.951 { 00:13:08.951 "name": "nvmf_tgt_poll_group_001", 00:13:08.951 "admin_qpairs": 1, 00:13:08.951 "io_qpairs": 223, 00:13:08.951 "current_admin_qpairs": 0, 00:13:08.951 "current_io_qpairs": 0, 00:13:08.951 "pending_bdev_io": 0, 00:13:08.951 "completed_nvme_io": 223, 00:13:08.951 "transports": [ 00:13:08.951 { 00:13:08.951 "trtype": "TCP" 00:13:08.951 } 00:13:08.951 ] 00:13:08.951 }, 00:13:08.951 { 00:13:08.951 "name": "nvmf_tgt_poll_group_002", 00:13:08.951 "admin_qpairs": 6, 00:13:08.951 "io_qpairs": 218, 00:13:08.951 "current_admin_qpairs": 0, 00:13:08.951 "current_io_qpairs": 0, 00:13:08.951 "pending_bdev_io": 0, 00:13:08.951 "completed_nvme_io": 222, 00:13:08.951 "transports": [ 00:13:08.951 { 00:13:08.951 "trtype": "TCP" 00:13:08.951 } 00:13:08.951 ] 00:13:08.951 }, 00:13:08.951 { 00:13:08.951 "name": "nvmf_tgt_poll_group_003", 00:13:08.951 "admin_qpairs": 0, 00:13:08.951 "io_qpairs": 224, 00:13:08.951 "current_admin_qpairs": 0, 00:13:08.951 "current_io_qpairs": 0, 00:13:08.951 "pending_bdev_io": 0, 00:13:08.951 "completed_nvme_io": 273, 00:13:08.951 "transports": [ 00:13:08.951 { 00:13:08.951 "trtype": "TCP" 00:13:08.951 } 00:13:08.951 ] 00:13:08.951 } 00:13:08.951 ] 00:13:08.951 }' 00:13:08.951 11:50:11 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:08.951 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:08.951 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:08.951 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:08.951 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:08.951 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:08.951 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:08.951 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:08.951 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:08.951 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:13:08.951 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:08.951 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:08.951 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:08.951 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:08.951 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:13:08.951 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:08.951 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:13:08.951 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:08.951 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:08.951 rmmod nvme_tcp 00:13:08.951 rmmod nvme_fabrics 00:13:09.212 rmmod nvme_keyring 00:13:09.212 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:09.212 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:13:09.212 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:13:09.212 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@515 -- # '[' -n 1842058 ']' 00:13:09.212 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # killprocess 1842058 00:13:09.212 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 1842058 ']' 00:13:09.212 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 1842058 00:13:09.212 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:13:09.212 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:09.212 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1842058 00:13:09.212 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:09.212 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:09.212 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
1842058' 00:13:09.212 killing process with pid 1842058 00:13:09.212 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 1842058 00:13:09.212 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 1842058 00:13:09.212 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:09.212 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:09.212 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:09.212 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:13:09.212 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-save 00:13:09.212 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:09.212 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-restore 00:13:09.212 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:09.212 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:09.212 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:09.212 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:09.212 11:50:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:11.760 11:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:11.760 00:13:11.760 real 0m38.428s 00:13:11.760 user 1m54.540s 00:13:11.760 sys 0m8.101s 00:13:11.760 11:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:11.760 11:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.760 ************************************ 00:13:11.760 END TEST nvmf_rpc 00:13:11.760 ************************************ 00:13:11.760 11:50:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:11.760 11:50:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:11.760 11:50:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:11.760 11:50:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:11.760 ************************************ 00:13:11.760 START TEST nvmf_invalid 00:13:11.760 ************************************ 00:13:11.760 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:11.760 * Looking for test storage... 
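Before tearing down, the trace above verifies aggregate qpair counts (target/rpc.sh@112-113) by summing per-poll-group fields out of nvmf_get_stats with jq and awk, via the jsum helper. A minimal equivalent of that aggregation, assuming the same rpc.py path and the default RPC socket:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # total admin and I/O qpairs across all poll groups, as jsum does
  $rpc nvmf_get_stats | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}'
  $rpc nvmf_get_stats | jq '.poll_groups[].io_qpairs'    | awk '{s+=$1} END {print s}'

The test only asserts that both sums are greater than zero after the connect/disconnect loops.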
00:13:11.760 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:11.760 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:11.760 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version 00:13:11.760 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:11.760 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:11.760 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:11.760 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:11.760 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:11.760 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:13:11.760 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:13:11.760 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:13:11.760 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:13:11.760 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:13:11.760 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:13:11.760 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:13:11.760 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:11.760 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:13:11.760 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:13:11.760 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:11.760 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:11.760 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:13:11.760 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:13:11.760 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:11.760 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:13:11.760 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:11.760 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:13:11.760 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:13:11.760 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:11.760 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:13:11.760 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:11.760 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:11.760 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:11.760 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:13:11.760 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:11.760 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:11.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.760 --rc genhtml_branch_coverage=1 00:13:11.760 --rc genhtml_function_coverage=1 00:13:11.760 --rc genhtml_legend=1 00:13:11.760 --rc geninfo_all_blocks=1 00:13:11.760 --rc geninfo_unexecuted_blocks=1 00:13:11.760 00:13:11.760 ' 00:13:11.760 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:11.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.760 --rc genhtml_branch_coverage=1 00:13:11.760 --rc genhtml_function_coverage=1 00:13:11.760 --rc genhtml_legend=1 00:13:11.760 --rc geninfo_all_blocks=1 00:13:11.760 --rc geninfo_unexecuted_blocks=1 00:13:11.760 00:13:11.760 ' 00:13:11.760 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:11.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.760 --rc genhtml_branch_coverage=1 00:13:11.760 --rc genhtml_function_coverage=1 00:13:11.760 --rc genhtml_legend=1 00:13:11.760 --rc geninfo_all_blocks=1 00:13:11.760 --rc geninfo_unexecuted_blocks=1 00:13:11.760 00:13:11.760 ' 00:13:11.760 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:11.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.760 --rc genhtml_branch_coverage=1 00:13:11.760 --rc genhtml_function_coverage=1 00:13:11.760 --rc genhtml_legend=1 00:13:11.760 --rc geninfo_all_blocks=1 00:13:11.760 --rc geninfo_unexecuted_blocks=1 00:13:11.760 00:13:11.760 ' 00:13:11.760 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:11.760 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:11.760 11:50:14 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:11.760 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:11.760 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:11.760 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:11.760 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:11.760 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:11.760 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:11.761 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:11.761 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:11.761 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:11.761 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:11.761 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:11.761 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:11.761 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:11.761 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:11.761 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:11.761 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:11.761 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:11.761 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:11.761 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:11.761 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:11.761 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.761 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.761 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.761 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:11.761 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.761 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:13:11.761 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:11.761 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:11.761 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:11.761 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:11.761 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:11.761 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:11.761 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:11.761 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:11.761 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:11.761 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:11.761 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:11.761 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:11.761 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:11.761 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:11.761 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:11.761 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:11.761 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:11.761 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:11.761 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:11.761 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:11.761 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:11.761 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.761 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:11.761 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:11.761 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:11.761 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:11.761 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:13:11.761 11:50:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:19.905 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:19.905 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:13:19.905 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:19.905 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:19.905 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:19.905 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:19.905 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:19.905 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:13:19.905 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:19.905 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:13:19.905 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:13:19.905 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:13:19.905 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:13:19.905 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:13:19.905 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:13:19.905 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:19.905 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:19.905 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:19.905 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:19.905 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:19.905 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:19.905 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:19.906 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:19.906 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:19.906 Found net devices under 0000:31:00.0: cvl_0_0 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:19.906 Found net devices under 0000:31:00.1: cvl_0_1 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # is_hw=yes 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:19.906 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:19.906 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.568 ms 00:13:19.906 00:13:19.906 --- 10.0.0.2 ping statistics --- 00:13:19.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:19.906 rtt min/avg/max/mdev = 0.568/0.568/0.568/0.000 ms 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:19.906 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:19.906 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:13:19.906 00:13:19.906 --- 10.0.0.1 ping statistics --- 00:13:19.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:19.906 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # return 0 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:19.906 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:19.906 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:19.906 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:19.906 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:19.906 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # nvmfpid=1851993 00:13:19.906 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # waitforlisten 1851993 00:13:19.906 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:19.906 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 1851993 ']' 00:13:19.906 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:19.906 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:19.906 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:19.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:19.906 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:19.906 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:19.906 [2024-10-11 11:50:22.074026] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
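The nvmf_tcp_init sequence above builds the point-to-point topology for this run: the target port cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and addressed as 10.0.0.2, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1, the NVMe/TCP port is opened in iptables, and both directions are ping-checked before the target application is launched inside the namespace. A condensed sketch of the same setup, using the interface names, addresses and nvmf_tgt invocation shown in the log:

  ns=cvl_0_0_ns_spdk

  ip netns add $ns
  ip link set cvl_0_0 netns $ns                 # target port lives inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side, root namespace
  ip netns exec $ns ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec $ns ip link set cvl_0_0 up
  ip netns exec $ns ip link set lo up

  # allow NVMe/TCP traffic in from the initiator interface and sanity-check both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec $ns ping -c 1 10.0.0.1

  # the target is then started inside the namespace:
  # ip netns exec $ns /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF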
00:13:19.906 [2024-10-11 11:50:22.074119] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:19.906 [2024-10-11 11:50:22.166354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:19.906 [2024-10-11 11:50:22.219771] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:19.906 [2024-10-11 11:50:22.219819] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:19.906 [2024-10-11 11:50:22.219827] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:19.906 [2024-10-11 11:50:22.219834] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:19.907 [2024-10-11 11:50:22.219840] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:19.907 [2024-10-11 11:50:22.222302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:19.907 [2024-10-11 11:50:22.222463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:19.907 [2024-10-11 11:50:22.222622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.907 [2024-10-11 11:50:22.222622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:20.480 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:20.480 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:13:20.480 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:20.480 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:20.480 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:20.480 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:20.480 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:20.480 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode2037 00:13:20.480 [2024-10-11 11:50:23.107268] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:20.480 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:20.480 { 00:13:20.480 "nqn": "nqn.2016-06.io.spdk:cnode2037", 00:13:20.480 "tgt_name": "foobar", 00:13:20.480 "method": "nvmf_create_subsystem", 00:13:20.480 "req_id": 1 00:13:20.480 } 00:13:20.480 Got JSON-RPC error response 00:13:20.480 response: 00:13:20.480 { 00:13:20.480 "code": -32603, 00:13:20.480 "message": "Unable to find target foobar" 00:13:20.480 }' 00:13:20.480 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:20.480 { 00:13:20.480 "nqn": "nqn.2016-06.io.spdk:cnode2037", 00:13:20.480 "tgt_name": "foobar", 00:13:20.480 "method": "nvmf_create_subsystem", 00:13:20.480 "req_id": 1 00:13:20.480 } 00:13:20.480 Got JSON-RPC error response 00:13:20.480 
response: 00:13:20.480 { 00:13:20.480 "code": -32603, 00:13:20.480 "message": "Unable to find target foobar" 00:13:20.480 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:20.480 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:20.480 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode4816 00:13:20.742 [2024-10-11 11:50:23.316171] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4816: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:20.742 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:20.742 { 00:13:20.742 "nqn": "nqn.2016-06.io.spdk:cnode4816", 00:13:20.742 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:20.742 "method": "nvmf_create_subsystem", 00:13:20.742 "req_id": 1 00:13:20.742 } 00:13:20.742 Got JSON-RPC error response 00:13:20.742 response: 00:13:20.742 { 00:13:20.742 "code": -32602, 00:13:20.742 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:20.742 }' 00:13:20.742 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:20.742 { 00:13:20.742 "nqn": "nqn.2016-06.io.spdk:cnode4816", 00:13:20.742 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:20.742 "method": "nvmf_create_subsystem", 00:13:20.742 "req_id": 1 00:13:20.742 } 00:13:20.742 Got JSON-RPC error response 00:13:20.742 response: 00:13:20.742 { 00:13:20.742 "code": -32602, 00:13:20.742 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:20.742 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:20.742 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:20.742 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode21077 00:13:21.004 [2024-10-11 11:50:23.524839] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21077: invalid model number 'SPDK_Controller' 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:21.004 { 00:13:21.004 "nqn": "nqn.2016-06.io.spdk:cnode21077", 00:13:21.004 "model_number": "SPDK_Controller\u001f", 00:13:21.004 "method": "nvmf_create_subsystem", 00:13:21.004 "req_id": 1 00:13:21.004 } 00:13:21.004 Got JSON-RPC error response 00:13:21.004 response: 00:13:21.004 { 00:13:21.004 "code": -32602, 00:13:21.004 "message": "Invalid MN SPDK_Controller\u001f" 00:13:21.004 }' 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:21.004 { 00:13:21.004 "nqn": "nqn.2016-06.io.spdk:cnode21077", 00:13:21.004 "model_number": "SPDK_Controller\u001f", 00:13:21.004 "method": "nvmf_create_subsystem", 00:13:21.004 "req_id": 1 00:13:21.004 } 00:13:21.004 Got JSON-RPC error response 00:13:21.004 response: 00:13:21.004 { 00:13:21.004 "code": -32602, 00:13:21.004 "message": "Invalid MN SPDK_Controller\u001f" 00:13:21.004 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:21.004 11:50:23 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.004 11:50:23 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:13:21.004 
11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.004 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.005 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:13:21.005 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:21.005 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:13:21.005 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.005 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.005 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:21.005 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:21.005 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:21.005 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.005 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.005 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:13:21.005 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:21.005 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:13:21.005 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.005 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.005 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:13:21.005 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:21.005 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:13:21.005 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.005 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.266 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:13:21.266 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:21.266 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:13:21.266 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.266 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.266 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:13:21.266 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 
00:13:21.266 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:13:21.266 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.266 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.266 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:21.266 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:21.266 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:21.266 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.266 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.266 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:13:21.266 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:21.266 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:13:21.266 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.266 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.266 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ] == \- ]] 00:13:21.266 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo ']C2(&v?A`42[[[;HVd~$<' 00:13:21.266 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s ']C2(&v?A`42[[[;HVd~$<' nqn.2016-06.io.spdk:cnode18377 00:13:21.266 [2024-10-11 11:50:23.906287] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18377: invalid serial number ']C2(&v?A`42[[[;HVd~$<' 00:13:21.266 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:21.266 { 00:13:21.266 "nqn": "nqn.2016-06.io.spdk:cnode18377", 00:13:21.266 "serial_number": "]C2(&v?A`42[[[;HVd~$<", 00:13:21.266 "method": "nvmf_create_subsystem", 00:13:21.266 "req_id": 1 00:13:21.266 } 00:13:21.266 Got JSON-RPC error response 00:13:21.266 response: 00:13:21.266 { 00:13:21.266 "code": -32602, 00:13:21.266 "message": "Invalid SN ]C2(&v?A`42[[[;HVd~$<" 00:13:21.266 }' 00:13:21.266 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:21.266 { 00:13:21.266 "nqn": "nqn.2016-06.io.spdk:cnode18377", 00:13:21.266 "serial_number": "]C2(&v?A`42[[[;HVd~$<", 00:13:21.266 "method": "nvmf_create_subsystem", 00:13:21.266 "req_id": 1 00:13:21.266 } 00:13:21.266 Got JSON-RPC error response 00:13:21.266 response: 00:13:21.266 { 00:13:21.266 "code": -32602, 00:13:21.266 "message": "Invalid SN ]C2(&v?A`42[[[;HVd~$<" 00:13:21.266 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:21.266 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:21.266 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:21.266 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' 
'73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:21.266 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:21.266 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:21.266 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:21.266 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.266 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:13:21.266 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:21.266 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:13:21.266 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.266 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.266 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:13:21.266 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:21.266 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:13:21.266 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.266 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.527 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:13:21.527 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:21.527 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:13:21.527 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.527 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.527 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:13:21.527 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:13:21.527 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:13:21.527 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.527 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.527 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:13:21.527 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:21.527 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:13:21.527 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.527 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.527 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:13:21.527 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 
00:13:21.527 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:13:21.527 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.527 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.527 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:21.527 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 
00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < 
length )) 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll++ )) 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:21.528 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:21.529 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.529 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.529 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:13:21.529 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:21.529 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
string+=A 00:13:21.529 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.529 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.529 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:13:21.529 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:21.529 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:13:21.529 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.529 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.529 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:21.529 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:21.529 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:21.529 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.529 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.529 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:13:21.529 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:21.529 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:13:21.529 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.529 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.789 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:13:21.789 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:21.789 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:13:21.789 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.789 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.789 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:21.789 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:21.789 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:21.789 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.789 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.789 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:21.789 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:21.789 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:21.789 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.789 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.789 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:21.789 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
echo -e '\x60' 00:13:21.789 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:21.789 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.789 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.789 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:13:21.789 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:21.789 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:13:21.789 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:21.789 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:21.789 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ P == \- ]] 00:13:21.789 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'P{)sQRwXI9FN~[}bzOhXID'\''CT2$A|vI%0y`(' 00:13:21.789 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'P{)sQRwXI9FN~[}bzOhXID'\''CT2$A|vI%0y`(' nqn.2016-06.io.spdk:cnode31653 00:13:21.789 [2024-10-11 11:50:24.424031] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31653: invalid model number 'P{)sQRwXI9FN~[}bzOhXID'CT2$A|vI%0y`(' 00:13:21.789 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:21.789 { 00:13:21.789 "nqn": "nqn.2016-06.io.spdk:cnode31653", 00:13:21.789 "model_number": "P{)sQRwXI9FN~[}bzOhXID'\''CT2$A|vI%0y`(", 00:13:21.789 "method": "nvmf_create_subsystem", 00:13:21.789 "req_id": 1 00:13:21.789 } 00:13:21.789 Got JSON-RPC error response 00:13:21.789 response: 00:13:21.789 { 00:13:21.789 "code": -32602, 00:13:21.789 "message": "Invalid MN P{)sQRwXI9FN~[}bzOhXID'\''CT2$A|vI%0y`(" 00:13:21.789 }' 00:13:21.789 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:21.789 { 00:13:21.789 "nqn": "nqn.2016-06.io.spdk:cnode31653", 00:13:21.789 "model_number": "P{)sQRwXI9FN~[}bzOhXID'CT2$A|vI%0y`(", 00:13:21.789 "method": "nvmf_create_subsystem", 00:13:21.789 "req_id": 1 00:13:21.789 } 00:13:21.789 Got JSON-RPC error response 00:13:21.789 response: 00:13:21.789 { 00:13:21.789 "code": -32602, 00:13:21.789 "message": "Invalid MN P{)sQRwXI9FN~[}bzOhXID'CT2$A|vI%0y`(" 00:13:21.789 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:21.789 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:22.050 [2024-10-11 11:50:24.608695] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:22.050 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:22.311 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:22.311 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:22.311 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:22.311 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # 
IP= 00:13:22.311 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:22.311 [2024-10-11 11:50:24.986904] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:22.572 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:22.572 { 00:13:22.572 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:22.572 "listen_address": { 00:13:22.572 "trtype": "tcp", 00:13:22.572 "traddr": "", 00:13:22.572 "trsvcid": "4421" 00:13:22.572 }, 00:13:22.572 "method": "nvmf_subsystem_remove_listener", 00:13:22.572 "req_id": 1 00:13:22.572 } 00:13:22.572 Got JSON-RPC error response 00:13:22.572 response: 00:13:22.572 { 00:13:22.572 "code": -32602, 00:13:22.572 "message": "Invalid parameters" 00:13:22.572 }' 00:13:22.572 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:22.572 { 00:13:22.572 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:22.572 "listen_address": { 00:13:22.572 "trtype": "tcp", 00:13:22.572 "traddr": "", 00:13:22.572 "trsvcid": "4421" 00:13:22.572 }, 00:13:22.572 "method": "nvmf_subsystem_remove_listener", 00:13:22.572 "req_id": 1 00:13:22.572 } 00:13:22.572 Got JSON-RPC error response 00:13:22.572 response: 00:13:22.572 { 00:13:22.572 "code": -32602, 00:13:22.572 "message": "Invalid parameters" 00:13:22.572 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:22.572 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18133 -i 0 00:13:22.572 [2024-10-11 11:50:25.175448] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18133: invalid cntlid range [0-65519] 00:13:22.572 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:22.572 { 00:13:22.572 "nqn": "nqn.2016-06.io.spdk:cnode18133", 00:13:22.572 "min_cntlid": 0, 00:13:22.572 "method": "nvmf_create_subsystem", 00:13:22.572 "req_id": 1 00:13:22.572 } 00:13:22.572 Got JSON-RPC error response 00:13:22.572 response: 00:13:22.572 { 00:13:22.572 "code": -32602, 00:13:22.572 "message": "Invalid cntlid range [0-65519]" 00:13:22.572 }' 00:13:22.572 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:22.572 { 00:13:22.572 "nqn": "nqn.2016-06.io.spdk:cnode18133", 00:13:22.572 "min_cntlid": 0, 00:13:22.572 "method": "nvmf_create_subsystem", 00:13:22.572 "req_id": 1 00:13:22.572 } 00:13:22.572 Got JSON-RPC error response 00:13:22.572 response: 00:13:22.572 { 00:13:22.572 "code": -32602, 00:13:22.572 "message": "Invalid cntlid range [0-65519]" 00:13:22.572 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:22.572 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24480 -i 65520 00:13:22.831 [2024-10-11 11:50:25.364121] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24480: invalid cntlid range [65520-65519] 00:13:22.831 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:22.831 { 00:13:22.831 "nqn": "nqn.2016-06.io.spdk:cnode24480", 00:13:22.831 "min_cntlid": 65520, 00:13:22.831 "method": 
"nvmf_create_subsystem", 00:13:22.831 "req_id": 1 00:13:22.831 } 00:13:22.831 Got JSON-RPC error response 00:13:22.831 response: 00:13:22.831 { 00:13:22.831 "code": -32602, 00:13:22.831 "message": "Invalid cntlid range [65520-65519]" 00:13:22.831 }' 00:13:22.831 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:22.831 { 00:13:22.831 "nqn": "nqn.2016-06.io.spdk:cnode24480", 00:13:22.831 "min_cntlid": 65520, 00:13:22.831 "method": "nvmf_create_subsystem", 00:13:22.831 "req_id": 1 00:13:22.831 } 00:13:22.831 Got JSON-RPC error response 00:13:22.831 response: 00:13:22.831 { 00:13:22.831 "code": -32602, 00:13:22.831 "message": "Invalid cntlid range [65520-65519]" 00:13:22.831 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:22.831 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14158 -I 0 00:13:23.092 [2024-10-11 11:50:25.552688] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14158: invalid cntlid range [1-0] 00:13:23.092 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:23.092 { 00:13:23.092 "nqn": "nqn.2016-06.io.spdk:cnode14158", 00:13:23.092 "max_cntlid": 0, 00:13:23.092 "method": "nvmf_create_subsystem", 00:13:23.092 "req_id": 1 00:13:23.092 } 00:13:23.092 Got JSON-RPC error response 00:13:23.092 response: 00:13:23.092 { 00:13:23.092 "code": -32602, 00:13:23.092 "message": "Invalid cntlid range [1-0]" 00:13:23.092 }' 00:13:23.092 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:23.092 { 00:13:23.092 "nqn": "nqn.2016-06.io.spdk:cnode14158", 00:13:23.092 "max_cntlid": 0, 00:13:23.092 "method": "nvmf_create_subsystem", 00:13:23.092 "req_id": 1 00:13:23.092 } 00:13:23.092 Got JSON-RPC error response 00:13:23.092 response: 00:13:23.092 { 00:13:23.092 "code": -32602, 00:13:23.092 "message": "Invalid cntlid range [1-0]" 00:13:23.092 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:23.092 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30331 -I 65520 00:13:23.092 [2024-10-11 11:50:25.737296] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30331: invalid cntlid range [1-65520] 00:13:23.092 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:23.092 { 00:13:23.092 "nqn": "nqn.2016-06.io.spdk:cnode30331", 00:13:23.092 "max_cntlid": 65520, 00:13:23.092 "method": "nvmf_create_subsystem", 00:13:23.092 "req_id": 1 00:13:23.092 } 00:13:23.092 Got JSON-RPC error response 00:13:23.092 response: 00:13:23.092 { 00:13:23.092 "code": -32602, 00:13:23.092 "message": "Invalid cntlid range [1-65520]" 00:13:23.092 }' 00:13:23.092 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:23.092 { 00:13:23.092 "nqn": "nqn.2016-06.io.spdk:cnode30331", 00:13:23.092 "max_cntlid": 65520, 00:13:23.092 "method": "nvmf_create_subsystem", 00:13:23.092 "req_id": 1 00:13:23.092 } 00:13:23.092 Got JSON-RPC error response 00:13:23.092 response: 00:13:23.092 { 00:13:23.092 "code": -32602, 00:13:23.092 "message": "Invalid cntlid range [1-65520]" 00:13:23.092 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:23.092 11:50:25 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23334 -i 6 -I 5 00:13:23.369 [2024-10-11 11:50:25.925909] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23334: invalid cntlid range [6-5] 00:13:23.369 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:23.369 { 00:13:23.369 "nqn": "nqn.2016-06.io.spdk:cnode23334", 00:13:23.369 "min_cntlid": 6, 00:13:23.369 "max_cntlid": 5, 00:13:23.369 "method": "nvmf_create_subsystem", 00:13:23.369 "req_id": 1 00:13:23.369 } 00:13:23.369 Got JSON-RPC error response 00:13:23.369 response: 00:13:23.369 { 00:13:23.369 "code": -32602, 00:13:23.369 "message": "Invalid cntlid range [6-5]" 00:13:23.369 }' 00:13:23.369 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:23.369 { 00:13:23.369 "nqn": "nqn.2016-06.io.spdk:cnode23334", 00:13:23.369 "min_cntlid": 6, 00:13:23.369 "max_cntlid": 5, 00:13:23.369 "method": "nvmf_create_subsystem", 00:13:23.369 "req_id": 1 00:13:23.369 } 00:13:23.369 Got JSON-RPC error response 00:13:23.369 response: 00:13:23.369 { 00:13:23.369 "code": -32602, 00:13:23.369 "message": "Invalid cntlid range [6-5]" 00:13:23.369 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:23.369 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:23.369 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:23.369 { 00:13:23.369 "name": "foobar", 00:13:23.369 "method": "nvmf_delete_target", 00:13:23.369 "req_id": 1 00:13:23.369 } 00:13:23.369 Got JSON-RPC error response 00:13:23.369 response: 00:13:23.369 { 00:13:23.369 "code": -32602, 00:13:23.369 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:23.369 }' 00:13:23.369 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:23.369 { 00:13:23.369 "name": "foobar", 00:13:23.369 "method": "nvmf_delete_target", 00:13:23.369 "req_id": 1 00:13:23.369 } 00:13:23.369 Got JSON-RPC error response 00:13:23.369 response: 00:13:23.369 { 00:13:23.369 "code": -32602, 00:13:23.369 "message": "The specified target doesn't exist, cannot delete it." 
00:13:23.369 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:23.369 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:23.369 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:23.369 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:23.369 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:13:23.630 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:23.630 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:13:23.630 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:23.630 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:23.630 rmmod nvme_tcp 00:13:23.630 rmmod nvme_fabrics 00:13:23.630 rmmod nvme_keyring 00:13:23.630 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:23.630 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:13:23.630 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:13:23.630 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@515 -- # '[' -n 1851993 ']' 00:13:23.630 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # killprocess 1851993 00:13:23.630 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 1851993 ']' 00:13:23.630 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 1851993 00:13:23.630 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:13:23.630 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:23.630 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1851993 00:13:23.630 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:23.630 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:23.630 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1851993' 00:13:23.630 killing process with pid 1851993 00:13:23.630 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 1851993 00:13:23.630 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 1851993 00:13:23.630 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:23.630 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:23.630 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:23.630 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:13:23.630 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # iptables-save 00:13:23.630 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:23.630 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 
-- # iptables-restore 00:13:23.630 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:23.630 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:23.630 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.630 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:23.630 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.176 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:26.176 00:13:26.176 real 0m14.373s 00:13:26.176 user 0m21.088s 00:13:26.176 sys 0m6.826s 00:13:26.176 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:26.176 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:26.176 ************************************ 00:13:26.176 END TEST nvmf_invalid 00:13:26.176 ************************************ 00:13:26.176 11:50:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:26.176 11:50:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:26.176 11:50:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:26.176 11:50:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:26.176 ************************************ 00:13:26.176 START TEST nvmf_connect_stress 00:13:26.176 ************************************ 00:13:26.176 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:26.176 * Looking for test storage... 
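The nvmf_invalid run that closes out above ends with two negative-path RPC checks: nvmf_create_subsystem is invoked with a minimum controller ID larger than the maximum (-i 6 -I 5, i.e. min_cntlid=6, max_cntlid=5) and must come back with the JSON-RPC error "Invalid cntlid range [6-5]", and nvmf_delete_target is asked to remove a target named foobar that was never created. A minimal standalone sketch of the first check, assuming a target application is already serving the default RPC socket and using the in-tree rpc.py path seen in this run (the error handling below is illustrative, not the exact invalid.sh code):

# Sketch: reproduce the invalid-cntlid-range check seen at target/invalid.sh@83.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# min_cntlid (-i) deliberately greater than max_cntlid (-I); the call must fail.
out=$("$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23334 -i 6 -I 5 2>&1) || true
if [[ $out == *"Invalid cntlid range"* ]]; then
    echo "negative test passed"
else
    echo "unexpected RPC response: $out" >&2
    exit 1
fi

The nvmf_delete_target check right after it follows the same pattern, only matching on "The specified target doesn't exist, cannot delete it." instead.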
00:13:26.176 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:26.176 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:26.176 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:13:26.176 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:26.176 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:26.176 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:26.176 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:26.176 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:26.176 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:26.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.177 --rc genhtml_branch_coverage=1 00:13:26.177 --rc genhtml_function_coverage=1 00:13:26.177 --rc genhtml_legend=1 00:13:26.177 --rc geninfo_all_blocks=1 00:13:26.177 --rc geninfo_unexecuted_blocks=1 00:13:26.177 00:13:26.177 ' 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:26.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.177 --rc genhtml_branch_coverage=1 00:13:26.177 --rc genhtml_function_coverage=1 00:13:26.177 --rc genhtml_legend=1 00:13:26.177 --rc geninfo_all_blocks=1 00:13:26.177 --rc geninfo_unexecuted_blocks=1 00:13:26.177 00:13:26.177 ' 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:26.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.177 --rc genhtml_branch_coverage=1 00:13:26.177 --rc genhtml_function_coverage=1 00:13:26.177 --rc genhtml_legend=1 00:13:26.177 --rc geninfo_all_blocks=1 00:13:26.177 --rc geninfo_unexecuted_blocks=1 00:13:26.177 00:13:26.177 ' 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:26.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.177 --rc genhtml_branch_coverage=1 00:13:26.177 --rc genhtml_function_coverage=1 00:13:26.177 --rc genhtml_legend=1 00:13:26.177 --rc geninfo_all_blocks=1 00:13:26.177 --rc geninfo_unexecuted_blocks=1 00:13:26.177 00:13:26.177 ' 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:26.177 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:26.177 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:26.178 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.178 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:26.178 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.178 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:26.178 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:26.178 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:13:26.178 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:34.322 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:34.322 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:13:34.322 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:34.322 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:34.322 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:34.322 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:34.322 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:34.322 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:13:34.322 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:34.322 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:13:34.322 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:13:34.322 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:13:34.322 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:13:34.322 11:50:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:13:34.322 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:13:34.322 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:34.322 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:34.322 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:34.322 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:34.322 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:34.322 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:34.322 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:34.322 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:34.322 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:34.322 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:34.322 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:34.322 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:34.322 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:34.322 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:34.322 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:34.322 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:34.322 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:34.322 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:34.322 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:34.322 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:34.322 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:34.322 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:34.322 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:34.323 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:34.323 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:34.323 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:34.323 Found net devices under 0000:31:00.0: cvl_0_0 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:34.323 Found net devices under 0000:31:00.1: cvl_0_1 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:34.323 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:34.323 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.679 ms 00:13:34.323 00:13:34.323 --- 10.0.0.2 ping statistics --- 00:13:34.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.323 rtt min/avg/max/mdev = 0.679/0.679/0.679/0.000 ms 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:34.323 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:34.323 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:13:34.323 00:13:34.323 --- 10.0.0.1 ping statistics --- 00:13:34.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.323 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # return 0 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # nvmfpid=1857244 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # waitforlisten 1857244 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 1857244 ']' 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:34.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:34.323 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:34.323 [2024-10-11 11:50:36.430561] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:13:34.323 [2024-10-11 11:50:36.430627] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:34.323 [2024-10-11 11:50:36.522491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:34.323 [2024-10-11 11:50:36.574525] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:34.323 [2024-10-11 11:50:36.574578] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:34.323 [2024-10-11 11:50:36.574586] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:34.323 [2024-10-11 11:50:36.574593] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:34.323 [2024-10-11 11:50:36.574599] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:34.323 [2024-10-11 11:50:36.576529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:34.323 [2024-10-11 11:50:36.576689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:34.323 [2024-10-11 11:50:36.576689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:34.586 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:34.586 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:13:34.586 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:34.586 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:34.586 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:34.848 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:34.848 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:34.848 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.848 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:34.848 [2024-10-11 11:50:37.307523] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:34.848 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.848 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:34.848 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 
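The trace from nvmftestinit up to this point records the standard TCP bring-up these target tests use: the two e810 ports are detected as cvl_0_0 and cvl_0_1, the first is moved into a fresh network namespace to play the target side, both sides get 10.0.0.x/24 addresses, an iptables ACCEPT rule for port 4420 is inserted, connectivity is verified with ping in both directions, and nvmf_tgt is started inside the namespace with core mask 0xE (cores 1-3, matching the three reactors reported above). Condensed into plain commands, with interface names, addresses, and paths taken from this run rather than being a general recipe, the sequence is roughly:

# Target-side namespace and addressing (sketch of what nvmf/common.sh does here).
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Let NVMe/TCP traffic reach port 4420 and confirm reachability both ways.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# Start the target application inside the namespace (backgrounded here for illustration).
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

The rpc_cmd calls in this stretch of the trace (nvmf_create_transport -t tcp -o -u 8192, nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1, nvmf_subsystem_add_listener on 10.0.0.2 port 4420, and bdev_null_create NULL1 1000 512) then build the null-bdev subsystem that connect_stress exercises.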
00:13:34.848 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:34.848 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.848 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:34.848 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.848 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:34.848 [2024-10-11 11:50:37.333359] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:34.848 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.848 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:34.848 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.849 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:34.849 NULL1 00:13:34.849 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.849 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1857362 00:13:34.849 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:34.849 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:34.849 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:34.849 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:34.849 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:34.849 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:34.849 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:34.849 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:34.849 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:34.849 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:34.849 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:34.849 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:34.849 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:34.849 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:34.849 11:50:37 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:34.849 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:34.849 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:34.849 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:34.849 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:34.849 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:34.849 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:34.849 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:34.849 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:34.849 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:34.849 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:34.849 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:34.849 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:34.849 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:34.849 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:34.849 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:34.849 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:34.849 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:34.849 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:34.849 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:34.849 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:34.849 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:34.849 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:34.849 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:34.849 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:34.849 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:34.849 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:34.849 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:34.849 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:34.849 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:34.849 11:50:37 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1857362 00:13:34.849 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:34.849 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.849 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:35.110 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.110 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1857362 00:13:35.110 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:35.110 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.110 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:35.683 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.683 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1857362 00:13:35.683 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:35.683 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.683 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:35.944 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.944 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1857362 00:13:35.944 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:35.944 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.944 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:36.205 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.205 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1857362 00:13:36.205 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:36.205 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.205 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:36.466 11:50:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.466 11:50:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1857362 00:13:36.466 11:50:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:36.466 11:50:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.466 11:50:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:36.726 11:50:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.726 11:50:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1857362 00:13:36.726 11:50:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:36.726 11:50:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.726 11:50:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:37.297 11:50:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.297 11:50:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1857362 00:13:37.297 11:50:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:37.297 11:50:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.297 11:50:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:37.557 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.557 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1857362 00:13:37.557 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:37.557 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.557 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:37.817 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.817 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1857362 00:13:37.817 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:37.817 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.817 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:38.078 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.078 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1857362 00:13:38.078 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:38.078 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.078 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:38.338 11:50:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.338 11:50:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1857362 00:13:38.338 11:50:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:38.339 11:50:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.339 11:50:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:38.909 11:50:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.909 11:50:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1857362 00:13:38.909 11:50:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:38.909 11:50:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.909 11:50:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:39.169 11:50:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.169 11:50:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1857362 00:13:39.169 11:50:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:39.169 11:50:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.169 11:50:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:39.429 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.429 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1857362 00:13:39.429 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:39.429 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.429 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:39.690 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.690 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1857362 00:13:39.690 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:39.690 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.690 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.261 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.261 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1857362 00:13:40.261 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:40.261 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.261 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.521 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.521 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1857362 00:13:40.521 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:40.521 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.521 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.782 11:50:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.782 11:50:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1857362 00:13:40.782 11:50:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:40.782 11:50:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.782 11:50:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.041 11:50:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.041 11:50:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1857362 00:13:41.041 11:50:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:41.041 11:50:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.041 11:50:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.301 11:50:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.301 11:50:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1857362 00:13:41.301 11:50:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:41.301 11:50:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.301 11:50:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.871 11:50:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.871 11:50:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1857362 00:13:41.871 11:50:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:41.871 11:50:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.871 11:50:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.132 11:50:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.132 11:50:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1857362 00:13:42.132 11:50:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.132 11:50:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.132 11:50:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.392 11:50:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.392 11:50:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1857362 00:13:42.392 11:50:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.392 11:50:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.392 11:50:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.651 11:50:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.651 11:50:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1857362 00:13:42.651 11:50:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.651 11:50:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.651 11:50:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.912 11:50:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.912 11:50:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1857362 00:13:42.912 11:50:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.912 11:50:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.912 11:50:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.482 11:50:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.482 11:50:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1857362 00:13:43.482 11:50:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:43.482 11:50:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.482 11:50:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.742 11:50:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.742 11:50:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1857362 00:13:43.742 11:50:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:43.742 11:50:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.742 11:50:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.002 11:50:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.002 11:50:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1857362 00:13:44.002 11:50:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.002 11:50:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.002 11:50:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.262 11:50:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.262 11:50:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1857362 00:13:44.262 11:50:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.262 11:50:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.262 11:50:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.522 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.522 11:50:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1857362 00:13:44.522 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.522 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.522 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.093 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.093 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1857362 00:13:45.093 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.093 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.093 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.093 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:45.354 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.354 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1857362 00:13:45.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1857362) - No such process 00:13:45.354 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1857362 00:13:45.354 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:45.354 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:45.354 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:45.354 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:45.354 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:45.354 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:45.354 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:45.354 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:45.354 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:45.354 rmmod nvme_tcp 00:13:45.354 rmmod nvme_fabrics 00:13:45.354 rmmod nvme_keyring 00:13:45.354 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:45.354 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:45.354 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:13:45.354 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@515 -- # '[' -n 1857244 ']' 00:13:45.354 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # killprocess 1857244 00:13:45.354 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 1857244 ']' 00:13:45.354 11:50:47 
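The block above is target/connect_stress.sh polling the stress tool's PID with kill -0 between rpc_cmd invocations until the process exits ("No such process"), then reaping it with wait and removing its rpc.txt batch file. A hedged sketch of that pattern only; the PID is taken from this run, the rpc_cmd arguments are elided in the trace, and $testdir stands in for the full target/ path:

# poll until the background stress tool (PID 1857362 in this run) goes away
stress_pid=1857362
while kill -0 "$stress_pid" 2>/dev/null; do
    rpc_cmd >/dev/null 2>&1 || true   # autotest helper around scripts/rpc.py; its arguments are not shown in the trace
done
wait "$stress_pid" 2>/dev/null        # reap the exit status once kill -0 starts failing
rm -f "$testdir/rpc.txt"              # $testdir is assumed; the log shows .../spdk/test/nvmf/target
trap - SIGINT SIGTERM EXIT            # drop the cleanup trap, as connect_stress.sh@41 does
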
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 1857244 00:13:45.354 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:13:45.354 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:45.354 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1857244 00:13:45.354 11:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:45.354 11:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:45.354 11:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1857244' 00:13:45.354 killing process with pid 1857244 00:13:45.354 11:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 1857244 00:13:45.354 11:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 1857244 00:13:45.615 11:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:45.615 11:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:45.615 11:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:45.615 11:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:13:45.615 11:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-save 00:13:45.615 11:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:45.615 11:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-restore 00:13:45.615 11:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:45.615 11:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:45.615 11:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:45.615 11:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:45.615 11:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:47.528 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:47.528 00:13:47.528 real 0m21.714s 00:13:47.528 user 0m43.355s 00:13:47.528 sys 0m9.376s 00:13:47.528 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:47.528 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.528 ************************************ 00:13:47.528 END TEST nvmf_connect_stress 00:13:47.528 ************************************ 00:13:47.789 11:50:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:47.789 11:50:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:47.789 
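nvmftestfini above kills the nvmf_tgt process (pid 1857244 in this run), unloads the nvme-tcp/nvme-fabrics modules, and undoes the firewall and namespace changes; the iptr helper restores iptables minus every rule tagged with the SPDK_NVMF comment. A condensed sketch of that teardown, noting that the ip netns delete line is an assumption since the trace hides _remove_spdk_ns's body:

set +e
modprobe -v -r nvme-tcp        # may fail if something still holds the modules; hence the set +e window
modprobe -v -r nvme-fabrics
set -e

# drop every firewall rule the test added: they are all tagged with an "SPDK_NVMF:..." comment
iptables-save | grep -v SPDK_NVMF | iptables-restore

# remove the target-side namespace and clear the initiator address
ip netns delete cvl_0_0_ns_spdk 2>/dev/null    # assumed: what _remove_spdk_ns boils down to; its body is not shown in the trace
ip -4 addr flush cvl_0_1
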
11:50:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:47.789 11:50:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:47.789 ************************************ 00:13:47.789 START TEST nvmf_fused_ordering 00:13:47.789 ************************************ 00:13:47.789 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:47.789 * Looking for test storage... 00:13:47.789 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:47.789 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:47.789 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version 00:13:47.789 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:47.789 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:47.789 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:47.789 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:47.789 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:47.789 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:47.789 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:47.789 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:47.789 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:47.789 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:13:47.789 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:47.789 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:47.789 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:47.789 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:47.789 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:47.789 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:47.790 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:47.790 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:47.790 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:47.790 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:47.790 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:47.790 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:47.790 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:47.790 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:47.790 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:47.790 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:47.790 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:47.790 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:47.790 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:47.790 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:47.790 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:47.790 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:47.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.790 --rc genhtml_branch_coverage=1 00:13:47.790 --rc genhtml_function_coverage=1 00:13:47.790 --rc genhtml_legend=1 00:13:47.790 --rc geninfo_all_blocks=1 00:13:47.790 --rc geninfo_unexecuted_blocks=1 00:13:47.790 00:13:47.790 ' 00:13:47.790 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:47.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.790 --rc genhtml_branch_coverage=1 00:13:47.790 --rc genhtml_function_coverage=1 00:13:47.790 --rc genhtml_legend=1 00:13:47.790 --rc geninfo_all_blocks=1 00:13:47.790 --rc geninfo_unexecuted_blocks=1 00:13:47.790 00:13:47.790 ' 00:13:47.790 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:47.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.790 --rc genhtml_branch_coverage=1 00:13:47.790 --rc genhtml_function_coverage=1 00:13:47.790 --rc genhtml_legend=1 00:13:47.790 --rc geninfo_all_blocks=1 00:13:47.790 --rc geninfo_unexecuted_blocks=1 00:13:47.790 00:13:47.790 ' 00:13:47.790 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:47.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.790 --rc genhtml_branch_coverage=1 00:13:47.790 --rc genhtml_function_coverage=1 00:13:47.790 --rc genhtml_legend=1 00:13:47.790 --rc geninfo_all_blocks=1 00:13:47.790 --rc geninfo_unexecuted_blocks=1 00:13:47.790 00:13:47.790 ' 00:13:47.790 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
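The trace above is scripts/common.sh comparing the detected lcov version (1.15) against 2 component by component before enabling the extra --rc coverage options. A stripped-down sketch of that dotted-version comparison, not the exact helper (which also runs each part through its decimal function to handle non-numeric pieces):

# returns 0 (true) when version $1 is strictly older than version $2
version_lt() {
    local -a a b
    IFS='.-:' read -ra a <<< "$1"
    IFS='.-:' read -ra b <<< "$2"
    local i x y len=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < len; i++ )); do
        x=${a[i]:-0}; y=${b[i]:-0}       # missing components count as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1                             # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov older than 2: keep the --rc branch/function coverage flags"
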
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:47.790 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:47.790 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:47.790 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:47.790 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:47.790 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:47.790 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:47.790 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:47.790 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:47.790 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:47.790 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:47.790 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:48.052 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:48.052 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:48.052 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:48.052 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:48.052 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:48.052 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:48.052 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:48.052 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:48.052 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:48.052 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:48.052 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:48.052 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.052 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.052 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.052 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:48.052 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.052 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:13:48.052 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:48.052 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:48.052 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:48.052 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:48.052 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:48.052 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:48.052 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:48.052 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:48.052 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:48.052 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:48.052 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:48.052 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:48.052 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:48.052 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:48.052 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:48.052 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:48.052 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.052 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:48.052 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.052 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:48.052 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:48.052 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:13:48.052 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:13:56.195 11:50:57 
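The "[: : integer expression expected" complaint above is nvmf/common.sh line 33 feeding an empty string to a numeric test ('[' '' -eq 1 ']'); the script tolerates it, but the usual fixes are to default the value or to check for emptiness first. A tiny illustration with a made-up variable name:

flag=""                                   # hypothetical variable standing in for the empty value in the log
[ "$flag" -eq 1 ] && echo enabled         # reproduces the warning seen above and evaluates false
# two common ways to keep the check quiet:
[ "${flag:-0}" -eq 1 ] && echo enabled    # default to 0 so the operand is always an integer
[ -n "$flag" ] && [ "$flag" -eq 1 ] && echo enabled   # or skip the numeric test when the value is empty
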
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:56.195 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:56.195 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:56.195 Found net devices under 0000:31:00.0: cvl_0_0 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:56.195 Found net devices under 0000:31:00.1: cvl_0_1 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 
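The "Found net devices under 0000:31:00.x" lines come from globbing sysfs for the network interfaces bound to each detected E810 function and keeping the ones whose link is up. A standalone sketch of that lookup, with the PCI addresses copied from the log; the real common.sh caches its PCI scan and its exact "up" test may differ, so operstate is used here as a stand-in:

for pci in 0000:31:00.0 0000:31:00.1; do
    for path in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$path" ] || continue                        # no netdev bound to this function
        dev=${path##*/}
        if [ "$(cat /sys/class/net/"$dev"/operstate 2>/dev/null)" = up ]; then
            echo "Found net devices under $pci: $dev"
        fi
    done
done
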
-- # net_devs+=("${pci_net_devs[@]}") 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # is_hw=yes 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:56.195 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:56.195 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:56.195 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:56.195 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:56.195 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:56.195 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:56.195 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:56.195 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:56.195 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:56.195 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:56.195 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.621 ms 00:13:56.196 00:13:56.196 --- 10.0.0.2 ping statistics --- 00:13:56.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.196 rtt min/avg/max/mdev = 0.621/0.621/0.621/0.000 ms 00:13:56.196 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:56.196 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:56.196 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:13:56.196 00:13:56.196 --- 10.0.0.1 ping statistics --- 00:13:56.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.196 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:13:56.196 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:56.196 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # return 0 00:13:56.196 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:56.196 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:56.196 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:56.196 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:56.196 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:56.196 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:56.196 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:56.196 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:56.196 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:56.196 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:56.196 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:56.196 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # nvmfpid=1863719 00:13:56.196 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # waitforlisten 1863719 00:13:56.196 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:56.196 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 1863719 ']' 00:13:56.196 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.196 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:56.196 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
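nvmf_tcp_init above splits the two-port NIC into a target side and an initiator side: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, an iptables rule admits TCP port 4420 traffic on the initiator interface, and a ping in each direction confirms the path. Condensed from the commands in the trace (run as root, same names and addresses as the log):

NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0
INI_IF=cvl_0_1
TGT_IP=10.0.0.2
INI_IP=10.0.0.1

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                # target port disappears from the root namespace
ip addr add "$INI_IP/24" dev "$INI_IF"
ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
# firewall exception for NVMe/TCP traffic on port 4420, as the ipts helper inserts it (tagged with an SPDK_NVMF comment)
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 "$TGT_IP"                              # root namespace -> target namespace
ip netns exec "$NS" ping -c 1 "$INI_IP"          # target namespace -> root namespace
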
/var/tmp/spdk.sock...' 00:13:56.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:56.196 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:56.196 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:56.196 [2024-10-11 11:50:58.276408] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:13:56.196 [2024-10-11 11:50:58.276475] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:56.196 [2024-10-11 11:50:58.367434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.196 [2024-10-11 11:50:58.418542] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:56.196 [2024-10-11 11:50:58.418588] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:56.196 [2024-10-11 11:50:58.418597] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:56.196 [2024-10-11 11:50:58.418605] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:56.196 [2024-10-11 11:50:58.418611] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:56.196 [2024-10-11 11:50:58.419408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:56.457 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:56.457 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:13:56.457 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:56.457 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:56.457 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:56.457 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:56.457 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:56.457 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.457 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:56.457 [2024-10-11 11:50:59.140583] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:56.457 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.457 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:56.457 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.457 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:56.457 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:13:56.457 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:56.457 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.457 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:56.718 [2024-10-11 11:50:59.164839] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:56.718 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.718 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:56.718 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.718 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:56.718 NULL1 00:13:56.718 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.718 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:56.718 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.718 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:56.718 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.718 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:56.718 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.718 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:56.718 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.718 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:56.718 [2024-10-11 11:50:59.233703] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
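Stripped of the xtrace noise, fused_ordering.sh builds the target with a short RPC sequence and then points the fused_ordering example at it: create the TCP transport, create subsystem nqn.2016-06.io.spdk:cnode1, add a 10.0.0.2:4420 listener, back it with a 1000 MiB null bdev, and attach that bdev as a namespace. Roughly equivalent standalone commands, calling scripts/rpc.py directly instead of the rpc_cmd wrapper and assuming the default /var/tmp/spdk.sock RPC socket the log shows:

RPC=./scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

$RPC nvmf_create_transport -t tcp -o -u 8192          # same transport options the script passes in the trace
$RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10   # allow any host, serial number, max 10 namespaces
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512                  # 1000 MiB null bdev, 512-byte blocks ("size: 1GB" in the log)
$RPC bdev_wait_for_examine
$RPC nvmf_subsystem_add_ns "$NQN" NULL1

# then drive the subsystem with the fused-ordering example from the spdk tree:
./test/nvme/fused_ordering/fused_ordering \
    -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:$NQN"
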
00:13:56.718 [2024-10-11 11:50:59.233757] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1864048 ] 00:13:56.979 Attached to nqn.2016-06.io.spdk:cnode1 00:13:56.979 Namespace ID: 1 size: 1GB 00:13:56.979 fused_ordering(0) 00:13:56.979 fused_ordering(1) 00:13:56.979 fused_ordering(2) 00:13:56.979 fused_ordering(3) 00:13:56.979 fused_ordering(4) 00:13:56.979 fused_ordering(5) 00:13:56.979 fused_ordering(6) 00:13:56.979 fused_ordering(7) 00:13:56.979 fused_ordering(8) 00:13:56.979 fused_ordering(9) 00:13:56.979 fused_ordering(10) 00:13:56.979 fused_ordering(11) 00:13:56.979 fused_ordering(12) 00:13:56.979 fused_ordering(13) 00:13:56.979 fused_ordering(14) 00:13:56.979 fused_ordering(15) 00:13:56.979 fused_ordering(16) 00:13:56.979 fused_ordering(17) 00:13:56.979 fused_ordering(18) 00:13:56.979 fused_ordering(19) 00:13:56.979 fused_ordering(20) 00:13:56.979 fused_ordering(21) 00:13:56.979 fused_ordering(22) 00:13:56.979 fused_ordering(23) 00:13:56.979 fused_ordering(24) 00:13:56.979 fused_ordering(25) 00:13:56.979 fused_ordering(26) 00:13:56.979 fused_ordering(27) 00:13:56.979 fused_ordering(28) 00:13:56.979 fused_ordering(29) 00:13:56.979 fused_ordering(30) 00:13:56.979 fused_ordering(31) 00:13:56.979 fused_ordering(32) 00:13:56.979 fused_ordering(33) 00:13:56.979 fused_ordering(34) 00:13:56.979 fused_ordering(35) 00:13:56.979 fused_ordering(36) 00:13:56.979 fused_ordering(37) 00:13:56.979 fused_ordering(38) 00:13:56.979 fused_ordering(39) 00:13:56.979 fused_ordering(40) 00:13:56.979 fused_ordering(41) 00:13:56.979 fused_ordering(42) 00:13:56.979 fused_ordering(43) 00:13:56.979 fused_ordering(44) 00:13:56.979 fused_ordering(45) 00:13:56.979 fused_ordering(46) 00:13:56.979 fused_ordering(47) 00:13:56.979 fused_ordering(48) 00:13:56.979 fused_ordering(49) 00:13:56.979 fused_ordering(50) 00:13:56.979 fused_ordering(51) 00:13:56.979 fused_ordering(52) 00:13:56.979 fused_ordering(53) 00:13:56.979 fused_ordering(54) 00:13:56.979 fused_ordering(55) 00:13:56.979 fused_ordering(56) 00:13:56.979 fused_ordering(57) 00:13:56.979 fused_ordering(58) 00:13:56.979 fused_ordering(59) 00:13:56.979 fused_ordering(60) 00:13:56.979 fused_ordering(61) 00:13:56.979 fused_ordering(62) 00:13:56.979 fused_ordering(63) 00:13:56.979 fused_ordering(64) 00:13:56.979 fused_ordering(65) 00:13:56.979 fused_ordering(66) 00:13:56.979 fused_ordering(67) 00:13:56.979 fused_ordering(68) 00:13:56.979 fused_ordering(69) 00:13:56.979 fused_ordering(70) 00:13:56.979 fused_ordering(71) 00:13:56.979 fused_ordering(72) 00:13:56.979 fused_ordering(73) 00:13:56.979 fused_ordering(74) 00:13:56.979 fused_ordering(75) 00:13:56.979 fused_ordering(76) 00:13:56.979 fused_ordering(77) 00:13:56.979 fused_ordering(78) 00:13:56.979 fused_ordering(79) 00:13:56.979 fused_ordering(80) 00:13:56.979 fused_ordering(81) 00:13:56.979 fused_ordering(82) 00:13:56.979 fused_ordering(83) 00:13:56.979 fused_ordering(84) 00:13:56.979 fused_ordering(85) 00:13:56.979 fused_ordering(86) 00:13:56.979 fused_ordering(87) 00:13:56.979 fused_ordering(88) 00:13:56.979 fused_ordering(89) 00:13:56.979 fused_ordering(90) 00:13:56.979 fused_ordering(91) 00:13:56.979 fused_ordering(92) 00:13:56.979 fused_ordering(93) 00:13:56.979 fused_ordering(94) 00:13:56.979 fused_ordering(95) 00:13:56.979 fused_ordering(96) 00:13:56.979 fused_ordering(97) 00:13:56.979 fused_ordering(98) 
00:13:56.979 fused_ordering(99) ... fused_ordering(958) 00:13:58.959 [repetitive per-index fused_ordering output: every index from 99 through 958 was reported in order between 00:13:56.979 and 00:13:58.959]
00:13:58.959 fused_ordering(959) 00:13:58.959 fused_ordering(960) 00:13:58.959 fused_ordering(961) 00:13:58.959 fused_ordering(962) 00:13:58.959 fused_ordering(963) 00:13:58.959 fused_ordering(964) 00:13:58.959 fused_ordering(965) 00:13:58.959 fused_ordering(966) 00:13:58.960 fused_ordering(967) 00:13:58.960 fused_ordering(968) 00:13:58.960 fused_ordering(969) 00:13:58.960 fused_ordering(970) 00:13:58.960 fused_ordering(971) 00:13:58.960 fused_ordering(972) 00:13:58.960 fused_ordering(973) 00:13:58.960 fused_ordering(974) 00:13:58.960 fused_ordering(975) 00:13:58.960 fused_ordering(976) 00:13:58.960 fused_ordering(977) 00:13:58.960 fused_ordering(978) 00:13:58.960 fused_ordering(979) 00:13:58.960 fused_ordering(980) 00:13:58.960 fused_ordering(981) 00:13:58.960 fused_ordering(982) 00:13:58.960 fused_ordering(983) 00:13:58.960 fused_ordering(984) 00:13:58.960 fused_ordering(985) 00:13:58.960 fused_ordering(986) 00:13:58.960 fused_ordering(987) 00:13:58.960 fused_ordering(988) 00:13:58.960 fused_ordering(989) 00:13:58.960 fused_ordering(990) 00:13:58.960 fused_ordering(991) 00:13:58.960 fused_ordering(992) 00:13:58.960 fused_ordering(993) 00:13:58.960 fused_ordering(994) 00:13:58.960 fused_ordering(995) 00:13:58.960 fused_ordering(996) 00:13:58.960 fused_ordering(997) 00:13:58.960 fused_ordering(998) 00:13:58.960 fused_ordering(999) 00:13:58.960 fused_ordering(1000) 00:13:58.960 fused_ordering(1001) 00:13:58.960 fused_ordering(1002) 00:13:58.960 fused_ordering(1003) 00:13:58.960 fused_ordering(1004) 00:13:58.960 fused_ordering(1005) 00:13:58.960 fused_ordering(1006) 00:13:58.960 fused_ordering(1007) 00:13:58.960 fused_ordering(1008) 00:13:58.960 fused_ordering(1009) 00:13:58.960 fused_ordering(1010) 00:13:58.960 fused_ordering(1011) 00:13:58.960 fused_ordering(1012) 00:13:58.960 fused_ordering(1013) 00:13:58.960 fused_ordering(1014) 00:13:58.960 fused_ordering(1015) 00:13:58.960 fused_ordering(1016) 00:13:58.960 fused_ordering(1017) 00:13:58.960 fused_ordering(1018) 00:13:58.960 fused_ordering(1019) 00:13:58.960 fused_ordering(1020) 00:13:58.960 fused_ordering(1021) 00:13:58.960 fused_ordering(1022) 00:13:58.960 fused_ordering(1023) 00:13:58.960 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:58.960 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:58.960 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:58.960 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:13:58.960 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:58.960 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:13:58.960 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:58.960 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:58.960 rmmod nvme_tcp 00:13:58.960 rmmod nvme_fabrics 00:13:58.960 rmmod nvme_keyring 00:13:58.960 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:58.960 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:13:58.960 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:13:58.960 11:51:01 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@515 -- # '[' -n 1863719 ']' 00:13:59.221 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # killprocess 1863719 00:13:59.221 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 1863719 ']' 00:13:59.221 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 1863719 00:13:59.221 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:13:59.221 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:59.221 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1863719 00:13:59.221 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:59.221 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:59.221 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1863719' 00:13:59.221 killing process with pid 1863719 00:13:59.221 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 1863719 00:13:59.221 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 1863719 00:13:59.221 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:59.221 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:59.221 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:59.221 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:13:59.221 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-save 00:13:59.221 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:59.221 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-restore 00:13:59.221 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:59.221 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:59.221 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:59.221 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:59.221 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:01.765 11:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:01.765 00:14:01.765 real 0m13.672s 00:14:01.765 user 0m7.161s 00:14:01.765 sys 0m7.338s 00:14:01.765 11:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:01.765 11:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:01.765 ************************************ 00:14:01.765 END TEST nvmf_fused_ordering 00:14:01.765 
************************************ 00:14:01.765 11:51:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:01.765 11:51:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:01.765 11:51:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:01.765 11:51:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:01.765 ************************************ 00:14:01.765 START TEST nvmf_ns_masking 00:14:01.765 ************************************ 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:01.765 * Looking for test storage... 00:14:01.765 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:01.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:01.765 --rc genhtml_branch_coverage=1 00:14:01.765 --rc genhtml_function_coverage=1 00:14:01.765 --rc genhtml_legend=1 00:14:01.765 --rc geninfo_all_blocks=1 00:14:01.765 --rc geninfo_unexecuted_blocks=1 00:14:01.765 00:14:01.765 ' 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:01.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:01.765 --rc genhtml_branch_coverage=1 00:14:01.765 --rc genhtml_function_coverage=1 00:14:01.765 --rc genhtml_legend=1 00:14:01.765 --rc geninfo_all_blocks=1 00:14:01.765 --rc geninfo_unexecuted_blocks=1 00:14:01.765 00:14:01.765 ' 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:01.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:01.765 --rc genhtml_branch_coverage=1 00:14:01.765 --rc genhtml_function_coverage=1 00:14:01.765 --rc genhtml_legend=1 00:14:01.765 --rc geninfo_all_blocks=1 00:14:01.765 --rc geninfo_unexecuted_blocks=1 00:14:01.765 00:14:01.765 ' 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:01.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:01.765 --rc genhtml_branch_coverage=1 00:14:01.765 --rc genhtml_function_coverage=1 00:14:01.765 --rc genhtml_legend=1 00:14:01.765 --rc geninfo_all_blocks=1 00:14:01.765 --rc geninfo_unexecuted_blocks=1 00:14:01.765 00:14:01.765 ' 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.765 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:01.766 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.766 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:14:01.766 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:01.766 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:01.766 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:01.766 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:01.766 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:01.766 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:01.766 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:01.766 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:01.766 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:01.766 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:01.766 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:01.766 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:01.766 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:01.766 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:01.766 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=b5177bd4-da27-4e8f-93c2-08dfa476293d 00:14:01.766 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:01.766 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=e46b27c2-ba0a-4a30-8961-5bbda649bff2 00:14:01.766 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:01.766 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:01.766 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:01.766 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:01.766 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=0f0d8274-38c2-497a-8745-65c5ea40cb99 00:14:01.766 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:01.766 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:01.766 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:01.766 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:01.766 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:01.766 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:01.766 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:01.766 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:01.766 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:01.766 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:14:01.766 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:14:01.766 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:14:01.766 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:10.022 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:10.022 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:14:10.022 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:10.022 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:10.022 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:10.022 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:10.022 11:51:11 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:10.022 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:14:10.022 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:10.022 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:14:10.022 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:14:10.022 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:14:10.022 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:14:10.022 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:14:10.022 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:14:10.022 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:10.022 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:10.022 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:10.022 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:10.022 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:10.022 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:10.022 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:10.022 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:10.022 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:10.022 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:10.022 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:10.022 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:10.023 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:10.023 11:51:11 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:10.023 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:10.023 Found net devices under 0000:31:00.0: cvl_0_0 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 
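The loop traced here is nvmf/common.sh working out which NICs the TCP tests may use: it filters the PCI bus for known Intel E810/X722 and Mellanox device IDs and then reads the kernel interface names out of sysfs (on this node 0000:31:00.0 resolves to cvl_0_0, with the second port following just below). A minimal stand-alone sketch of that lookup, assuming the same E810 vendor/device pair (0x8086:0x159b) and the standard sysfs layout; it is illustrative only, not the harness's gather_supported_nvmf_pci_devs implementation:

    # List E810 ports and the net interfaces behind them (illustrative only).
    for pci in /sys/bus/pci/devices/*; do
        [[ $(cat "$pci/vendor") == 0x8086 && $(cat "$pci/device") == 0x159b ]] || continue
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "Found net device under ${pci##*/}: ${net##*/}"
        done
    done

The trace continues below with the same checks applied to the second port, 0000:31:00.1.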
00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:10.023 Found net devices under 0000:31:00.1: cvl_0_1 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # is_hw=yes 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:10.023 11:51:11 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:10.023 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:10.023 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.630 ms 00:14:10.023 00:14:10.023 --- 10.0.0.2 ping statistics --- 00:14:10.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.023 rtt min/avg/max/mdev = 0.630/0.630/0.630/0.000 ms 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:10.023 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:10.023 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:14:10.023 00:14:10.023 --- 10.0.0.1 ping statistics --- 00:14:10.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.023 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # return 0 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:10.023 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:10.023 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:10.023 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:10.023 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:10.023 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:10.023 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # nvmfpid=1869346 00:14:10.023 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # waitforlisten 1869346 00:14:10.023 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:10.023 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 1869346 ']' 00:14:10.023 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.023 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:10.023 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.023 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:10.023 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:10.023 [2024-10-11 11:51:12.090449] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:14:10.023 [2024-10-11 11:51:12.090516] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:10.023 [2024-10-11 11:51:12.180100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.023 [2024-10-11 11:51:12.231529] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:10.023 [2024-10-11 11:51:12.231582] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:10.023 [2024-10-11 11:51:12.231590] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:10.023 [2024-10-11 11:51:12.231598] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:10.023 [2024-10-11 11:51:12.231604] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:10.023 [2024-10-11 11:51:12.232447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.284 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:10.284 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:14:10.284 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:10.284 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:10.284 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:10.284 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:10.284 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:10.545 [2024-10-11 11:51:13.130617] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:10.545 11:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:10.545 11:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:10.545 11:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:10.805 Malloc1 00:14:10.805 11:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:11.065 Malloc2 00:14:11.065 11:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:11.065 11:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:11.325 11:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:11.586 [2024-10-11 11:51:14.100262] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:11.586 11:51:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:11.586 11:51:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0f0d8274-38c2-497a-8745-65c5ea40cb99 -a 10.0.0.2 -s 4420 -i 4 00:14:11.848 11:51:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:11.848 11:51:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:11.848 11:51:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:11.848 11:51:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:11.848 
11:51:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:13.760 11:51:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:13.760 11:51:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:13.760 11:51:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:13.760 11:51:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:13.760 11:51:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:13.760 11:51:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:13.760 11:51:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:13.760 11:51:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:13.760 11:51:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:13.760 11:51:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:13.760 11:51:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:13.760 11:51:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:13.760 11:51:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:13.760 [ 0]:0x1 00:14:14.020 11:51:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:14.020 11:51:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:14.020 11:51:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f8cfe4aa710246b09730eb152aa34e8e 00:14:14.020 11:51:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f8cfe4aa710246b09730eb152aa34e8e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:14.020 11:51:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:14.020 11:51:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:14.020 11:51:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:14.020 11:51:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:14.020 [ 0]:0x1 00:14:14.020 11:51:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:14.020 11:51:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:14.280 11:51:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f8cfe4aa710246b09730eb152aa34e8e 00:14:14.280 11:51:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f8cfe4aa710246b09730eb152aa34e8e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:14.280 11:51:16 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:14.280 11:51:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:14.280 11:51:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:14.280 [ 1]:0x2 00:14:14.280 11:51:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:14.280 11:51:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:14.280 11:51:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=285a139178594745adc9acf6718a4093 00:14:14.280 11:51:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 285a139178594745adc9acf6718a4093 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:14.280 11:51:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:14.280 11:51:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:14.280 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.280 11:51:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:14.540 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:14.540 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:14.540 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0f0d8274-38c2-497a-8745-65c5ea40cb99 -a 10.0.0.2 -s 4420 -i 4 00:14:14.800 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:14.800 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:14.800 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:14.800 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:14:14.800 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:14:14.800 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:16.712 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:16.712 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:16.712 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:16.712 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:16.712 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:16.712 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # 
return 0 00:14:16.712 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:16.712 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:16.972 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:16.972 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:16.972 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:16.972 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:16.972 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:16.972 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:16.972 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:16.972 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:16.972 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:16.972 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:16.972 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:16.972 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:16.972 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:16.972 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:16.972 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:16.972 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:16.972 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:16.972 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:16.972 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:16.972 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:16.972 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:14:16.972 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:16.972 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:16.972 [ 0]:0x2 00:14:16.972 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:16.972 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:16.972 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=285a139178594745adc9acf6718a4093 00:14:16.972 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 285a139178594745adc9acf6718a4093 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:16.972 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:17.233 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:17.233 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:17.233 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:17.233 [ 0]:0x1 00:14:17.233 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:17.233 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:17.233 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f8cfe4aa710246b09730eb152aa34e8e 00:14:17.233 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f8cfe4aa710246b09730eb152aa34e8e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:17.233 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:17.233 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:17.233 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:17.233 [ 1]:0x2 00:14:17.233 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:17.233 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:17.233 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=285a139178594745adc9acf6718a4093 00:14:17.233 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 285a139178594745adc9acf6718a4093 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:17.233 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:17.493 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:17.493 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:17.493 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:17.493 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:17.493 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:17.494 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:17.494 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:17.494 11:51:20 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:17.494 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:17.494 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:17.494 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:17.494 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:17.494 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:17.494 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:17.494 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:17.494 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:17.494 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:17.494 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:17.494 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:17.494 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:17.494 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:17.494 [ 0]:0x2 00:14:17.494 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:17.494 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:17.494 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=285a139178594745adc9acf6718a4093 00:14:17.494 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 285a139178594745adc9acf6718a4093 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:17.494 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:17.494 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:17.494 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:17.494 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:17.754 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:17.754 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0f0d8274-38c2-497a-8745-65c5ea40cb99 -a 10.0.0.2 -s 4420 -i 4 00:14:18.015 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:18.015 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:18.015 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:18.015 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:18.015 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:18.015 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:19.926 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:19.926 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:19.926 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:19.926 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:19.926 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:19.926 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:19.926 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:19.926 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:20.187 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:20.187 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:20.187 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:20.187 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:20.187 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:20.187 [ 0]:0x1 00:14:20.187 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:20.187 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:20.187 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f8cfe4aa710246b09730eb152aa34e8e 00:14:20.187 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f8cfe4aa710246b09730eb152aa34e8e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:20.187 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:20.187 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:20.187 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:20.187 [ 1]:0x2 00:14:20.187 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:20.187 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:20.448 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=285a139178594745adc9acf6718a4093 00:14:20.448 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 285a139178594745adc9acf6718a4093 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:20.448 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:20.448 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:20.448 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:20.448 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:20.448 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:20.448 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:20.448 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:20.448 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:20.448 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:20.448 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:20.448 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:20.448 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:20.448 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:20.708 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:20.708 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:20.708 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:20.708 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:20.708 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:20.708 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:20.708 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:14:20.708 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:20.708 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:20.708 [ 0]:0x2 00:14:20.708 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:20.708 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:20.708 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=285a139178594745adc9acf6718a4093 00:14:20.708 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 285a139178594745adc9acf6718a4093 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:20.708 11:51:23 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:20.708 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:20.709 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:20.709 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:20.709 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:20.709 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:20.709 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:20.709 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:20.709 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:20.709 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:20.709 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:20.709 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:20.709 [2024-10-11 11:51:23.373790] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:20.709 request: 00:14:20.709 { 00:14:20.709 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:20.709 "nsid": 2, 00:14:20.709 "host": "nqn.2016-06.io.spdk:host1", 00:14:20.709 "method": "nvmf_ns_remove_host", 00:14:20.709 "req_id": 1 00:14:20.709 } 00:14:20.709 Got JSON-RPC error response 00:14:20.709 response: 00:14:20.709 { 00:14:20.709 "code": -32602, 00:14:20.709 "message": "Invalid parameters" 00:14:20.709 } 00:14:20.709 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:20.709 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:20.709 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:20.709 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:20.709 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:20.709 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:20.709 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:20.709 11:51:23 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:20.709 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:20.709 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:20.709 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:20.709 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:20.970 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:20.970 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:20.970 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:20.970 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:20.970 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:20.970 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:20.970 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:20.970 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:20.970 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:20.970 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:20.970 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:20.970 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:20.970 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:20.970 [ 0]:0x2 00:14:20.970 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:20.970 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:20.970 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=285a139178594745adc9acf6718a4093 00:14:20.970 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 285a139178594745adc9acf6718a4093 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:20.970 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:20.970 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:20.970 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:20.970 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1871843 00:14:20.970 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:20.970 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:20.970 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1871843 /var/tmp/host.sock 00:14:20.970 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 1871843 ']' 00:14:20.970 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:14:20.970 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:20.970 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:20.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:20.970 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:20.970 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:20.970 [2024-10-11 11:51:23.637200] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:14:20.970 [2024-10-11 11:51:23.637251] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1871843 ] 00:14:21.230 [2024-10-11 11:51:23.718172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.230 [2024-10-11 11:51:23.754014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:21.801 11:51:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:21.801 11:51:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:14:21.801 11:51:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:22.061 11:51:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:22.322 11:51:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid b5177bd4-da27-4e8f-93c2-08dfa476293d 00:14:22.322 11:51:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:14:22.322 11:51:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g B5177BD4DA274E8F93C208DFA476293D -i 00:14:22.322 11:51:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid e46b27c2-ba0a-4a30-8961-5bbda649bff2 00:14:22.322 11:51:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:14:22.322 11:51:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g E46B27C2BA0A4A3089615BBDA649BFF2 -i 00:14:22.582 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:22.841 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:23.101 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:23.101 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:23.101 nvme0n1 00:14:23.101 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:23.101 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:23.670 nvme1n2 00:14:23.670 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:23.670 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:23.670 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:23.670 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:23.670 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:23.930 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:23.930 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:23.930 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:23.930 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:23.930 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ b5177bd4-da27-4e8f-93c2-08dfa476293d == \b\5\1\7\7\b\d\4\-\d\a\2\7\-\4\e\8\f\-\9\3\c\2\-\0\8\d\f\a\4\7\6\2\9\3\d ]] 00:14:23.930 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:23.930 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:23.930 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:24.191 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
e46b27c2-ba0a-4a30-8961-5bbda649bff2 == \e\4\6\b\2\7\c\2\-\b\a\0\a\-\4\a\3\0\-\8\9\6\1\-\5\b\b\d\a\6\4\9\b\f\f\2 ]] 00:14:24.191 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1871843 00:14:24.191 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 1871843 ']' 00:14:24.191 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 1871843 00:14:24.191 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:14:24.191 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:24.191 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1871843 00:14:24.191 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:24.191 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:24.191 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1871843' 00:14:24.191 killing process with pid 1871843 00:14:24.191 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 1871843 00:14:24.191 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 1871843 00:14:24.451 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:24.711 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:14:24.711 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:14:24.711 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:24.711 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:14:24.711 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:24.711 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:14:24.711 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:24.711 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:24.711 rmmod nvme_tcp 00:14:24.711 rmmod nvme_fabrics 00:14:24.711 rmmod nvme_keyring 00:14:24.711 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:24.711 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:14:24.711 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:14:24.711 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@515 -- # '[' -n 1869346 ']' 00:14:24.711 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # killprocess 1869346 00:14:24.711 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 1869346 ']' 00:14:24.711 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 1869346 00:14:24.711 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@955 -- # uname 00:14:24.711 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:24.711 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1869346 00:14:24.711 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:24.711 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:24.711 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1869346' 00:14:24.711 killing process with pid 1869346 00:14:24.711 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 1869346 00:14:24.711 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 1869346 00:14:24.971 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:24.971 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:24.971 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:24.971 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:14:24.971 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-save 00:14:24.971 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:24.971 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-restore 00:14:24.971 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:24.971 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:24.971 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:24.971 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:24.971 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:26.883 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:26.883 00:14:26.883 real 0m25.487s 00:14:26.883 user 0m25.661s 00:14:26.883 sys 0m8.095s 00:14:26.883 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:26.883 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:26.883 ************************************ 00:14:26.883 END TEST nvmf_ns_masking 00:14:26.883 ************************************ 00:14:26.883 11:51:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:26.883 11:51:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:26.883 11:51:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:26.883 11:51:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:26.883 11:51:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 
00:14:27.144 ************************************ 00:14:27.144 START TEST nvmf_nvme_cli 00:14:27.144 ************************************ 00:14:27.144 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:27.144 * Looking for test storage... 00:14:27.144 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:27.144 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:27.144 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lcov --version 00:14:27.144 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:27.144 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:27.144 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:27.144 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:27.144 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:27.144 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:14:27.144 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:14:27.144 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:14:27.144 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:14:27.144 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:14:27.144 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:14:27.144 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:14:27.144 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:27.144 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:14:27.144 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:27.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:27.145 --rc genhtml_branch_coverage=1 00:14:27.145 --rc genhtml_function_coverage=1 00:14:27.145 --rc genhtml_legend=1 00:14:27.145 --rc geninfo_all_blocks=1 00:14:27.145 --rc geninfo_unexecuted_blocks=1 00:14:27.145 00:14:27.145 ' 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:27.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:27.145 --rc genhtml_branch_coverage=1 00:14:27.145 --rc genhtml_function_coverage=1 00:14:27.145 --rc genhtml_legend=1 00:14:27.145 --rc geninfo_all_blocks=1 00:14:27.145 --rc geninfo_unexecuted_blocks=1 00:14:27.145 00:14:27.145 ' 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:27.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:27.145 --rc genhtml_branch_coverage=1 00:14:27.145 --rc genhtml_function_coverage=1 00:14:27.145 --rc genhtml_legend=1 00:14:27.145 --rc geninfo_all_blocks=1 00:14:27.145 --rc geninfo_unexecuted_blocks=1 00:14:27.145 00:14:27.145 ' 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:27.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:27.145 --rc genhtml_branch_coverage=1 00:14:27.145 --rc genhtml_function_coverage=1 00:14:27.145 --rc genhtml_legend=1 00:14:27.145 --rc geninfo_all_blocks=1 00:14:27.145 --rc geninfo_unexecuted_blocks=1 00:14:27.145 00:14:27.145 ' 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:27.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:27.145 11:51:29 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:27.145 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:27.406 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:14:27.406 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:14:27.406 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:14:27.406 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:35.551 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:35.551 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:35.551 
11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:35.551 Found net devices under 0000:31:00.0: cvl_0_0 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:35.551 Found net devices under 0000:31:00.1: cvl_0_1 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # is_hw=yes 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:35.551 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:35.551 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:14:35.551 00:14:35.551 --- 10.0.0.2 ping statistics --- 00:14:35.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.551 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:35.551 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:35.551 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:14:35.551 00:14:35.551 --- 10.0.0.1 ping statistics --- 00:14:35.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.551 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # return 0 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:35.551 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:35.552 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:35.552 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:35.552 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:35.552 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:35.552 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:35.552 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:35.552 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:35.552 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:35.552 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # nvmfpid=1876891 00:14:35.552 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # waitforlisten 1876891 00:14:35.552 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:35.552 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 1876891 ']' 00:14:35.552 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:35.552 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:35.552 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:35.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:35.552 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:35.552 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:35.552 [2024-10-11 11:51:37.619740] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
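What just happened above is the interesting part of nvmf_tcp_init on a phy node: the two ice ports at 0000:31:00.0/1 show up as cvl_0_0 and cvl_0_1, cvl_0_0 is moved into a fresh cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), TCP port 4420 is opened in iptables, and both directions are ping-tested before nvmf_tgt is launched inside the namespace. A condensed sketch of the equivalent commands, run as root and assuming the same interface names and addresses as this run:

    # Target NIC moves into its own namespace; initiator NIC stays in the root ns.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # Address both ends and bring the links (and the namespace loopback) up.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Let NVMe/TCP traffic through on 4420, tagged so cleanup can strip the rule later.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    # Sanity-check both directions, then start the target inside the namespace.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &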
00:14:35.552 [2024-10-11 11:51:37.619809] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:35.552 [2024-10-11 11:51:37.711670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:35.552 [2024-10-11 11:51:37.765715] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:35.552 [2024-10-11 11:51:37.765765] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:35.552 [2024-10-11 11:51:37.765773] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:35.552 [2024-10-11 11:51:37.765780] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:35.552 [2024-10-11 11:51:37.765787] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:35.552 [2024-10-11 11:51:37.768026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:35.552 [2024-10-11 11:51:37.768186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:35.552 [2024-10-11 11:51:37.768238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:35.552 [2024-10-11 11:51:37.768239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.813 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:35.813 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:14:35.813 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:35.813 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:35.813 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:35.813 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:35.814 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:35.814 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.814 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:35.814 [2024-10-11 11:51:38.501270] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:35.814 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.814 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:35.814 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.814 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:36.075 Malloc0 00:14:36.075 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.075 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:36.075 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
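With the target's reactors up, nvme_cli.sh configures everything over JSON-RPC: the records around this point (and just below) create the TCP transport, two 64 MB / 512-byte-block malloc bdevs, a subsystem carrying both namespaces, and data plus discovery listeners on 10.0.0.2:4420. A sketch of the same sequence as plain scripts/rpc.py calls, assuming the default /var/tmp/spdk.sock socket (the test itself goes through its rpc_cmd wrapper):

    rpc=./scripts/rpc.py                       # talks to the nvmf_tgt started above

    $rpc nvmf_create_transport -t tcp -o -u 8192

    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc bdev_malloc_create 64 512 -b Malloc1

    # Same subsystem arguments as the rpc_cmd calls in the trace.
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1

    # Data listener plus a discovery listener on the same address and port.
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The initiator side then runs nvme discover and nvme connect against 10.0.0.2:4420 with the generated hostnqn/hostid, which is what produces the two discovery log entries and the /dev/nvme0n1 and /dev/nvme0n2 block devices further down in the trace.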
00:14:36.075 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:36.075 Malloc1 00:14:36.075 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.075 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:36.075 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.075 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:36.075 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.075 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:36.075 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.075 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:36.075 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.075 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:36.075 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.075 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:36.075 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.075 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:36.075 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.075 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:36.075 [2024-10-11 11:51:38.612949] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:36.075 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.075 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:36.075 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.075 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:36.075 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.075 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:14:36.075 00:14:36.075 Discovery Log Number of Records 2, Generation counter 2 00:14:36.075 =====Discovery Log Entry 0====== 00:14:36.075 trtype: tcp 00:14:36.075 adrfam: ipv4 00:14:36.075 subtype: current discovery subsystem 00:14:36.075 treq: not required 00:14:36.075 portid: 0 00:14:36.075 trsvcid: 4420 00:14:36.075 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:14:36.075 traddr: 10.0.0.2 00:14:36.075 eflags: explicit discovery connections, duplicate discovery information 00:14:36.075 sectype: none 00:14:36.075 =====Discovery Log Entry 1====== 00:14:36.075 trtype: tcp 00:14:36.075 adrfam: ipv4 00:14:36.075 subtype: nvme subsystem 00:14:36.075 treq: not required 00:14:36.075 portid: 0 00:14:36.075 trsvcid: 4420 00:14:36.075 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:36.075 traddr: 10.0.0.2 00:14:36.075 eflags: none 00:14:36.075 sectype: none 00:14:36.075 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:36.075 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:36.076 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:14:36.337 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:36.337 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:14:36.337 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:14:36.337 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:36.337 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:14:36.337 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:36.337 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:36.337 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:37.722 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:37.722 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:14:37.722 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:37.722 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:37.722 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:37.722 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:14:39.634 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:39.634 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:39.634 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:39.634 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:39.634 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:39.634 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:14:39.634 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:39.634 11:51:42 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:14:39.634 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:39.634 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:14:39.634 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:14:39.634 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:39.635 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:14:39.635 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:39.635 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:39.635 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:14:39.635 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:39.635 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:39.635 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:14:39.635 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:39.635 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:14:39.635 /dev/nvme0n2 ]] 00:14:39.635 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:39.635 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:39.635 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:14:39.635 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:39.635 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:14:39.896 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:14:39.896 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:39.896 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:14:39.896 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:39.896 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:39.896 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:14:39.896 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:39.896 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:39.896 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:14:39.896 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:39.896 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:39.896 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:39.896 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.896 11:51:42 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:39.896 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:14:39.896 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:39.896 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:39.896 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:39.896 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:39.896 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:14:39.896 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:39.896 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:39.896 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.896 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:39.896 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.896 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:39.896 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:39.896 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:39.896 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:14:39.896 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:39.896 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:14:39.896 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:39.896 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:39.896 rmmod nvme_tcp 00:14:39.896 rmmod nvme_fabrics 00:14:39.896 rmmod nvme_keyring 00:14:39.896 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:39.896 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:14:39.896 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:14:39.896 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@515 -- # '[' -n 1876891 ']' 00:14:39.896 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # killprocess 1876891 00:14:39.896 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 1876891 ']' 00:14:39.896 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 1876891 00:14:39.896 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:14:39.896 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:39.896 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
1876891 00:14:40.157 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:40.157 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:40.157 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1876891' 00:14:40.157 killing process with pid 1876891 00:14:40.157 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 1876891 00:14:40.157 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 1876891 00:14:40.157 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:40.157 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:40.157 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:40.157 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:14:40.157 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-save 00:14:40.157 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:40.157 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-restore 00:14:40.157 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:40.157 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:40.157 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:40.157 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:40.157 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:42.704 11:51:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:42.704 00:14:42.704 real 0m15.244s 00:14:42.704 user 0m22.218s 00:14:42.704 sys 0m6.583s 00:14:42.704 11:51:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:42.704 11:51:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:42.704 ************************************ 00:14:42.704 END TEST nvmf_nvme_cli 00:14:42.704 ************************************ 00:14:42.704 11:51:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:42.704 11:51:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:42.704 11:51:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:42.704 11:51:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:42.704 11:51:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:42.704 ************************************ 00:14:42.704 START TEST nvmf_vfio_user 00:14:42.704 ************************************ 00:14:42.704 11:51:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:14:42.704 * Looking for test storage... 00:14:42.704 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:42.704 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:42.704 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lcov --version 00:14:42.704 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:42.704 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:42.704 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:42.704 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:42.704 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:42.704 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:14:42.704 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:14:42.704 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:14:42.704 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:14:42.704 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:14:42.704 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:14:42.704 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:14:42.704 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:42.704 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:14:42.704 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:14:42.704 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:42.704 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:42.704 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:14:42.704 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:14:42.704 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:42.704 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:14:42.704 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:14:42.704 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:14:42.704 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:14:42.704 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:42.704 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:14:42.704 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:14:42.704 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:42.704 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:42.704 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:14:42.704 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:42.704 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:42.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.704 --rc genhtml_branch_coverage=1 00:14:42.704 --rc genhtml_function_coverage=1 00:14:42.704 --rc genhtml_legend=1 00:14:42.704 --rc geninfo_all_blocks=1 00:14:42.704 --rc geninfo_unexecuted_blocks=1 00:14:42.704 00:14:42.704 ' 00:14:42.704 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:42.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.704 --rc genhtml_branch_coverage=1 00:14:42.704 --rc genhtml_function_coverage=1 00:14:42.704 --rc genhtml_legend=1 00:14:42.704 --rc geninfo_all_blocks=1 00:14:42.704 --rc geninfo_unexecuted_blocks=1 00:14:42.704 00:14:42.704 ' 00:14:42.704 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:42.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.705 --rc genhtml_branch_coverage=1 00:14:42.705 --rc genhtml_function_coverage=1 00:14:42.705 --rc genhtml_legend=1 00:14:42.705 --rc geninfo_all_blocks=1 00:14:42.705 --rc geninfo_unexecuted_blocks=1 00:14:42.705 00:14:42.705 ' 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:42.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.705 --rc genhtml_branch_coverage=1 00:14:42.705 --rc genhtml_function_coverage=1 00:14:42.705 --rc genhtml_legend=1 00:14:42.705 --rc geninfo_all_blocks=1 00:14:42.705 --rc geninfo_unexecuted_blocks=1 00:14:42.705 00:14:42.705 ' 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:42.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
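The "line 33: [: : integer expression expected" message that reappears here (it also shows up during the nvme_cli run) is not a test failure: it is what the [ builtin prints when build_nvmf_app_args evaluates '[' '' -eq 1 ']' against an empty variable, and the script simply takes the else path. A two-line illustration of the bash behaviour, using a placeholder variable name rather than the real one from nvmf/common.sh:

    flag=""                                  # unset/empty flag, as in the trace
    [ "$flag" -eq 1 ] && echo enabled        # prints "[: : integer expression expected", status 2
    [ "${flag:-0}" -eq 1 ] && echo enabled   # defaulting to 0 keeps the check quiet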
00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1878434 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1878434' 00:14:42.705 Process pid: 1878434 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1878434 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 1878434 ']' 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:42.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:42.705 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:42.705 [2024-10-11 11:51:45.227175] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:14:42.705 [2024-10-11 11:51:45.227234] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:42.705 [2024-10-11 11:51:45.311018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:42.705 [2024-10-11 11:51:45.346138] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:42.705 [2024-10-11 11:51:45.346168] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
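The target itself is launched once for this test (the nvmf_tgt invocation traced above with -i 0 -e 0xFFFF -m '[0,1,2,3]'), and the suite then blocks on waitforlisten until the app answers on /var/tmp/spdk.sock. A minimal stand-in for that start-and-wait step, assuming the commands run from the SPDK checkout root; the polling loop is a simplification, not the actual waitforlisten helper from autotest_common.sh:

    # Start the target on cores 0-3 with all tracepoint groups enabled, as in the trace.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
    nvmfpid=$!
    trap 'kill "$nvmfpid"' EXIT          # the suite uses its killprocess helper here
    # Poll the default RPC socket until the app is up (simplified waitforlisten).
    for _ in $(seq 1 100); do
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done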
00:14:42.705 [2024-10-11 11:51:45.346173] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:42.705 [2024-10-11 11:51:45.346178] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:42.705 [2024-10-11 11:51:45.346182] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:42.705 [2024-10-11 11:51:45.347559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:42.705 [2024-10-11 11:51:45.347719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:42.705 [2024-10-11 11:51:45.347867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.705 [2024-10-11 11:51:45.347869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:43.648 11:51:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:43.648 11:51:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:14:43.648 11:51:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:44.599 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:44.599 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:44.599 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:44.599 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:44.599 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:44.599 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:44.859 Malloc1 00:14:44.859 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:45.121 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:45.121 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:45.382 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:45.382 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:45.382 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:45.643 Malloc2 00:14:45.643 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
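setup_nvmf_vfio_user, traced above and continuing below for the second device, boils down to one transport plus a per-device loop: a socket directory, a 64 MiB malloc bdev, a subsystem, a namespace, and a VFIOUSER listener rooted at that directory. A condensed sketch of that sequence, assuming rpc.py reaches the target on the default RPC socket:

    rpc=./scripts/rpc.py
    NUM_DEVICES=2
    $rpc nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    for i in $(seq 1 $NUM_DEVICES); do
        dir=/var/run/vfio-user/domain/vfio-user$i/$i
        mkdir -p "$dir"
        $rpc bdev_malloc_create 64 512 -b Malloc$i         # 64 MiB bdev, 512-byte blocks
        $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER -a "$dir" -s 0
    done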
00:14:45.903 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:45.903 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:46.166 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:46.166 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:46.166 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:46.166 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:46.166 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:46.166 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:46.166 [2024-10-11 11:51:48.737993] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:14:46.166 [2024-10-11 11:51:48.738036] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1879126 ] 00:14:46.166 [2024-10-11 11:51:48.766152] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:46.166 [2024-10-11 11:51:48.774347] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:46.166 [2024-10-11 11:51:48.774362] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f725bfdf000 00:14:46.166 [2024-10-11 11:51:48.775346] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:46.166 [2024-10-11 11:51:48.776348] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:46.166 [2024-10-11 11:51:48.777351] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:46.166 [2024-10-11 11:51:48.778356] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:46.166 [2024-10-11 11:51:48.779357] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:46.166 [2024-10-11 11:51:48.780364] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:46.166 [2024-10-11 11:51:48.781370] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:14:46.166 [2024-10-11 11:51:48.782377] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:46.166 [2024-10-11 11:51:48.783381] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:46.166 [2024-10-11 11:51:48.783390] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f725bfd4000 00:14:46.166 [2024-10-11 11:51:48.784306] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:46.166 [2024-10-11 11:51:48.793741] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:46.166 [2024-10-11 11:51:48.793761] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:14:46.166 [2024-10-11 11:51:48.798463] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:46.166 [2024-10-11 11:51:48.798497] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:46.166 [2024-10-11 11:51:48.798559] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:14:46.166 [2024-10-11 11:51:48.798574] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:14:46.166 [2024-10-11 11:51:48.798578] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:14:46.166 [2024-10-11 11:51:48.801067] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:46.166 [2024-10-11 11:51:48.801074] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:14:46.166 [2024-10-11 11:51:48.801079] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:14:46.166 [2024-10-11 11:51:48.801469] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:46.166 [2024-10-11 11:51:48.801475] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:14:46.166 [2024-10-11 11:51:48.801480] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:14:46.166 [2024-10-11 11:51:48.802469] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:46.166 [2024-10-11 11:51:48.802476] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:46.166 [2024-10-11 11:51:48.803476] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:46.166 [2024-10-11 
11:51:48.803482] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:14:46.166 [2024-10-11 11:51:48.803488] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:14:46.166 [2024-10-11 11:51:48.803493] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:46.166 [2024-10-11 11:51:48.803597] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:14:46.166 [2024-10-11 11:51:48.803600] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:46.166 [2024-10-11 11:51:48.803605] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:46.166 [2024-10-11 11:51:48.804484] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:46.166 [2024-10-11 11:51:48.805489] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:46.166 [2024-10-11 11:51:48.806494] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:46.166 [2024-10-11 11:51:48.807493] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:46.166 [2024-10-11 11:51:48.807544] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:46.166 [2024-10-11 11:51:48.808505] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:46.166 [2024-10-11 11:51:48.808511] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:46.166 [2024-10-11 11:51:48.808515] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:14:46.166 [2024-10-11 11:51:48.808530] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:14:46.166 [2024-10-11 11:51:48.808539] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:14:46.167 [2024-10-11 11:51:48.808599] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:46.167 [2024-10-11 11:51:48.808604] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:46.167 [2024-10-11 11:51:48.808606] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:46.167 [2024-10-11 11:51:48.808617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:46.167 [2024-10-11 11:51:48.808660] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:46.167 [2024-10-11 11:51:48.808667] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:14:46.167 [2024-10-11 11:51:48.808670] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:14:46.167 [2024-10-11 11:51:48.808673] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:14:46.167 [2024-10-11 11:51:48.808677] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:46.167 [2024-10-11 11:51:48.808680] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:14:46.167 [2024-10-11 11:51:48.808684] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:14:46.167 [2024-10-11 11:51:48.808689] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:14:46.167 [2024-10-11 11:51:48.808695] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:14:46.167 [2024-10-11 11:51:48.808703] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:46.167 [2024-10-11 11:51:48.808712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:46.167 [2024-10-11 11:51:48.808721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:46.167 [2024-10-11 11:51:48.808727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:46.167 [2024-10-11 11:51:48.808733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:46.167 [2024-10-11 11:51:48.808739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:46.167 [2024-10-11 11:51:48.808742] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:46.167 [2024-10-11 11:51:48.808749] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:46.167 [2024-10-11 11:51:48.808756] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:46.167 [2024-10-11 11:51:48.808767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:46.167 [2024-10-11 11:51:48.808771] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:14:46.167 [2024-10-11 11:51:48.808775] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:46.167 [2024-10-11 11:51:48.808779] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:14:46.167 [2024-10-11 11:51:48.808786] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:46.167 [2024-10-11 11:51:48.808793] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:46.167 [2024-10-11 11:51:48.808805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:46.167 [2024-10-11 11:51:48.808849] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:14:46.167 [2024-10-11 11:51:48.808854] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:46.167 [2024-10-11 11:51:48.808860] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:46.167 [2024-10-11 11:51:48.808863] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:46.167 [2024-10-11 11:51:48.808866] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:46.167 [2024-10-11 11:51:48.808870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:46.167 [2024-10-11 11:51:48.808880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:46.167 [2024-10-11 11:51:48.808887] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:14:46.167 [2024-10-11 11:51:48.808893] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:14:46.167 [2024-10-11 11:51:48.808899] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:14:46.167 [2024-10-11 11:51:48.808904] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:46.167 [2024-10-11 11:51:48.808907] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:46.167 [2024-10-11 11:51:48.808909] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:46.167 [2024-10-11 11:51:48.808914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:46.167 [2024-10-11 11:51:48.808940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:46.167 [2024-10-11 11:51:48.808950] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:46.167 [2024-10-11 11:51:48.808955] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:46.167 [2024-10-11 11:51:48.808960] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:46.167 [2024-10-11 11:51:48.808963] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:46.167 [2024-10-11 11:51:48.808966] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:46.167 [2024-10-11 11:51:48.808970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:46.167 [2024-10-11 11:51:48.808979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:46.167 [2024-10-11 11:51:48.808986] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:46.167 [2024-10-11 11:51:48.808990] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:14:46.167 [2024-10-11 11:51:48.808996] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:14:46.167 [2024-10-11 11:51:48.809001] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:14:46.167 [2024-10-11 11:51:48.809004] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:46.167 [2024-10-11 11:51:48.809009] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:14:46.167 [2024-10-11 11:51:48.809012] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:14:46.167 [2024-10-11 11:51:48.809015] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:14:46.167 [2024-10-11 11:51:48.809019] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:14:46.167 [2024-10-11 11:51:48.809034] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:46.167 [2024-10-11 11:51:48.809042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:46.167 [2024-10-11 11:51:48.809051] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:46.167 [2024-10-11 11:51:48.809059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:46.167 [2024-10-11 11:51:48.809070] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:46.167 [2024-10-11 11:51:48.809079] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:46.167 [2024-10-11 11:51:48.809087] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:46.167 [2024-10-11 11:51:48.809097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:46.167 [2024-10-11 11:51:48.809107] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:46.167 [2024-10-11 11:51:48.809110] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:46.167 [2024-10-11 11:51:48.809113] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:46.167 [2024-10-11 11:51:48.809115] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:46.167 [2024-10-11 11:51:48.809118] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:46.167 [2024-10-11 11:51:48.809122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:46.167 [2024-10-11 11:51:48.809128] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:46.167 [2024-10-11 11:51:48.809131] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:46.167 [2024-10-11 11:51:48.809133] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:46.167 [2024-10-11 11:51:48.809138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:46.167 [2024-10-11 11:51:48.809143] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:46.167 [2024-10-11 11:51:48.809146] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:46.167 [2024-10-11 11:51:48.809148] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:46.167 [2024-10-11 11:51:48.809153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:46.167 [2024-10-11 11:51:48.809158] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:46.167 [2024-10-11 11:51:48.809161] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:46.167 [2024-10-11 11:51:48.809164] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:46.167 [2024-10-11 11:51:48.809168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:46.167 [2024-10-11 11:51:48.809173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:46.167 [2024-10-11 11:51:48.809181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:46.168 [2024-10-11 11:51:48.809189] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:46.168 [2024-10-11 11:51:48.809194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:46.168 ===================================================== 00:14:46.168 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:46.168 ===================================================== 00:14:46.168 Controller Capabilities/Features 00:14:46.168 ================================ 00:14:46.168 Vendor ID: 4e58 00:14:46.168 Subsystem Vendor ID: 4e58 00:14:46.168 Serial Number: SPDK1 00:14:46.168 Model Number: SPDK bdev Controller 00:14:46.168 Firmware Version: 25.01 00:14:46.168 Recommended Arb Burst: 6 00:14:46.168 IEEE OUI Identifier: 8d 6b 50 00:14:46.168 Multi-path I/O 00:14:46.168 May have multiple subsystem ports: Yes 00:14:46.168 May have multiple controllers: Yes 00:14:46.168 Associated with SR-IOV VF: No 00:14:46.168 Max Data Transfer Size: 131072 00:14:46.168 Max Number of Namespaces: 32 00:14:46.168 Max Number of I/O Queues: 127 00:14:46.168 NVMe Specification Version (VS): 1.3 00:14:46.168 NVMe Specification Version (Identify): 1.3 00:14:46.168 Maximum Queue Entries: 256 00:14:46.168 Contiguous Queues Required: Yes 00:14:46.168 Arbitration Mechanisms Supported 00:14:46.168 Weighted Round Robin: Not Supported 00:14:46.168 Vendor Specific: Not Supported 00:14:46.168 Reset Timeout: 15000 ms 00:14:46.168 Doorbell Stride: 4 bytes 00:14:46.168 NVM Subsystem Reset: Not Supported 00:14:46.168 Command Sets Supported 00:14:46.168 NVM Command Set: Supported 00:14:46.168 Boot Partition: Not Supported 00:14:46.168 Memory Page Size Minimum: 4096 bytes 00:14:46.168 Memory Page Size Maximum: 4096 bytes 00:14:46.168 Persistent Memory Region: Not Supported 00:14:46.168 Optional Asynchronous Events Supported 00:14:46.168 Namespace Attribute Notices: Supported 00:14:46.168 Firmware Activation Notices: Not Supported 00:14:46.168 ANA Change Notices: Not Supported 00:14:46.168 PLE Aggregate Log Change Notices: Not Supported 00:14:46.168 LBA Status Info Alert Notices: Not Supported 00:14:46.168 EGE Aggregate Log Change Notices: Not Supported 00:14:46.168 Normal NVM Subsystem Shutdown event: Not Supported 00:14:46.168 Zone Descriptor Change Notices: Not Supported 00:14:46.168 Discovery Log Change Notices: Not Supported 00:14:46.168 Controller Attributes 00:14:46.168 128-bit Host Identifier: Supported 00:14:46.168 Non-Operational Permissive Mode: Not Supported 00:14:46.168 NVM Sets: Not Supported 00:14:46.168 Read Recovery Levels: Not Supported 00:14:46.168 Endurance Groups: Not Supported 00:14:46.168 Predictable Latency Mode: Not Supported 00:14:46.168 Traffic Based Keep ALive: Not Supported 00:14:46.168 Namespace Granularity: Not Supported 00:14:46.168 SQ Associations: Not Supported 00:14:46.168 UUID List: Not Supported 00:14:46.168 Multi-Domain Subsystem: Not Supported 00:14:46.168 Fixed Capacity Management: Not Supported 00:14:46.168 Variable Capacity Management: Not Supported 00:14:46.168 Delete Endurance Group: Not Supported 00:14:46.168 Delete NVM Set: Not Supported 00:14:46.168 Extended LBA Formats Supported: Not Supported 00:14:46.168 Flexible Data Placement Supported: Not Supported 00:14:46.168 00:14:46.168 Controller Memory Buffer Support 00:14:46.168 ================================ 00:14:46.168 Supported: No 00:14:46.168 00:14:46.168 Persistent Memory Region Support 00:14:46.168 
================================ 00:14:46.168 Supported: No 00:14:46.168 00:14:46.168 Admin Command Set Attributes 00:14:46.168 ============================ 00:14:46.168 Security Send/Receive: Not Supported 00:14:46.168 Format NVM: Not Supported 00:14:46.168 Firmware Activate/Download: Not Supported 00:14:46.168 Namespace Management: Not Supported 00:14:46.168 Device Self-Test: Not Supported 00:14:46.168 Directives: Not Supported 00:14:46.168 NVMe-MI: Not Supported 00:14:46.168 Virtualization Management: Not Supported 00:14:46.168 Doorbell Buffer Config: Not Supported 00:14:46.168 Get LBA Status Capability: Not Supported 00:14:46.168 Command & Feature Lockdown Capability: Not Supported 00:14:46.168 Abort Command Limit: 4 00:14:46.168 Async Event Request Limit: 4 00:14:46.168 Number of Firmware Slots: N/A 00:14:46.168 Firmware Slot 1 Read-Only: N/A 00:14:46.168 Firmware Activation Without Reset: N/A 00:14:46.168 Multiple Update Detection Support: N/A 00:14:46.168 Firmware Update Granularity: No Information Provided 00:14:46.168 Per-Namespace SMART Log: No 00:14:46.168 Asymmetric Namespace Access Log Page: Not Supported 00:14:46.168 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:46.168 Command Effects Log Page: Supported 00:14:46.168 Get Log Page Extended Data: Supported 00:14:46.168 Telemetry Log Pages: Not Supported 00:14:46.168 Persistent Event Log Pages: Not Supported 00:14:46.168 Supported Log Pages Log Page: May Support 00:14:46.168 Commands Supported & Effects Log Page: Not Supported 00:14:46.168 Feature Identifiers & Effects Log Page:May Support 00:14:46.168 NVMe-MI Commands & Effects Log Page: May Support 00:14:46.168 Data Area 4 for Telemetry Log: Not Supported 00:14:46.168 Error Log Page Entries Supported: 128 00:14:46.168 Keep Alive: Supported 00:14:46.168 Keep Alive Granularity: 10000 ms 00:14:46.168 00:14:46.168 NVM Command Set Attributes 00:14:46.168 ========================== 00:14:46.168 Submission Queue Entry Size 00:14:46.168 Max: 64 00:14:46.168 Min: 64 00:14:46.168 Completion Queue Entry Size 00:14:46.168 Max: 16 00:14:46.168 Min: 16 00:14:46.168 Number of Namespaces: 32 00:14:46.168 Compare Command: Supported 00:14:46.168 Write Uncorrectable Command: Not Supported 00:14:46.168 Dataset Management Command: Supported 00:14:46.168 Write Zeroes Command: Supported 00:14:46.168 Set Features Save Field: Not Supported 00:14:46.168 Reservations: Not Supported 00:14:46.168 Timestamp: Not Supported 00:14:46.168 Copy: Supported 00:14:46.168 Volatile Write Cache: Present 00:14:46.168 Atomic Write Unit (Normal): 1 00:14:46.168 Atomic Write Unit (PFail): 1 00:14:46.168 Atomic Compare & Write Unit: 1 00:14:46.168 Fused Compare & Write: Supported 00:14:46.168 Scatter-Gather List 00:14:46.168 SGL Command Set: Supported (Dword aligned) 00:14:46.168 SGL Keyed: Not Supported 00:14:46.168 SGL Bit Bucket Descriptor: Not Supported 00:14:46.168 SGL Metadata Pointer: Not Supported 00:14:46.168 Oversized SGL: Not Supported 00:14:46.168 SGL Metadata Address: Not Supported 00:14:46.168 SGL Offset: Not Supported 00:14:46.168 Transport SGL Data Block: Not Supported 00:14:46.168 Replay Protected Memory Block: Not Supported 00:14:46.168 00:14:46.168 Firmware Slot Information 00:14:46.168 ========================= 00:14:46.168 Active slot: 1 00:14:46.168 Slot 1 Firmware Revision: 25.01 00:14:46.168 00:14:46.168 00:14:46.168 Commands Supported and Effects 00:14:46.168 ============================== 00:14:46.168 Admin Commands 00:14:46.168 -------------- 00:14:46.168 Get Log Page (02h): Supported 
00:14:46.168 Identify (06h): Supported 00:14:46.168 Abort (08h): Supported 00:14:46.168 Set Features (09h): Supported 00:14:46.168 Get Features (0Ah): Supported 00:14:46.168 Asynchronous Event Request (0Ch): Supported 00:14:46.168 Keep Alive (18h): Supported 00:14:46.168 I/O Commands 00:14:46.168 ------------ 00:14:46.168 Flush (00h): Supported LBA-Change 00:14:46.168 Write (01h): Supported LBA-Change 00:14:46.168 Read (02h): Supported 00:14:46.168 Compare (05h): Supported 00:14:46.168 Write Zeroes (08h): Supported LBA-Change 00:14:46.168 Dataset Management (09h): Supported LBA-Change 00:14:46.168 Copy (19h): Supported LBA-Change 00:14:46.168 00:14:46.168 Error Log 00:14:46.168 ========= 00:14:46.168 00:14:46.168 Arbitration 00:14:46.168 =========== 00:14:46.168 Arbitration Burst: 1 00:14:46.168 00:14:46.168 Power Management 00:14:46.168 ================ 00:14:46.168 Number of Power States: 1 00:14:46.168 Current Power State: Power State #0 00:14:46.168 Power State #0: 00:14:46.168 Max Power: 0.00 W 00:14:46.168 Non-Operational State: Operational 00:14:46.168 Entry Latency: Not Reported 00:14:46.168 Exit Latency: Not Reported 00:14:46.168 Relative Read Throughput: 0 00:14:46.168 Relative Read Latency: 0 00:14:46.168 Relative Write Throughput: 0 00:14:46.168 Relative Write Latency: 0 00:14:46.168 Idle Power: Not Reported 00:14:46.168 Active Power: Not Reported 00:14:46.168 Non-Operational Permissive Mode: Not Supported 00:14:46.168 00:14:46.168 Health Information 00:14:46.168 ================== 00:14:46.168 Critical Warnings: 00:14:46.168 Available Spare Space: OK 00:14:46.168 Temperature: OK 00:14:46.168 Device Reliability: OK 00:14:46.168 Read Only: No 00:14:46.168 Volatile Memory Backup: OK 00:14:46.168 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:46.168 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:46.168 Available Spare: 0% 00:14:46.168 Available Sp[2024-10-11 11:51:48.809271] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:46.168 [2024-10-11 11:51:48.809279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:46.168 [2024-10-11 11:51:48.809302] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:14:46.169 [2024-10-11 11:51:48.809309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:46.169 [2024-10-11 11:51:48.809314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:46.169 [2024-10-11 11:51:48.809318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:46.169 [2024-10-11 11:51:48.809322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:46.169 [2024-10-11 11:51:48.809506] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:46.169 [2024-10-11 11:51:48.809514] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:46.169 [2024-10-11 11:51:48.810509] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling 
controller 00:14:46.169 [2024-10-11 11:51:48.810547] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:14:46.169 [2024-10-11 11:51:48.810552] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:14:46.169 [2024-10-11 11:51:48.811515] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:46.169 [2024-10-11 11:51:48.811523] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:14:46.169 [2024-10-11 11:51:48.811574] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:46.169 [2024-10-11 11:51:48.816067] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:46.169 are Threshold: 0% 00:14:46.169 Life Percentage Used: 0% 00:14:46.169 Data Units Read: 0 00:14:46.169 Data Units Written: 0 00:14:46.169 Host Read Commands: 0 00:14:46.169 Host Write Commands: 0 00:14:46.169 Controller Busy Time: 0 minutes 00:14:46.169 Power Cycles: 0 00:14:46.169 Power On Hours: 0 hours 00:14:46.169 Unsafe Shutdowns: 0 00:14:46.169 Unrecoverable Media Errors: 0 00:14:46.169 Lifetime Error Log Entries: 0 00:14:46.169 Warning Temperature Time: 0 minutes 00:14:46.169 Critical Temperature Time: 0 minutes 00:14:46.169 00:14:46.169 Number of Queues 00:14:46.169 ================ 00:14:46.169 Number of I/O Submission Queues: 127 00:14:46.169 Number of I/O Completion Queues: 127 00:14:46.169 00:14:46.169 Active Namespaces 00:14:46.169 ================= 00:14:46.169 Namespace ID:1 00:14:46.169 Error Recovery Timeout: Unlimited 00:14:46.169 Command Set Identifier: NVM (00h) 00:14:46.169 Deallocate: Supported 00:14:46.169 Deallocated/Unwritten Error: Not Supported 00:14:46.169 Deallocated Read Value: Unknown 00:14:46.169 Deallocate in Write Zeroes: Not Supported 00:14:46.169 Deallocated Guard Field: 0xFFFF 00:14:46.169 Flush: Supported 00:14:46.169 Reservation: Supported 00:14:46.169 Namespace Sharing Capabilities: Multiple Controllers 00:14:46.169 Size (in LBAs): 131072 (0GiB) 00:14:46.169 Capacity (in LBAs): 131072 (0GiB) 00:14:46.169 Utilization (in LBAs): 131072 (0GiB) 00:14:46.169 NGUID: 6DA437C962864FB08AF3FCA370E5F6AB 00:14:46.169 UUID: 6da437c9-6286-4fb0-8af3-fca370e5f6ab 00:14:46.169 Thin Provisioning: Not Supported 00:14:46.169 Per-NS Atomic Units: Yes 00:14:46.169 Atomic Boundary Size (Normal): 0 00:14:46.169 Atomic Boundary Size (PFail): 0 00:14:46.169 Atomic Boundary Offset: 0 00:14:46.169 Maximum Single Source Range Length: 65535 00:14:46.169 Maximum Copy Length: 65535 00:14:46.169 Maximum Source Range Count: 1 00:14:46.169 NGUID/EUI64 Never Reused: No 00:14:46.169 Namespace Write Protected: No 00:14:46.169 Number of LBA Formats: 1 00:14:46.169 Current LBA Format: LBA Format #00 00:14:46.169 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:46.169 00:14:46.169 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:46.430 [2024-10-11 11:51:48.984694] vfio_user.c:2836:enable_ctrlr: *NOTICE*: 
/var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:51.719 Initializing NVMe Controllers 00:14:51.719 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:51.719 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:51.719 Initialization complete. Launching workers. 00:14:51.719 ======================================================== 00:14:51.719 Latency(us) 00:14:51.719 Device Information : IOPS MiB/s Average min max 00:14:51.719 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39968.49 156.13 3202.39 851.28 9757.31 00:14:51.719 ======================================================== 00:14:51.719 Total : 39968.49 156.13 3202.39 851.28 9757.31 00:14:51.719 00:14:51.719 [2024-10-11 11:51:54.003050] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:51.719 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:51.719 [2024-10-11 11:51:54.186847] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:57.007 Initializing NVMe Controllers 00:14:57.007 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:57.007 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:57.007 Initialization complete. Launching workers. 00:14:57.007 ======================================================== 00:14:57.007 Latency(us) 00:14:57.007 Device Information : IOPS MiB/s Average min max 00:14:57.007 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16059.08 62.73 7976.91 5987.94 9974.56 00:14:57.007 ======================================================== 00:14:57.007 Total : 16059.08 62.73 7976.91 5987.94 9974.56 00:14:57.007 00:14:57.007 [2024-10-11 11:51:59.226758] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:57.007 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:57.007 [2024-10-11 11:51:59.416597] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:02.295 [2024-10-11 11:52:04.495308] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:02.295 Initializing NVMe Controllers 00:15:02.295 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:02.295 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:02.295 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:02.295 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:02.296 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:02.296 Initialization complete. Launching workers. 
00:15:02.296 Starting thread on core 2 00:15:02.296 Starting thread on core 3 00:15:02.296 Starting thread on core 1 00:15:02.296 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:02.296 [2024-10-11 11:52:04.737422] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:05.597 [2024-10-11 11:52:07.905209] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:05.597 Initializing NVMe Controllers 00:15:05.597 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:05.597 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:05.597 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:05.597 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:05.597 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:05.597 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:05.597 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:05.597 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:05.597 Initialization complete. Launching workers. 00:15:05.597 Starting thread on core 1 with urgent priority queue 00:15:05.597 Starting thread on core 2 with urgent priority queue 00:15:05.597 Starting thread on core 3 with urgent priority queue 00:15:05.597 Starting thread on core 0 with urgent priority queue 00:15:05.597 SPDK bdev Controller (SPDK1 ) core 0: 7652.33 IO/s 13.07 secs/100000 ios 00:15:05.597 SPDK bdev Controller (SPDK1 ) core 1: 11879.33 IO/s 8.42 secs/100000 ios 00:15:05.597 SPDK bdev Controller (SPDK1 ) core 2: 9455.67 IO/s 10.58 secs/100000 ios 00:15:05.597 SPDK bdev Controller (SPDK1 ) core 3: 11430.00 IO/s 8.75 secs/100000 ios 00:15:05.597 ======================================================== 00:15:05.597 00:15:05.597 11:52:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:05.597 [2024-10-11 11:52:08.128480] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:05.597 Initializing NVMe Controllers 00:15:05.597 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:05.597 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:05.597 Namespace ID: 1 size: 0GB 00:15:05.597 Initialization complete. 00:15:05.597 INFO: using host memory buffer for IO 00:15:05.597 Hello world! 
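Every example binary exercised in this block (spdk_nvme_identify, spdk_nvme_perf, reconnect, arbitration, hello_world) reaches the controller through the same -r transport ID string instead of a PCI address: trtype VFIOUSER, traddr pointing at the per-device socket directory, and the subsystem NQN. A recap of the invocations traced above, with paths shortened to repo-relative form (an assumption that the commands run from the SPDK checkout root; flags are otherwise verbatim from the log):

    tr='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
    ./build/bin/spdk_nvme_identify -r "$tr" -g -L nvme -L nvme_vfio -L vfio_pci
    ./build/examples/hello_world   -d 256 -g -r "$tr"
    # 4 KiB I/O, queue depth 128, 5 s run, core mask 0x2; -w read / -w write selects the workload
    ./build/bin/spdk_nvme_perf     -r "$tr" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2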
00:15:05.597 [2024-10-11 11:52:08.162669] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:05.597 11:52:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:05.858 [2024-10-11 11:52:08.388459] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:06.804 Initializing NVMe Controllers 00:15:06.804 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:06.804 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:06.804 Initialization complete. Launching workers. 00:15:06.804 submit (in ns) avg, min, max = 4768.6, 2862.5, 3998054.2 00:15:06.804 complete (in ns) avg, min, max = 17906.6, 1633.3, 3998556.7 00:15:06.804 00:15:06.804 Submit histogram 00:15:06.804 ================ 00:15:06.804 Range in us Cumulative Count 00:15:06.804 2.853 - 2.867: 0.0342% ( 7) 00:15:06.804 2.867 - 2.880: 0.4152% ( 78) 00:15:06.804 2.880 - 2.893: 1.5678% ( 236) 00:15:06.804 2.893 - 2.907: 3.5557% ( 407) 00:15:06.804 2.907 - 2.920: 7.1066% ( 727) 00:15:06.804 2.920 - 2.933: 11.3559% ( 870) 00:15:06.804 2.933 - 2.947: 16.9776% ( 1151) 00:15:06.804 2.947 - 2.960: 24.0207% ( 1442) 00:15:06.804 2.960 - 2.973: 31.3471% ( 1500) 00:15:06.804 2.973 - 2.987: 37.5696% ( 1274) 00:15:06.804 2.987 - 3.000: 43.5284% ( 1220) 00:15:06.804 3.000 - 3.013: 50.2296% ( 1372) 00:15:06.804 3.013 - 3.027: 58.7233% ( 1739) 00:15:06.804 3.027 - 3.040: 68.8971% ( 2083) 00:15:06.804 3.040 - 3.053: 77.4055% ( 1742) 00:15:06.804 3.053 - 3.067: 83.8185% ( 1313) 00:15:06.804 3.067 - 3.080: 89.0739% ( 1076) 00:15:06.804 3.080 - 3.093: 93.3721% ( 880) 00:15:06.804 3.093 - 3.107: 96.0535% ( 549) 00:15:06.804 3.107 - 3.120: 98.0072% ( 400) 00:15:06.804 3.120 - 3.133: 98.9548% ( 194) 00:15:06.804 3.133 - 3.147: 99.3992% ( 91) 00:15:06.804 3.147 - 3.160: 99.5897% ( 39) 00:15:06.804 3.160 - 3.173: 99.6386% ( 10) 00:15:06.804 3.173 - 3.187: 99.6435% ( 1) 00:15:06.804 3.187 - 3.200: 99.6581% ( 3) 00:15:06.804 3.227 - 3.240: 99.6630% ( 1) 00:15:06.804 3.387 - 3.400: 99.6679% ( 1) 00:15:06.804 3.467 - 3.493: 99.6728% ( 1) 00:15:06.804 3.573 - 3.600: 99.6776% ( 1) 00:15:06.804 3.707 - 3.733: 99.6825% ( 1) 00:15:06.804 3.760 - 3.787: 99.6874% ( 1) 00:15:06.804 3.813 - 3.840: 99.6972% ( 2) 00:15:06.804 3.920 - 3.947: 99.7021% ( 1) 00:15:06.804 4.027 - 4.053: 99.7069% ( 1) 00:15:06.804 4.053 - 4.080: 99.7118% ( 1) 00:15:06.804 4.080 - 4.107: 99.7167% ( 1) 00:15:06.804 4.133 - 4.160: 99.7216% ( 1) 00:15:06.804 4.320 - 4.347: 99.7265% ( 1) 00:15:06.804 4.400 - 4.427: 99.7314% ( 1) 00:15:06.804 4.427 - 4.453: 99.7363% ( 1) 00:15:06.804 4.533 - 4.560: 99.7460% ( 2) 00:15:06.804 4.560 - 4.587: 99.7509% ( 1) 00:15:06.804 4.613 - 4.640: 99.7607% ( 2) 00:15:06.804 4.720 - 4.747: 99.7704% ( 2) 00:15:06.805 4.827 - 4.853: 99.7753% ( 1) 00:15:06.805 4.880 - 4.907: 99.7802% ( 1) 00:15:06.805 4.933 - 4.960: 99.7900% ( 2) 00:15:06.805 5.040 - 5.067: 99.7949% ( 1) 00:15:06.805 5.067 - 5.093: 99.7997% ( 1) 00:15:06.805 5.093 - 5.120: 99.8095% ( 2) 00:15:06.805 5.120 - 5.147: 99.8144% ( 1) 00:15:06.805 5.147 - 5.173: 99.8242% ( 2) 00:15:06.805 5.173 - 5.200: 99.8291% ( 1) 00:15:06.805 5.200 - 5.227: 99.8339% ( 1) 00:15:06.805 5.280 - 5.307: 99.8388% ( 1) 00:15:06.805 5.333 - 5.360: 99.8437% ( 1) 00:15:06.805 5.360 - 5.387: 
99.8486% ( 1) 00:15:06.805 5.413 - 5.440: 99.8535% ( 1) 00:15:06.805 5.440 - 5.467: 99.8584% ( 1) 00:15:06.805 5.493 - 5.520: 99.8730% ( 3) 00:15:06.805 5.547 - 5.573: 99.8779% ( 1) 00:15:06.805 5.573 - 5.600: 99.8828% ( 1) 00:15:06.805 5.680 - 5.707: 99.8877% ( 1) 00:15:06.805 5.707 - 5.733: 99.8925% ( 1) 00:15:06.805 5.733 - 5.760: 99.8974% ( 1) 00:15:06.805 5.867 - 5.893: 99.9023% ( 1) 00:15:06.805 5.973 - 6.000: 99.9072% ( 1) 00:15:06.805 6.053 - 6.080: 99.9121% ( 1) 00:15:06.805 6.160 - 6.187: 99.9219% ( 2) 00:15:06.805 6.240 - 6.267: 99.9267% ( 1) 00:15:06.805 6.293 - 6.320: 99.9365% ( 2) 00:15:06.805 6.320 - 6.347: 99.9414% ( 1) 00:15:06.805 6.587 - 6.613: 99.9512% ( 2) 00:15:06.805 7.200 - 7.253: 99.9560% ( 1) 00:15:06.805 3986.773 - 4014.080: 100.0000% ( 9) 00:15:06.805 00:15:06.805 Complete histogram 00:15:06.805 ================== 00:15:06.805 Range in us Cumulative Count 00:15:06.805 1.633 - 1.640: 0.0635% ( 13) 00:15:06.805 1.640 - [2024-10-11 11:52:09.408973] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:06.805 1.647: 0.7082% ( 132) 00:15:06.805 1.647 - 1.653: 0.7668% ( 12) 00:15:06.805 1.653 - 1.660: 0.8108% ( 9) 00:15:06.805 1.660 - 1.667: 0.8792% ( 14) 00:15:06.805 1.667 - 1.673: 0.8987% ( 4) 00:15:06.805 1.673 - 1.680: 0.9085% ( 2) 00:15:06.805 1.680 - 1.687: 1.2553% ( 71) 00:15:06.805 1.687 - 1.693: 29.8183% ( 5848) 00:15:06.805 1.693 - 1.700: 47.0108% ( 3520) 00:15:06.805 1.700 - 1.707: 56.3593% ( 1914) 00:15:06.805 1.707 - 1.720: 75.5299% ( 3925) 00:15:06.805 1.720 - 1.733: 82.3581% ( 1398) 00:15:06.805 1.733 - 1.747: 83.6231% ( 259) 00:15:06.805 1.747 - 1.760: 87.5208% ( 798) 00:15:06.805 1.760 - 1.773: 93.0888% ( 1140) 00:15:06.805 1.773 - 1.787: 96.9278% ( 786) 00:15:06.805 1.787 - 1.800: 98.7741% ( 378) 00:15:06.805 1.800 - 1.813: 99.3064% ( 109) 00:15:06.805 1.813 - 1.827: 99.4334% ( 26) 00:15:06.805 1.827 - 1.840: 99.4481% ( 3) 00:15:06.805 3.307 - 3.320: 99.4530% ( 1) 00:15:06.805 3.360 - 3.373: 99.4578% ( 1) 00:15:06.805 3.413 - 3.440: 99.4676% ( 2) 00:15:06.805 3.440 - 3.467: 99.4725% ( 1) 00:15:06.805 3.467 - 3.493: 99.4823% ( 2) 00:15:06.805 3.600 - 3.627: 99.4872% ( 1) 00:15:06.805 3.653 - 3.680: 99.4920% ( 1) 00:15:06.805 3.707 - 3.733: 99.4969% ( 1) 00:15:06.805 3.733 - 3.760: 99.5018% ( 1) 00:15:06.805 3.840 - 3.867: 99.5067% ( 1) 00:15:06.805 3.893 - 3.920: 99.5116% ( 1) 00:15:06.805 3.920 - 3.947: 99.5165% ( 1) 00:15:06.805 3.973 - 4.000: 99.5213% ( 1) 00:15:06.805 4.080 - 4.107: 99.5262% ( 1) 00:15:06.805 4.107 - 4.133: 99.5311% ( 1) 00:15:06.805 4.133 - 4.160: 99.5360% ( 1) 00:15:06.805 4.240 - 4.267: 99.5458% ( 2) 00:15:06.805 4.533 - 4.560: 99.5506% ( 1) 00:15:06.805 4.560 - 4.587: 99.5555% ( 1) 00:15:06.805 4.613 - 4.640: 99.5604% ( 1) 00:15:06.805 4.933 - 4.960: 99.5653% ( 1) 00:15:06.805 5.067 - 5.093: 99.5751% ( 2) 00:15:06.805 5.093 - 5.120: 99.5800% ( 1) 00:15:06.805 5.333 - 5.360: 99.5848% ( 1) 00:15:06.805 5.947 - 5.973: 99.5897% ( 1) 00:15:06.805 10.880 - 10.933: 99.5946% ( 1) 00:15:06.805 3986.773 - 4014.080: 100.0000% ( 83) 00:15:06.805 00:15:06.805 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:06.805 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:06.805 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local 
subnqn=nqn.2019-07.io.spdk:cnode1 00:15:06.805 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:06.805 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:07.082 [ 00:15:07.082 { 00:15:07.082 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:07.082 "subtype": "Discovery", 00:15:07.082 "listen_addresses": [], 00:15:07.082 "allow_any_host": true, 00:15:07.082 "hosts": [] 00:15:07.082 }, 00:15:07.082 { 00:15:07.082 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:07.082 "subtype": "NVMe", 00:15:07.082 "listen_addresses": [ 00:15:07.082 { 00:15:07.082 "trtype": "VFIOUSER", 00:15:07.082 "adrfam": "IPv4", 00:15:07.082 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:07.082 "trsvcid": "0" 00:15:07.082 } 00:15:07.082 ], 00:15:07.082 "allow_any_host": true, 00:15:07.082 "hosts": [], 00:15:07.082 "serial_number": "SPDK1", 00:15:07.082 "model_number": "SPDK bdev Controller", 00:15:07.082 "max_namespaces": 32, 00:15:07.082 "min_cntlid": 1, 00:15:07.082 "max_cntlid": 65519, 00:15:07.082 "namespaces": [ 00:15:07.082 { 00:15:07.082 "nsid": 1, 00:15:07.082 "bdev_name": "Malloc1", 00:15:07.082 "name": "Malloc1", 00:15:07.082 "nguid": "6DA437C962864FB08AF3FCA370E5F6AB", 00:15:07.082 "uuid": "6da437c9-6286-4fb0-8af3-fca370e5f6ab" 00:15:07.082 } 00:15:07.082 ] 00:15:07.082 }, 00:15:07.082 { 00:15:07.082 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:07.082 "subtype": "NVMe", 00:15:07.082 "listen_addresses": [ 00:15:07.082 { 00:15:07.082 "trtype": "VFIOUSER", 00:15:07.082 "adrfam": "IPv4", 00:15:07.082 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:07.082 "trsvcid": "0" 00:15:07.082 } 00:15:07.082 ], 00:15:07.082 "allow_any_host": true, 00:15:07.082 "hosts": [], 00:15:07.082 "serial_number": "SPDK2", 00:15:07.082 "model_number": "SPDK bdev Controller", 00:15:07.082 "max_namespaces": 32, 00:15:07.082 "min_cntlid": 1, 00:15:07.082 "max_cntlid": 65519, 00:15:07.082 "namespaces": [ 00:15:07.082 { 00:15:07.082 "nsid": 1, 00:15:07.082 "bdev_name": "Malloc2", 00:15:07.082 "name": "Malloc2", 00:15:07.082 "nguid": "AD48E470B2A548BAA993AA5FE45628F9", 00:15:07.082 "uuid": "ad48e470-b2a5-48ba-a993-aa5fe45628f9" 00:15:07.082 } 00:15:07.082 ] 00:15:07.082 } 00:15:07.082 ] 00:15:07.082 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:07.082 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1883157 00:15:07.082 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:07.082 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:07.082 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:07.082 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:07.082 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:07.082 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:07.082 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:07.082 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:07.082 [2024-10-11 11:52:09.774474] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:07.384 Malloc3 00:15:07.384 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:07.384 [2024-10-11 11:52:09.986895] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:07.384 11:52:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:07.384 Asynchronous Event Request test 00:15:07.384 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:07.384 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:07.384 Registering asynchronous event callbacks... 00:15:07.384 Starting namespace attribute notice tests for all controllers... 00:15:07.384 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:07.384 aer_cb - Changed Namespace 00:15:07.384 Cleaning up... 00:15:07.708 [ 00:15:07.708 { 00:15:07.708 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:07.708 "subtype": "Discovery", 00:15:07.708 "listen_addresses": [], 00:15:07.708 "allow_any_host": true, 00:15:07.708 "hosts": [] 00:15:07.708 }, 00:15:07.708 { 00:15:07.708 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:07.708 "subtype": "NVMe", 00:15:07.708 "listen_addresses": [ 00:15:07.708 { 00:15:07.708 "trtype": "VFIOUSER", 00:15:07.708 "adrfam": "IPv4", 00:15:07.708 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:07.708 "trsvcid": "0" 00:15:07.708 } 00:15:07.708 ], 00:15:07.708 "allow_any_host": true, 00:15:07.708 "hosts": [], 00:15:07.708 "serial_number": "SPDK1", 00:15:07.708 "model_number": "SPDK bdev Controller", 00:15:07.708 "max_namespaces": 32, 00:15:07.708 "min_cntlid": 1, 00:15:07.708 "max_cntlid": 65519, 00:15:07.708 "namespaces": [ 00:15:07.708 { 00:15:07.708 "nsid": 1, 00:15:07.708 "bdev_name": "Malloc1", 00:15:07.708 "name": "Malloc1", 00:15:07.708 "nguid": "6DA437C962864FB08AF3FCA370E5F6AB", 00:15:07.708 "uuid": "6da437c9-6286-4fb0-8af3-fca370e5f6ab" 00:15:07.708 }, 00:15:07.708 { 00:15:07.708 "nsid": 2, 00:15:07.708 "bdev_name": "Malloc3", 00:15:07.708 "name": "Malloc3", 00:15:07.708 "nguid": "BDE7CFAFC3FA4A7FAFD2FCA99FC2B97D", 00:15:07.708 "uuid": "bde7cfaf-c3fa-4a7f-afd2-fca99fc2b97d" 00:15:07.708 } 00:15:07.708 ] 00:15:07.708 }, 00:15:07.708 { 00:15:07.708 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:07.708 "subtype": "NVMe", 00:15:07.708 "listen_addresses": [ 00:15:07.708 { 00:15:07.708 "trtype": "VFIOUSER", 00:15:07.708 "adrfam": "IPv4", 00:15:07.708 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:07.708 "trsvcid": "0" 00:15:07.708 } 00:15:07.708 ], 00:15:07.708 "allow_any_host": true, 00:15:07.708 "hosts": [], 00:15:07.708 "serial_number": "SPDK2", 00:15:07.708 "model_number": "SPDK bdev 
Controller", 00:15:07.708 "max_namespaces": 32, 00:15:07.708 "min_cntlid": 1, 00:15:07.708 "max_cntlid": 65519, 00:15:07.708 "namespaces": [ 00:15:07.708 { 00:15:07.708 "nsid": 1, 00:15:07.708 "bdev_name": "Malloc2", 00:15:07.708 "name": "Malloc2", 00:15:07.708 "nguid": "AD48E470B2A548BAA993AA5FE45628F9", 00:15:07.708 "uuid": "ad48e470-b2a5-48ba-a993-aa5fe45628f9" 00:15:07.708 } 00:15:07.708 ] 00:15:07.708 } 00:15:07.708 ] 00:15:07.708 11:52:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1883157 00:15:07.708 11:52:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:07.708 11:52:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:07.708 11:52:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:07.708 11:52:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:07.708 [2024-10-11 11:52:10.228817] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:15:07.708 [2024-10-11 11:52:10.228882] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1883368 ] 00:15:07.708 [2024-10-11 11:52:10.258141] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:07.708 [2024-10-11 11:52:10.269907] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:07.708 [2024-10-11 11:52:10.269925] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f798fa78000 00:15:07.708 [2024-10-11 11:52:10.270908] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:07.708 [2024-10-11 11:52:10.271910] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:07.708 [2024-10-11 11:52:10.272921] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:07.708 [2024-10-11 11:52:10.273922] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:07.708 [2024-10-11 11:52:10.274928] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:07.709 [2024-10-11 11:52:10.275939] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:07.709 [2024-10-11 11:52:10.276948] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:07.709 [2024-10-11 11:52:10.277958] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:15:07.709 [2024-10-11 11:52:10.278965] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:07.709 [2024-10-11 11:52:10.278976] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f798fa6d000 00:15:07.709 [2024-10-11 11:52:10.279890] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:07.709 [2024-10-11 11:52:10.291262] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:07.709 [2024-10-11 11:52:10.291284] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:07.709 [2024-10-11 11:52:10.293328] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:07.709 [2024-10-11 11:52:10.293360] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:07.709 [2024-10-11 11:52:10.293424] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:15:07.709 [2024-10-11 11:52:10.293440] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:07.709 [2024-10-11 11:52:10.293444] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:15:07.709 [2024-10-11 11:52:10.295068] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:07.709 [2024-10-11 11:52:10.295077] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:07.709 [2024-10-11 11:52:10.295082] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:07.709 [2024-10-11 11:52:10.295336] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:07.709 [2024-10-11 11:52:10.295342] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:07.709 [2024-10-11 11:52:10.295347] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:07.709 [2024-10-11 11:52:10.296341] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:07.709 [2024-10-11 11:52:10.296348] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:07.709 [2024-10-11 11:52:10.297346] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:07.709 [2024-10-11 11:52:10.297353] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:07.709 [2024-10-11 
11:52:10.297357] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:07.709 [2024-10-11 11:52:10.297361] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:07.709 [2024-10-11 11:52:10.297465] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:07.709 [2024-10-11 11:52:10.297469] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:07.709 [2024-10-11 11:52:10.297473] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:07.709 [2024-10-11 11:52:10.298353] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:07.709 [2024-10-11 11:52:10.299355] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:07.709 [2024-10-11 11:52:10.300361] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:07.709 [2024-10-11 11:52:10.301364] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:07.709 [2024-10-11 11:52:10.301399] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:07.709 [2024-10-11 11:52:10.302370] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:07.709 [2024-10-11 11:52:10.302377] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:07.709 [2024-10-11 11:52:10.302382] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:07.709 [2024-10-11 11:52:10.302397] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:07.709 [2024-10-11 11:52:10.302402] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:07.709 [2024-10-11 11:52:10.302414] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:07.709 [2024-10-11 11:52:10.302417] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:07.709 [2024-10-11 11:52:10.302420] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:07.709 [2024-10-11 11:52:10.302430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:07.709 [2024-10-11 11:52:10.313070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:07.709 [2024-10-11 11:52:10.313080] 
nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:07.709 [2024-10-11 11:52:10.313084] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:07.709 [2024-10-11 11:52:10.313087] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:07.709 [2024-10-11 11:52:10.313090] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:07.709 [2024-10-11 11:52:10.313093] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:07.709 [2024-10-11 11:52:10.313097] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:07.709 [2024-10-11 11:52:10.313100] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:07.709 [2024-10-11 11:52:10.313105] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:15:07.709 [2024-10-11 11:52:10.313113] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:07.709 [2024-10-11 11:52:10.321067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:07.709 [2024-10-11 11:52:10.321078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:07.709 [2024-10-11 11:52:10.321084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:07.709 [2024-10-11 11:52:10.321090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:07.709 [2024-10-11 11:52:10.321097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:07.709 [2024-10-11 11:52:10.321100] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:07.709 [2024-10-11 11:52:10.321107] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:07.709 [2024-10-11 11:52:10.321114] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:07.709 [2024-10-11 11:52:10.329066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:07.709 [2024-10-11 11:52:10.329075] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:07.709 [2024-10-11 11:52:10.329079] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:07.709 [2024-10-11 11:52:10.329083] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:07.709 [2024-10-11 11:52:10.329089] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:07.709 [2024-10-11 11:52:10.329096] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:07.709 [2024-10-11 11:52:10.337069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:07.709 [2024-10-11 11:52:10.337115] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:15:07.709 [2024-10-11 11:52:10.337121] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:07.709 [2024-10-11 11:52:10.337127] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:07.709 [2024-10-11 11:52:10.337130] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:07.709 [2024-10-11 11:52:10.337132] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:07.709 [2024-10-11 11:52:10.337137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:07.709 [2024-10-11 11:52:10.345069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:07.709 [2024-10-11 11:52:10.345078] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:07.709 [2024-10-11 11:52:10.345088] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:07.709 [2024-10-11 11:52:10.345093] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:07.709 [2024-10-11 11:52:10.345098] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:07.709 [2024-10-11 11:52:10.345101] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:07.709 [2024-10-11 11:52:10.345104] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:07.709 [2024-10-11 11:52:10.345108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:07.709 [2024-10-11 11:52:10.353067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:07.709 [2024-10-11 11:52:10.353079] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:07.709 [2024-10-11 11:52:10.353085] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:07.709 
[2024-10-11 11:52:10.353090] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:07.710 [2024-10-11 11:52:10.353093] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:07.710 [2024-10-11 11:52:10.353095] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:07.710 [2024-10-11 11:52:10.353103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:07.710 [2024-10-11 11:52:10.361067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:07.710 [2024-10-11 11:52:10.361075] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:07.710 [2024-10-11 11:52:10.361080] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:15:07.710 [2024-10-11 11:52:10.361086] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:15:07.710 [2024-10-11 11:52:10.361090] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:15:07.710 [2024-10-11 11:52:10.361093] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:07.710 [2024-10-11 11:52:10.361097] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:15:07.710 [2024-10-11 11:52:10.361101] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:15:07.710 [2024-10-11 11:52:10.361104] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:15:07.710 [2024-10-11 11:52:10.361108] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:15:07.710 [2024-10-11 11:52:10.361123] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:07.710 [2024-10-11 11:52:10.369069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:07.710 [2024-10-11 11:52:10.369079] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:07.710 [2024-10-11 11:52:10.377068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:07.710 [2024-10-11 11:52:10.377078] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:07.710 [2024-10-11 11:52:10.385066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:07.710 [2024-10-11 11:52:10.385076] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: 
GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:07.710 [2024-10-11 11:52:10.393069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:07.710 [2024-10-11 11:52:10.393082] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:07.710 [2024-10-11 11:52:10.393085] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:07.710 [2024-10-11 11:52:10.393088] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:07.710 [2024-10-11 11:52:10.393090] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:07.710 [2024-10-11 11:52:10.393093] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:07.710 [2024-10-11 11:52:10.393097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:07.710 [2024-10-11 11:52:10.393103] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:07.710 [2024-10-11 11:52:10.393108] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:07.710 [2024-10-11 11:52:10.393110] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:07.710 [2024-10-11 11:52:10.393114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:07.710 [2024-10-11 11:52:10.393120] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:07.710 [2024-10-11 11:52:10.393123] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:07.710 [2024-10-11 11:52:10.393125] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:07.710 [2024-10-11 11:52:10.393129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:07.710 [2024-10-11 11:52:10.393135] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:07.710 [2024-10-11 11:52:10.393138] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:07.710 [2024-10-11 11:52:10.393140] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:07.710 [2024-10-11 11:52:10.393144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:07.710 [2024-10-11 11:52:10.401069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:07.710 [2024-10-11 11:52:10.401080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:07.710 [2024-10-11 11:52:10.401088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:07.710 [2024-10-11 11:52:10.401093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) 
qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:07.710 ===================================================== 00:15:07.710 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:07.710 ===================================================== 00:15:07.710 Controller Capabilities/Features 00:15:07.710 ================================ 00:15:07.710 Vendor ID: 4e58 00:15:07.710 Subsystem Vendor ID: 4e58 00:15:07.710 Serial Number: SPDK2 00:15:07.710 Model Number: SPDK bdev Controller 00:15:07.710 Firmware Version: 25.01 00:15:07.710 Recommended Arb Burst: 6 00:15:07.710 IEEE OUI Identifier: 8d 6b 50 00:15:07.710 Multi-path I/O 00:15:07.710 May have multiple subsystem ports: Yes 00:15:07.710 May have multiple controllers: Yes 00:15:07.710 Associated with SR-IOV VF: No 00:15:07.710 Max Data Transfer Size: 131072 00:15:07.710 Max Number of Namespaces: 32 00:15:07.710 Max Number of I/O Queues: 127 00:15:07.710 NVMe Specification Version (VS): 1.3 00:15:07.710 NVMe Specification Version (Identify): 1.3 00:15:07.710 Maximum Queue Entries: 256 00:15:07.710 Contiguous Queues Required: Yes 00:15:07.710 Arbitration Mechanisms Supported 00:15:07.710 Weighted Round Robin: Not Supported 00:15:07.710 Vendor Specific: Not Supported 00:15:07.710 Reset Timeout: 15000 ms 00:15:07.710 Doorbell Stride: 4 bytes 00:15:07.710 NVM Subsystem Reset: Not Supported 00:15:07.710 Command Sets Supported 00:15:07.710 NVM Command Set: Supported 00:15:07.710 Boot Partition: Not Supported 00:15:07.710 Memory Page Size Minimum: 4096 bytes 00:15:07.710 Memory Page Size Maximum: 4096 bytes 00:15:07.710 Persistent Memory Region: Not Supported 00:15:07.710 Optional Asynchronous Events Supported 00:15:07.710 Namespace Attribute Notices: Supported 00:15:07.710 Firmware Activation Notices: Not Supported 00:15:07.710 ANA Change Notices: Not Supported 00:15:07.710 PLE Aggregate Log Change Notices: Not Supported 00:15:07.710 LBA Status Info Alert Notices: Not Supported 00:15:07.710 EGE Aggregate Log Change Notices: Not Supported 00:15:07.710 Normal NVM Subsystem Shutdown event: Not Supported 00:15:07.710 Zone Descriptor Change Notices: Not Supported 00:15:07.710 Discovery Log Change Notices: Not Supported 00:15:07.710 Controller Attributes 00:15:07.710 128-bit Host Identifier: Supported 00:15:07.710 Non-Operational Permissive Mode: Not Supported 00:15:07.710 NVM Sets: Not Supported 00:15:07.710 Read Recovery Levels: Not Supported 00:15:07.710 Endurance Groups: Not Supported 00:15:07.710 Predictable Latency Mode: Not Supported 00:15:07.710 Traffic Based Keep ALive: Not Supported 00:15:07.710 Namespace Granularity: Not Supported 00:15:07.710 SQ Associations: Not Supported 00:15:07.710 UUID List: Not Supported 00:15:07.710 Multi-Domain Subsystem: Not Supported 00:15:07.710 Fixed Capacity Management: Not Supported 00:15:07.710 Variable Capacity Management: Not Supported 00:15:07.710 Delete Endurance Group: Not Supported 00:15:07.710 Delete NVM Set: Not Supported 00:15:07.710 Extended LBA Formats Supported: Not Supported 00:15:07.710 Flexible Data Placement Supported: Not Supported 00:15:07.710 00:15:07.710 Controller Memory Buffer Support 00:15:07.710 ================================ 00:15:07.710 Supported: No 00:15:07.710 00:15:07.710 Persistent Memory Region Support 00:15:07.710 ================================ 00:15:07.710 Supported: No 00:15:07.710 00:15:07.710 Admin Command Set Attributes 00:15:07.710 ============================ 00:15:07.710 Security Send/Receive: Not Supported 
00:15:07.710 Format NVM: Not Supported 00:15:07.710 Firmware Activate/Download: Not Supported 00:15:07.710 Namespace Management: Not Supported 00:15:07.710 Device Self-Test: Not Supported 00:15:07.710 Directives: Not Supported 00:15:07.710 NVMe-MI: Not Supported 00:15:07.710 Virtualization Management: Not Supported 00:15:07.710 Doorbell Buffer Config: Not Supported 00:15:07.710 Get LBA Status Capability: Not Supported 00:15:07.710 Command & Feature Lockdown Capability: Not Supported 00:15:07.710 Abort Command Limit: 4 00:15:07.710 Async Event Request Limit: 4 00:15:07.710 Number of Firmware Slots: N/A 00:15:07.710 Firmware Slot 1 Read-Only: N/A 00:15:07.710 Firmware Activation Without Reset: N/A 00:15:07.710 Multiple Update Detection Support: N/A 00:15:07.710 Firmware Update Granularity: No Information Provided 00:15:07.710 Per-Namespace SMART Log: No 00:15:07.710 Asymmetric Namespace Access Log Page: Not Supported 00:15:07.710 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:07.710 Command Effects Log Page: Supported 00:15:07.710 Get Log Page Extended Data: Supported 00:15:07.710 Telemetry Log Pages: Not Supported 00:15:07.710 Persistent Event Log Pages: Not Supported 00:15:07.710 Supported Log Pages Log Page: May Support 00:15:07.710 Commands Supported & Effects Log Page: Not Supported 00:15:07.711 Feature Identifiers & Effects Log Page:May Support 00:15:07.711 NVMe-MI Commands & Effects Log Page: May Support 00:15:07.711 Data Area 4 for Telemetry Log: Not Supported 00:15:07.711 Error Log Page Entries Supported: 128 00:15:07.711 Keep Alive: Supported 00:15:07.711 Keep Alive Granularity: 10000 ms 00:15:07.711 00:15:07.711 NVM Command Set Attributes 00:15:07.711 ========================== 00:15:07.711 Submission Queue Entry Size 00:15:07.711 Max: 64 00:15:07.711 Min: 64 00:15:07.711 Completion Queue Entry Size 00:15:07.711 Max: 16 00:15:07.711 Min: 16 00:15:07.711 Number of Namespaces: 32 00:15:07.711 Compare Command: Supported 00:15:07.711 Write Uncorrectable Command: Not Supported 00:15:07.711 Dataset Management Command: Supported 00:15:07.711 Write Zeroes Command: Supported 00:15:07.711 Set Features Save Field: Not Supported 00:15:07.711 Reservations: Not Supported 00:15:07.711 Timestamp: Not Supported 00:15:07.711 Copy: Supported 00:15:07.711 Volatile Write Cache: Present 00:15:07.711 Atomic Write Unit (Normal): 1 00:15:07.711 Atomic Write Unit (PFail): 1 00:15:07.711 Atomic Compare & Write Unit: 1 00:15:07.711 Fused Compare & Write: Supported 00:15:07.711 Scatter-Gather List 00:15:07.711 SGL Command Set: Supported (Dword aligned) 00:15:07.711 SGL Keyed: Not Supported 00:15:07.711 SGL Bit Bucket Descriptor: Not Supported 00:15:07.711 SGL Metadata Pointer: Not Supported 00:15:07.711 Oversized SGL: Not Supported 00:15:07.711 SGL Metadata Address: Not Supported 00:15:07.711 SGL Offset: Not Supported 00:15:07.711 Transport SGL Data Block: Not Supported 00:15:07.711 Replay Protected Memory Block: Not Supported 00:15:07.711 00:15:07.711 Firmware Slot Information 00:15:07.711 ========================= 00:15:07.711 Active slot: 1 00:15:07.711 Slot 1 Firmware Revision: 25.01 00:15:07.711 00:15:07.711 00:15:07.711 Commands Supported and Effects 00:15:07.711 ============================== 00:15:07.711 Admin Commands 00:15:07.711 -------------- 00:15:07.711 Get Log Page (02h): Supported 00:15:07.711 Identify (06h): Supported 00:15:07.711 Abort (08h): Supported 00:15:07.711 Set Features (09h): Supported 00:15:07.711 Get Features (0Ah): Supported 00:15:07.711 Asynchronous Event Request (0Ch): 
Supported 00:15:07.711 Keep Alive (18h): Supported 00:15:07.711 I/O Commands 00:15:07.711 ------------ 00:15:07.711 Flush (00h): Supported LBA-Change 00:15:07.711 Write (01h): Supported LBA-Change 00:15:07.711 Read (02h): Supported 00:15:07.711 Compare (05h): Supported 00:15:07.711 Write Zeroes (08h): Supported LBA-Change 00:15:07.711 Dataset Management (09h): Supported LBA-Change 00:15:07.711 Copy (19h): Supported LBA-Change 00:15:07.711 00:15:07.711 Error Log 00:15:07.711 ========= 00:15:07.711 00:15:07.711 Arbitration 00:15:07.711 =========== 00:15:07.711 Arbitration Burst: 1 00:15:07.711 00:15:07.711 Power Management 00:15:07.711 ================ 00:15:07.711 Number of Power States: 1 00:15:07.711 Current Power State: Power State #0 00:15:07.711 Power State #0: 00:15:07.711 Max Power: 0.00 W 00:15:07.711 Non-Operational State: Operational 00:15:07.711 Entry Latency: Not Reported 00:15:07.711 Exit Latency: Not Reported 00:15:07.711 Relative Read Throughput: 0 00:15:07.711 Relative Read Latency: 0 00:15:07.711 Relative Write Throughput: 0 00:15:07.711 Relative Write Latency: 0 00:15:07.711 Idle Power: Not Reported 00:15:07.711 Active Power: Not Reported 00:15:07.711 Non-Operational Permissive Mode: Not Supported 00:15:07.711 00:15:07.711 Health Information 00:15:07.711 ================== 00:15:07.711 Critical Warnings: 00:15:07.711 Available Spare Space: OK 00:15:07.711 Temperature: OK 00:15:07.711 Device Reliability: OK 00:15:07.711 Read Only: No 00:15:07.711 Volatile Memory Backup: OK 00:15:07.711 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:07.711 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:07.711 Available Spare: 0% 00:15:07.711 Available Sp[2024-10-11 11:52:10.401166] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:07.711 [2024-10-11 11:52:10.409070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:07.711 [2024-10-11 11:52:10.409098] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:15:07.711 [2024-10-11 11:52:10.409105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.711 [2024-10-11 11:52:10.409109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.711 [2024-10-11 11:52:10.409114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.711 [2024-10-11 11:52:10.409119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.711 [2024-10-11 11:52:10.409153] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:07.711 [2024-10-11 11:52:10.409161] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:07.711 [2024-10-11 11:52:10.410165] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:07.711 [2024-10-11 11:52:10.410200] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:15:07.711 [2024-10-11 11:52:10.410205] 
nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:15:07.971 [2024-10-11 11:52:10.411171] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:07.971 [2024-10-11 11:52:10.411185] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:15:07.971 [2024-10-11 11:52:10.411232] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:07.971 [2024-10-11 11:52:10.412192] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:07.971 are Threshold: 0% 00:15:07.971 Life Percentage Used: 0% 00:15:07.971 Data Units Read: 0 00:15:07.971 Data Units Written: 0 00:15:07.971 Host Read Commands: 0 00:15:07.971 Host Write Commands: 0 00:15:07.971 Controller Busy Time: 0 minutes 00:15:07.971 Power Cycles: 0 00:15:07.971 Power On Hours: 0 hours 00:15:07.971 Unsafe Shutdowns: 0 00:15:07.971 Unrecoverable Media Errors: 0 00:15:07.971 Lifetime Error Log Entries: 0 00:15:07.971 Warning Temperature Time: 0 minutes 00:15:07.971 Critical Temperature Time: 0 minutes 00:15:07.971 00:15:07.971 Number of Queues 00:15:07.971 ================ 00:15:07.971 Number of I/O Submission Queues: 127 00:15:07.971 Number of I/O Completion Queues: 127 00:15:07.971 00:15:07.971 Active Namespaces 00:15:07.971 ================= 00:15:07.971 Namespace ID:1 00:15:07.971 Error Recovery Timeout: Unlimited 00:15:07.971 Command Set Identifier: NVM (00h) 00:15:07.971 Deallocate: Supported 00:15:07.971 Deallocated/Unwritten Error: Not Supported 00:15:07.971 Deallocated Read Value: Unknown 00:15:07.971 Deallocate in Write Zeroes: Not Supported 00:15:07.971 Deallocated Guard Field: 0xFFFF 00:15:07.971 Flush: Supported 00:15:07.971 Reservation: Supported 00:15:07.971 Namespace Sharing Capabilities: Multiple Controllers 00:15:07.971 Size (in LBAs): 131072 (0GiB) 00:15:07.971 Capacity (in LBAs): 131072 (0GiB) 00:15:07.971 Utilization (in LBAs): 131072 (0GiB) 00:15:07.971 NGUID: AD48E470B2A548BAA993AA5FE45628F9 00:15:07.971 UUID: ad48e470-b2a5-48ba-a993-aa5fe45628f9 00:15:07.971 Thin Provisioning: Not Supported 00:15:07.971 Per-NS Atomic Units: Yes 00:15:07.971 Atomic Boundary Size (Normal): 0 00:15:07.971 Atomic Boundary Size (PFail): 0 00:15:07.971 Atomic Boundary Offset: 0 00:15:07.972 Maximum Single Source Range Length: 65535 00:15:07.972 Maximum Copy Length: 65535 00:15:07.972 Maximum Source Range Count: 1 00:15:07.972 NGUID/EUI64 Never Reused: No 00:15:07.972 Namespace Write Protected: No 00:15:07.972 Number of LBA Formats: 1 00:15:07.972 Current LBA Format: LBA Format #00 00:15:07.972 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:07.972 00:15:07.972 11:52:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:07.972 [2024-10-11 11:52:10.590083] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:13.252 Initializing NVMe Controllers 00:15:13.252 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 
00:15:13.252 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:13.252 Initialization complete. Launching workers. 00:15:13.252 ======================================================== 00:15:13.252 Latency(us) 00:15:13.252 Device Information : IOPS MiB/s Average min max 00:15:13.252 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40053.55 156.46 3195.40 846.49 6960.05 00:15:13.252 ======================================================== 00:15:13.252 Total : 40053.55 156.46 3195.40 846.49 6960.05 00:15:13.252 00:15:13.252 [2024-10-11 11:52:15.694245] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:13.252 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:13.252 [2024-10-11 11:52:15.874790] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:18.532 Initializing NVMe Controllers 00:15:18.532 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:18.532 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:18.532 Initialization complete. Launching workers. 00:15:18.532 ======================================================== 00:15:18.532 Latency(us) 00:15:18.532 Device Information : IOPS MiB/s Average min max 00:15:18.532 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39977.38 156.16 3201.69 856.30 7747.43 00:15:18.532 ======================================================== 00:15:18.532 Total : 39977.38 156.16 3201.69 856.30 7747.43 00:15:18.532 00:15:18.532 [2024-10-11 11:52:20.892056] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:18.532 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:18.532 [2024-10-11 11:52:21.078219] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:23.814 [2024-10-11 11:52:26.219150] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:23.814 Initializing NVMe Controllers 00:15:23.814 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:23.814 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:23.814 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:23.814 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:23.814 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:23.814 Initialization complete. Launching workers. 
00:15:23.814 Starting thread on core 2 00:15:23.814 Starting thread on core 3 00:15:23.814 Starting thread on core 1 00:15:23.814 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:23.814 [2024-10-11 11:52:26.460013] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:27.114 [2024-10-11 11:52:29.544067] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:27.114 Initializing NVMe Controllers 00:15:27.114 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:27.114 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:27.114 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:27.114 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:27.114 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:27.114 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:27.114 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:27.114 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:27.114 Initialization complete. Launching workers. 00:15:27.114 Starting thread on core 1 with urgent priority queue 00:15:27.114 Starting thread on core 2 with urgent priority queue 00:15:27.114 Starting thread on core 3 with urgent priority queue 00:15:27.114 Starting thread on core 0 with urgent priority queue 00:15:27.114 SPDK bdev Controller (SPDK2 ) core 0: 11217.00 IO/s 8.92 secs/100000 ios 00:15:27.114 SPDK bdev Controller (SPDK2 ) core 1: 7670.00 IO/s 13.04 secs/100000 ios 00:15:27.114 SPDK bdev Controller (SPDK2 ) core 2: 7697.33 IO/s 12.99 secs/100000 ios 00:15:27.114 SPDK bdev Controller (SPDK2 ) core 3: 10857.33 IO/s 9.21 secs/100000 ios 00:15:27.114 ======================================================== 00:15:27.114 00:15:27.114 11:52:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:27.114 [2024-10-11 11:52:29.771482] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:27.114 Initializing NVMe Controllers 00:15:27.114 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:27.114 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:27.114 Namespace ID: 1 size: 0GB 00:15:27.114 Initialization complete. 00:15:27.114 INFO: using host memory buffer for IO 00:15:27.114 Hello world! 
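For anyone cross-checking the summaries above: the MiB/s column in the two spdk_nvme_perf tables is just IOPS times the 4096-byte I/O size divided by 2^20, and the secs/100000 ios column in the arbitration report is 100000 (the -n value in the printed configuration) divided by the per-core IO/s. The following is a small illustrative Python check, not part of the test suite; the figures are copied from the tables above.

    # Re-derive the MiB/s and secs/100000-ios columns reported above.
    IO_SIZE = 4096  # -o 4096 in both spdk_nvme_perf invocations

    for iops in (40053.55, 39977.38):  # read run, then write run
        print(f"{iops:.2f} IOPS -> {iops * IO_SIZE / 2**20:.2f} MiB/s")  # ~156.46, ~156.16

    for core, rate in {0: 11217.00, 1: 7670.00, 2: 7697.33, 3: 10857.33}.items():
        print(f"core {core}: {100000 / rate:.2f} secs/100000 ios")  # 8.92, 13.04, 12.99, 9.21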
00:15:27.114 [2024-10-11 11:52:29.781548] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:27.114 11:52:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:27.375 [2024-10-11 11:52:30.006789] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:28.761 Initializing NVMe Controllers 00:15:28.761 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:28.761 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:28.761 Initialization complete. Launching workers. 00:15:28.761 submit (in ns) avg, min, max = 5664.9, 2836.7, 3999764.2 00:15:28.761 complete (in ns) avg, min, max = 16900.9, 1632.5, 4056885.0 00:15:28.761 00:15:28.761 Submit histogram 00:15:28.761 ================ 00:15:28.761 Range in us Cumulative Count 00:15:28.761 2.827 - 2.840: 0.0144% ( 3) 00:15:28.761 2.840 - 2.853: 0.4816% ( 97) 00:15:28.761 2.853 - 2.867: 1.8347% ( 281) 00:15:28.761 2.867 - 2.880: 4.1462% ( 480) 00:15:28.761 2.880 - 2.893: 7.7290% ( 744) 00:15:28.761 2.893 - 2.907: 12.0678% ( 901) 00:15:28.761 2.907 - 2.920: 16.9941% ( 1023) 00:15:28.761 2.920 - 2.933: 22.2816% ( 1098) 00:15:28.761 2.933 - 2.947: 28.3540% ( 1261) 00:15:28.761 2.947 - 2.960: 34.6046% ( 1298) 00:15:28.761 2.960 - 2.973: 41.2068% ( 1371) 00:15:28.761 2.973 - 2.987: 47.2648% ( 1258) 00:15:28.761 2.987 - 3.000: 54.9360% ( 1593) 00:15:28.761 3.000 - 3.013: 65.6747% ( 2230) 00:15:28.761 3.013 - 3.027: 75.9655% ( 2137) 00:15:28.761 3.027 - 3.040: 83.5356% ( 1572) 00:15:28.761 3.040 - 3.053: 88.6738% ( 1067) 00:15:28.761 3.053 - 3.067: 93.0511% ( 909) 00:15:28.761 3.067 - 3.080: 96.1813% ( 650) 00:15:28.761 3.080 - 3.093: 97.6548% ( 306) 00:15:28.761 3.093 - 3.107: 98.6902% ( 215) 00:15:28.761 3.107 - 3.120: 99.2488% ( 116) 00:15:28.761 3.120 - 3.133: 99.4510% ( 42) 00:15:28.761 3.133 - 3.147: 99.5810% ( 27) 00:15:28.761 3.147 - 3.160: 99.5955% ( 3) 00:15:28.761 3.173 - 3.187: 99.6003% ( 1) 00:15:28.761 3.200 - 3.213: 99.6051% ( 1) 00:15:28.761 3.280 - 3.293: 99.6099% ( 1) 00:15:28.761 3.293 - 3.307: 99.6148% ( 1) 00:15:28.761 3.320 - 3.333: 99.6196% ( 1) 00:15:28.761 3.347 - 3.360: 99.6244% ( 1) 00:15:28.761 3.387 - 3.400: 99.6292% ( 1) 00:15:28.761 3.493 - 3.520: 99.6340% ( 1) 00:15:28.761 3.733 - 3.760: 99.6388% ( 1) 00:15:28.761 3.840 - 3.867: 99.6436% ( 1) 00:15:28.761 3.893 - 3.920: 99.6485% ( 1) 00:15:28.761 4.133 - 4.160: 99.6533% ( 1) 00:15:28.761 4.320 - 4.347: 99.6581% ( 1) 00:15:28.761 4.587 - 4.613: 99.6629% ( 1) 00:15:28.761 4.613 - 4.640: 99.6677% ( 1) 00:15:28.761 4.880 - 4.907: 99.6725% ( 1) 00:15:28.761 4.907 - 4.933: 99.6774% ( 1) 00:15:28.761 4.933 - 4.960: 99.6822% ( 1) 00:15:28.761 4.987 - 5.013: 99.6966% ( 3) 00:15:28.761 5.040 - 5.067: 99.7014% ( 1) 00:15:28.761 5.067 - 5.093: 99.7159% ( 3) 00:15:28.761 5.093 - 5.120: 99.7255% ( 2) 00:15:28.761 5.147 - 5.173: 99.7303% ( 1) 00:15:28.761 5.173 - 5.200: 99.7351% ( 1) 00:15:28.761 5.200 - 5.227: 99.7400% ( 1) 00:15:28.761 5.360 - 5.387: 99.7544% ( 3) 00:15:28.761 5.387 - 5.413: 99.7592% ( 1) 00:15:28.761 5.440 - 5.467: 99.7640% ( 1) 00:15:28.761 5.493 - 5.520: 99.7689% ( 1) 00:15:28.761 5.547 - 5.573: 99.7785% ( 2) 00:15:28.761 5.600 - 5.627: 99.7929% ( 3) 00:15:28.761 5.627 - 5.653: 99.7977% ( 1) 00:15:28.761 5.653 - 5.680: 
99.8122% ( 3) 00:15:28.761 5.707 - 5.733: 99.8170% ( 1) 00:15:28.761 5.867 - 5.893: 99.8363% ( 4) 00:15:28.761 5.893 - 5.920: 99.8411% ( 1) 00:15:28.761 5.973 - 6.000: 99.8459% ( 1) 00:15:28.761 6.080 - 6.107: 99.8507% ( 1) 00:15:28.761 6.107 - 6.133: 99.8603% ( 2) 00:15:28.761 6.160 - 6.187: 99.8700% ( 2) 00:15:28.761 6.267 - 6.293: 99.8748% ( 1) 00:15:28.761 6.373 - 6.400: 99.8844% ( 2) 00:15:28.761 6.453 - 6.480: 99.8892% ( 1) 00:15:28.761 6.507 - 6.533: 99.8941% ( 1) 00:15:28.761 6.640 - 6.667: 99.8989% ( 1) 00:15:28.761 6.693 - 6.720: 99.9037% ( 1) 00:15:28.762 6.827 - 6.880: 99.9085% ( 1) 00:15:28.762 7.253 - 7.307: 99.9133% ( 1) 00:15:28.762 10.880 - 10.933: 99.9181% ( 1) 00:15:28.762 11.840 - 11.893: 99.9230% ( 1) 00:15:28.762 12.747 - 12.800: 99.9278% ( 1) 00:15:28.762 94.720 - 95.147: 99.9326% ( 1) 00:15:28.762 3495.253 - 3522.560: 99.9374% ( 1) 00:15:28.762 3986.773 - 4014.080: 100.0000% ( 13) 00:15:28.762 00:15:28.762 [2024-10-11 11:52:31.097588] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:28.762 Complete histogram 00:15:28.762 ================== 00:15:28.762 Range in us Cumulative Count 00:15:28.762 1.627 - 1.633: 0.0048% ( 1) 00:15:28.762 1.633 - 1.640: 0.4719% ( 97) 00:15:28.762 1.640 - 1.647: 0.8138% ( 71) 00:15:28.762 1.647 - 1.653: 0.8764% ( 13) 00:15:28.762 1.653 - 1.660: 0.9583% ( 17) 00:15:28.762 1.660 - 1.667: 1.0305% ( 15) 00:15:28.762 1.667 - 1.673: 8.4417% ( 1539) 00:15:28.762 1.673 - 1.680: 43.6820% ( 7318) 00:15:28.762 1.680 - 1.687: 52.7208% ( 1877) 00:15:28.762 1.687 - 1.693: 66.1273% ( 2784) 00:15:28.762 1.693 - 1.700: 74.2127% ( 1679) 00:15:28.762 1.700 - 1.707: 78.6189% ( 915) 00:15:28.762 1.707 - 1.720: 83.1455% ( 940) 00:15:28.762 1.720 - 1.733: 84.6624% ( 315) 00:15:28.762 1.733 - 1.747: 88.2259% ( 740) 00:15:28.762 1.747 - 1.760: 93.7446% ( 1146) 00:15:28.762 1.760 - 1.773: 97.3707% ( 753) 00:15:28.762 1.773 - 1.787: 99.0224% ( 343) 00:15:28.762 1.787 - 1.800: 99.4029% ( 79) 00:15:28.762 1.800 - 1.813: 99.4655% ( 13) 00:15:28.762 1.813 - 1.827: 99.4799% ( 3) 00:15:28.762 1.840 - 1.853: 99.4847% ( 1) 00:15:28.762 1.973 - 1.987: 99.4896% ( 1) 00:15:28.762 2.000 - 2.013: 99.4944% ( 1) 00:15:28.762 2.200 - 2.213: 99.4992% ( 1) 00:15:28.762 3.813 - 3.840: 99.5040% ( 1) 00:15:28.762 3.973 - 4.000: 99.5088% ( 1) 00:15:28.762 4.027 - 4.053: 99.5136% ( 1) 00:15:28.762 4.053 - 4.080: 99.5184% ( 1) 00:15:28.762 4.133 - 4.160: 99.5233% ( 1) 00:15:28.762 4.160 - 4.187: 99.5281% ( 1) 00:15:28.762 4.320 - 4.347: 99.5377% ( 2) 00:15:28.762 4.373 - 4.400: 99.5425% ( 1) 00:15:28.762 4.400 - 4.427: 99.5473% ( 1) 00:15:28.762 4.507 - 4.533: 99.5522% ( 1) 00:15:28.762 4.667 - 4.693: 99.5570% ( 1) 00:15:28.762 4.933 - 4.960: 99.5618% ( 1) 00:15:28.762 5.120 - 5.147: 99.5714% ( 2) 00:15:28.762 5.387 - 5.413: 99.5762% ( 1) 00:15:28.762 5.413 - 5.440: 99.5810% ( 1) 00:15:28.762 5.627 - 5.653: 99.5859% ( 1) 00:15:28.762 6.053 - 6.080: 99.5907% ( 1) 00:15:28.762 9.493 - 9.547: 99.5955% ( 1) 00:15:28.762 10.400 - 10.453: 99.6003% ( 1) 00:15:28.762 10.720 - 10.773: 99.6051% ( 1) 00:15:28.762 10.827 - 10.880: 99.6099% ( 1) 00:15:28.762 32.640 - 32.853: 99.6148% ( 1) 00:15:28.762 33.707 - 33.920: 99.6196% ( 1) 00:15:28.762 3986.773 - 4014.080: 99.9952% ( 78) 00:15:28.762 4041.387 - 4068.693: 100.0000% ( 1) 00:15:28.762 00:15:28.762 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:28.762 11:52:31 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:28.762 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:28.762 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:28.762 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:28.762 [ 00:15:28.762 { 00:15:28.762 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:28.762 "subtype": "Discovery", 00:15:28.762 "listen_addresses": [], 00:15:28.762 "allow_any_host": true, 00:15:28.762 "hosts": [] 00:15:28.762 }, 00:15:28.762 { 00:15:28.762 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:28.762 "subtype": "NVMe", 00:15:28.762 "listen_addresses": [ 00:15:28.762 { 00:15:28.762 "trtype": "VFIOUSER", 00:15:28.762 "adrfam": "IPv4", 00:15:28.762 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:28.762 "trsvcid": "0" 00:15:28.762 } 00:15:28.762 ], 00:15:28.762 "allow_any_host": true, 00:15:28.762 "hosts": [], 00:15:28.762 "serial_number": "SPDK1", 00:15:28.762 "model_number": "SPDK bdev Controller", 00:15:28.762 "max_namespaces": 32, 00:15:28.762 "min_cntlid": 1, 00:15:28.762 "max_cntlid": 65519, 00:15:28.762 "namespaces": [ 00:15:28.762 { 00:15:28.762 "nsid": 1, 00:15:28.762 "bdev_name": "Malloc1", 00:15:28.762 "name": "Malloc1", 00:15:28.762 "nguid": "6DA437C962864FB08AF3FCA370E5F6AB", 00:15:28.762 "uuid": "6da437c9-6286-4fb0-8af3-fca370e5f6ab" 00:15:28.762 }, 00:15:28.762 { 00:15:28.762 "nsid": 2, 00:15:28.762 "bdev_name": "Malloc3", 00:15:28.762 "name": "Malloc3", 00:15:28.762 "nguid": "BDE7CFAFC3FA4A7FAFD2FCA99FC2B97D", 00:15:28.762 "uuid": "bde7cfaf-c3fa-4a7f-afd2-fca99fc2b97d" 00:15:28.762 } 00:15:28.762 ] 00:15:28.762 }, 00:15:28.762 { 00:15:28.762 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:28.762 "subtype": "NVMe", 00:15:28.762 "listen_addresses": [ 00:15:28.762 { 00:15:28.762 "trtype": "VFIOUSER", 00:15:28.762 "adrfam": "IPv4", 00:15:28.762 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:28.762 "trsvcid": "0" 00:15:28.762 } 00:15:28.762 ], 00:15:28.762 "allow_any_host": true, 00:15:28.762 "hosts": [], 00:15:28.762 "serial_number": "SPDK2", 00:15:28.762 "model_number": "SPDK bdev Controller", 00:15:28.762 "max_namespaces": 32, 00:15:28.762 "min_cntlid": 1, 00:15:28.762 "max_cntlid": 65519, 00:15:28.762 "namespaces": [ 00:15:28.762 { 00:15:28.762 "nsid": 1, 00:15:28.762 "bdev_name": "Malloc2", 00:15:28.762 "name": "Malloc2", 00:15:28.762 "nguid": "AD48E470B2A548BAA993AA5FE45628F9", 00:15:28.762 "uuid": "ad48e470-b2a5-48ba-a993-aa5fe45628f9" 00:15:28.762 } 00:15:28.762 ] 00:15:28.762 } 00:15:28.762 ] 00:15:28.762 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:28.762 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:28.762 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1887518 00:15:28.762 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 
00:15:28.762 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:28.762 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:28.762 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:28.762 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:28.762 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:28.762 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:28.762 [2024-10-11 11:52:31.453439] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:29.024 Malloc4 00:15:29.024 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:29.024 [2024-10-11 11:52:31.679921] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:29.024 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:29.024 Asynchronous Event Request test 00:15:29.024 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:29.024 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:29.024 Registering asynchronous event callbacks... 00:15:29.024 Starting namespace attribute notice tests for all controllers... 00:15:29.024 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:29.024 aer_cb - Changed Namespace 00:15:29.024 Cleaning up... 
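[editor's note] For reference, the namespace-attribute-change (AER) exercise traced above reduces to a short RPC sequence: a new malloc bdev is attached as a second namespace of cnode2, the target raises a Namespace Attribute Changed AEN over the vfio-user transport, and the aer tool logs the callback before touching /tmp/aer_touch_file. A minimal sketch of that sequence, using the same rpc.py path and arguments seen in this log (the RPC shell variable is shorthand introduced here, not part of the test script):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # current layout: cnode2 exposes Malloc2 as nsid 1
    $RPC nvmf_get_subsystems
    # attach a 64 MiB, 512 B-block ramdisk as nsid 2 of cnode2; this is what
    # triggers the "aer_cb - Changed Namespace" notice from the aer application
    $RPC bdev_malloc_create 64 512 --name Malloc4
    $RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
    # re-query: Malloc4 should now be listed under cnode2 alongside Malloc2,
    # which is exactly what the nvmf_get_subsystems output below shows
    $RPC nvmf_get_subsystems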
00:15:29.285 [ 00:15:29.285 { 00:15:29.285 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:29.285 "subtype": "Discovery", 00:15:29.285 "listen_addresses": [], 00:15:29.285 "allow_any_host": true, 00:15:29.285 "hosts": [] 00:15:29.285 }, 00:15:29.285 { 00:15:29.285 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:29.285 "subtype": "NVMe", 00:15:29.285 "listen_addresses": [ 00:15:29.285 { 00:15:29.285 "trtype": "VFIOUSER", 00:15:29.285 "adrfam": "IPv4", 00:15:29.285 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:29.286 "trsvcid": "0" 00:15:29.286 } 00:15:29.286 ], 00:15:29.286 "allow_any_host": true, 00:15:29.286 "hosts": [], 00:15:29.286 "serial_number": "SPDK1", 00:15:29.286 "model_number": "SPDK bdev Controller", 00:15:29.286 "max_namespaces": 32, 00:15:29.286 "min_cntlid": 1, 00:15:29.286 "max_cntlid": 65519, 00:15:29.286 "namespaces": [ 00:15:29.286 { 00:15:29.286 "nsid": 1, 00:15:29.286 "bdev_name": "Malloc1", 00:15:29.286 "name": "Malloc1", 00:15:29.286 "nguid": "6DA437C962864FB08AF3FCA370E5F6AB", 00:15:29.286 "uuid": "6da437c9-6286-4fb0-8af3-fca370e5f6ab" 00:15:29.286 }, 00:15:29.286 { 00:15:29.286 "nsid": 2, 00:15:29.286 "bdev_name": "Malloc3", 00:15:29.286 "name": "Malloc3", 00:15:29.286 "nguid": "BDE7CFAFC3FA4A7FAFD2FCA99FC2B97D", 00:15:29.286 "uuid": "bde7cfaf-c3fa-4a7f-afd2-fca99fc2b97d" 00:15:29.286 } 00:15:29.286 ] 00:15:29.286 }, 00:15:29.286 { 00:15:29.286 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:29.286 "subtype": "NVMe", 00:15:29.286 "listen_addresses": [ 00:15:29.286 { 00:15:29.286 "trtype": "VFIOUSER", 00:15:29.286 "adrfam": "IPv4", 00:15:29.286 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:29.286 "trsvcid": "0" 00:15:29.286 } 00:15:29.286 ], 00:15:29.286 "allow_any_host": true, 00:15:29.286 "hosts": [], 00:15:29.286 "serial_number": "SPDK2", 00:15:29.286 "model_number": "SPDK bdev Controller", 00:15:29.286 "max_namespaces": 32, 00:15:29.286 "min_cntlid": 1, 00:15:29.286 "max_cntlid": 65519, 00:15:29.286 "namespaces": [ 00:15:29.286 { 00:15:29.286 "nsid": 1, 00:15:29.286 "bdev_name": "Malloc2", 00:15:29.286 "name": "Malloc2", 00:15:29.286 "nguid": "AD48E470B2A548BAA993AA5FE45628F9", 00:15:29.286 "uuid": "ad48e470-b2a5-48ba-a993-aa5fe45628f9" 00:15:29.286 }, 00:15:29.286 { 00:15:29.286 "nsid": 2, 00:15:29.286 "bdev_name": "Malloc4", 00:15:29.286 "name": "Malloc4", 00:15:29.286 "nguid": "B5A7A56AD2E2493788EBC683FC897A7D", 00:15:29.286 "uuid": "b5a7a56a-d2e2-4937-88eb-c683fc897a7d" 00:15:29.286 } 00:15:29.286 ] 00:15:29.286 } 00:15:29.286 ] 00:15:29.286 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1887518 00:15:29.286 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:29.286 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1878434 00:15:29.286 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 1878434 ']' 00:15:29.286 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 1878434 00:15:29.286 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:15:29.286 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:29.286 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1878434 00:15:29.286 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:29.286 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:29.286 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1878434' 00:15:29.286 killing process with pid 1878434 00:15:29.286 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 1878434 00:15:29.286 11:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 1878434 00:15:29.548 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:29.548 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:29.548 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:29.548 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:29.548 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:29.548 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1887551 00:15:29.548 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1887551' 00:15:29.548 Process pid: 1887551 00:15:29.548 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:29.548 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:29.548 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1887551 00:15:29.548 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 1887551 ']' 00:15:29.548 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:29.548 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:29.548 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:29.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:29.548 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:29.548 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:29.548 [2024-10-11 11:52:32.157301] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:29.548 [2024-10-11 11:52:32.158266] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:15:29.548 [2024-10-11 11:52:32.158312] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:29.548 [2024-10-11 11:52:32.235256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:29.809 [2024-10-11 11:52:32.264984] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:29.809 [2024-10-11 11:52:32.265015] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:29.809 [2024-10-11 11:52:32.265020] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:29.809 [2024-10-11 11:52:32.265025] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:29.809 [2024-10-11 11:52:32.265030] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:29.809 [2024-10-11 11:52:32.266307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:29.809 [2024-10-11 11:52:32.266462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:29.809 [2024-10-11 11:52:32.266614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.809 [2024-10-11 11:52:32.266615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:29.809 [2024-10-11 11:52:32.317784] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:29.809 [2024-10-11 11:52:32.318604] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:15:29.809 [2024-10-11 11:52:32.319025] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:29.809 [2024-10-11 11:52:32.319834] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:29.809 [2024-10-11 11:52:32.319866] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
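[editor's note] The interrupt-mode pass that follows repeats the earlier per-device setup, this time with the target started with --interrupt-mode and the VFIOUSER transport created with the additional '-M -I' options. Condensed from the rpc.py calls traced below, the bring-up of the first vfio-user device looks roughly like this (a sketch only; SPDK is shorthand for the workspace path used throughout this log, and the second device repeats the same steps with Malloc2/cnode2/vfio-user2):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # start the target on cores 0-3 in interrupt mode (pid 1887551 above)
    $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
    # create the vfio-user transport with the options under test
    $SPDK/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
    # one malloc-backed subsystem, listening on a vfio-user socket directory
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0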
00:15:30.380 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:30.380 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:15:30.380 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:31.321 11:52:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:31.583 11:52:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:31.583 11:52:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:31.583 11:52:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:31.583 11:52:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:31.583 11:52:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:31.844 Malloc1 00:15:31.844 11:52:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:32.105 11:52:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:32.105 11:52:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:32.365 11:52:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:32.365 11:52:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:32.365 11:52:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:32.624 Malloc2 00:15:32.624 11:52:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:32.885 11:52:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:32.885 11:52:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:33.145 11:52:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:33.145 11:52:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1887551 00:15:33.145 11:52:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@950 -- # '[' -z 1887551 ']' 00:15:33.145 11:52:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 1887551 00:15:33.145 11:52:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:15:33.145 11:52:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:33.145 11:52:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1887551 00:15:33.145 11:52:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:33.145 11:52:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:33.145 11:52:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1887551' 00:15:33.145 killing process with pid 1887551 00:15:33.145 11:52:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 1887551 00:15:33.145 11:52:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 1887551 00:15:33.405 11:52:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:33.405 11:52:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:33.405 00:15:33.405 real 0m50.959s 00:15:33.405 user 3m15.201s 00:15:33.405 sys 0m2.746s 00:15:33.405 11:52:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:33.405 11:52:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:33.405 ************************************ 00:15:33.405 END TEST nvmf_vfio_user 00:15:33.405 ************************************ 00:15:33.405 11:52:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:33.405 11:52:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:33.405 11:52:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:33.405 11:52:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:33.405 ************************************ 00:15:33.405 START TEST nvmf_vfio_user_nvme_compliance 00:15:33.405 ************************************ 00:15:33.405 11:52:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:33.405 * Looking for test storage... 
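[editor's note] The compliance test starting here builds a single vfio-user controller and then points the CUnit-based nvme_compliance binary at it; see the traces further down for the full run. Condensed, the setup amounts to the following (rpc_cmd in the traces is the autotest harness's RPC helper; spelling it out via rpc.py here is an assumption made for readability):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/scripts/rpc.py nvmf_create_transport -t VFIOUSER
    $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
    # run the 18 admin/IO-queue compliance cases against the vfio-user endpoint
    $SPDK/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'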
00:15:33.405 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:33.405 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:33.405 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lcov --version 00:15:33.405 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:33.666 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:33.666 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:33.666 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:33.666 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:33.666 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:15:33.666 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:15:33.666 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:15:33.666 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:15:33.666 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:15:33.666 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:15:33.666 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:15:33.666 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:33.666 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:15:33.666 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:15:33.666 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:33.666 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:33.666 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:15:33.666 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:15:33.666 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:33.666 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:15:33.666 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:15:33.666 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:15:33.666 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:15:33.666 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:33.666 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:15:33.666 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:15:33.666 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:33.666 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:33.666 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:15:33.666 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:33.666 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:33.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.666 --rc genhtml_branch_coverage=1 00:15:33.666 --rc genhtml_function_coverage=1 00:15:33.666 --rc genhtml_legend=1 00:15:33.666 --rc geninfo_all_blocks=1 00:15:33.666 --rc geninfo_unexecuted_blocks=1 00:15:33.666 00:15:33.666 ' 00:15:33.666 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:33.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.666 --rc genhtml_branch_coverage=1 00:15:33.666 --rc genhtml_function_coverage=1 00:15:33.666 --rc genhtml_legend=1 00:15:33.666 --rc geninfo_all_blocks=1 00:15:33.666 --rc geninfo_unexecuted_blocks=1 00:15:33.666 00:15:33.666 ' 00:15:33.667 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:33.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.667 --rc genhtml_branch_coverage=1 00:15:33.667 --rc genhtml_function_coverage=1 00:15:33.667 --rc genhtml_legend=1 00:15:33.667 --rc geninfo_all_blocks=1 00:15:33.667 --rc geninfo_unexecuted_blocks=1 00:15:33.667 00:15:33.667 ' 00:15:33.667 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:33.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.667 --rc genhtml_branch_coverage=1 00:15:33.667 --rc genhtml_function_coverage=1 00:15:33.667 --rc genhtml_legend=1 00:15:33.667 --rc geninfo_all_blocks=1 00:15:33.667 --rc 
geninfo_unexecuted_blocks=1 00:15:33.667 00:15:33.667 ' 00:15:33.667 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:33.667 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:33.667 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:33.667 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:33.667 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:33.667 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:33.667 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:33.667 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:33.667 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:33.667 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:33.667 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:33.667 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:33.667 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:33.667 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:33.667 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:33.667 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:33.667 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:33.667 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:33.667 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:33.667 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:15:33.667 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:33.667 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:33.667 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:33.667 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.667 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.667 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.667 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:33.667 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.667 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:15:33.667 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:33.667 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:33.667 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:33.667 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:33.667 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:15:33.667 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:33.667 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:33.667 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:33.667 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:33.667 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:33.667 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:33.667 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:33.667 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:33.667 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:33.667 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:33.667 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1888586 00:15:33.667 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1888586' 00:15:33.667 Process pid: 1888586 00:15:33.667 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:33.667 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:33.667 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1888586 00:15:33.667 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 1888586 ']' 00:15:33.667 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.667 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:33.667 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:33.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:33.667 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:33.667 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:33.667 [2024-10-11 11:52:36.265756] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:15:33.667 [2024-10-11 11:52:36.265829] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:33.667 [2024-10-11 11:52:36.342584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:33.926 [2024-10-11 11:52:36.374638] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:33.926 [2024-10-11 11:52:36.374666] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:33.926 [2024-10-11 11:52:36.374672] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:33.926 [2024-10-11 11:52:36.374677] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:33.926 [2024-10-11 11:52:36.374682] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:33.926 [2024-10-11 11:52:36.375856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:33.926 [2024-10-11 11:52:36.376003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.926 [2024-10-11 11:52:36.376005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:34.496 11:52:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:34.496 11:52:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:15:34.496 11:52:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:35.437 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:35.437 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:35.437 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:35.437 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.437 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:35.437 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.437 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:35.437 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:35.437 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.437 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:35.437 malloc0 00:15:35.437 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.437 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:35.437 11:52:38 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.437 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:35.437 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.437 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:35.437 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.437 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:35.437 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.437 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:35.437 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.437 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:35.698 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.698 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:35.698 00:15:35.698 00:15:35.698 CUnit - A unit testing framework for C - Version 2.1-3 00:15:35.698 http://cunit.sourceforge.net/ 00:15:35.698 00:15:35.698 00:15:35.698 Suite: nvme_compliance 00:15:35.698 Test: admin_identify_ctrlr_verify_dptr ...[2024-10-11 11:52:38.295718] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:35.698 [2024-10-11 11:52:38.297023] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:35.698 [2024-10-11 11:52:38.297036] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:35.698 [2024-10-11 11:52:38.297040] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:35.698 [2024-10-11 11:52:38.298734] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:35.698 passed 00:15:35.698 Test: admin_identify_ctrlr_verify_fused ...[2024-10-11 11:52:38.377250] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:35.698 [2024-10-11 11:52:38.380269] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:35.958 passed 00:15:35.958 Test: admin_identify_ns ...[2024-10-11 11:52:38.458832] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:35.958 [2024-10-11 11:52:38.518068] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:35.958 [2024-10-11 11:52:38.526074] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:35.958 [2024-10-11 11:52:38.547154] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:15:35.958 passed 00:15:35.958 Test: admin_get_features_mandatory_features ...[2024-10-11 11:52:38.620439] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:35.958 [2024-10-11 11:52:38.623462] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:35.958 passed 00:15:36.218 Test: admin_get_features_optional_features ...[2024-10-11 11:52:38.700945] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:36.218 [2024-10-11 11:52:38.703970] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:36.218 passed 00:15:36.218 Test: admin_set_features_number_of_queues ...[2024-10-11 11:52:38.780707] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:36.218 [2024-10-11 11:52:38.885153] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:36.218 passed 00:15:36.478 Test: admin_get_log_page_mandatory_logs ...[2024-10-11 11:52:38.958428] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:36.478 [2024-10-11 11:52:38.961450] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:36.478 passed 00:15:36.478 Test: admin_get_log_page_with_lpo ...[2024-10-11 11:52:39.037197] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:36.478 [2024-10-11 11:52:39.106075] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:36.478 [2024-10-11 11:52:39.119126] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:36.478 passed 00:15:36.738 Test: fabric_property_get ...[2024-10-11 11:52:39.192419] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:36.738 [2024-10-11 11:52:39.193620] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:36.738 [2024-10-11 11:52:39.195443] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:36.738 passed 00:15:36.738 Test: admin_delete_io_sq_use_admin_qid ...[2024-10-11 11:52:39.271898] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:36.738 [2024-10-11 11:52:39.273096] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:36.738 [2024-10-11 11:52:39.274914] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:36.738 passed 00:15:36.738 Test: admin_delete_io_sq_delete_sq_twice ...[2024-10-11 11:52:39.351666] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:36.738 [2024-10-11 11:52:39.436075] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:36.999 [2024-10-11 11:52:39.452070] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:36.999 [2024-10-11 11:52:39.457147] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:36.999 passed 00:15:36.999 Test: admin_delete_io_cq_use_admin_qid ...[2024-10-11 11:52:39.530445] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:36.999 [2024-10-11 11:52:39.531646] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:36.999 [2024-10-11 11:52:39.533458] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:15:36.999 passed 00:15:36.999 Test: admin_delete_io_cq_delete_cq_first ...[2024-10-11 11:52:39.609188] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:36.999 [2024-10-11 11:52:39.687071] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:37.260 [2024-10-11 11:52:39.711071] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:37.260 [2024-10-11 11:52:39.716143] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:37.260 passed 00:15:37.260 Test: admin_create_io_cq_verify_iv_pc ...[2024-10-11 11:52:39.789391] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:37.260 [2024-10-11 11:52:39.790586] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:37.260 [2024-10-11 11:52:39.790603] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:37.260 [2024-10-11 11:52:39.792406] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:37.260 passed 00:15:37.260 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-10-11 11:52:39.868096] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:37.260 [2024-10-11 11:52:39.964069] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:37.521 [2024-10-11 11:52:39.972074] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:37.521 [2024-10-11 11:52:39.980073] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:37.521 [2024-10-11 11:52:39.988066] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:37.521 [2024-10-11 11:52:40.017140] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:37.521 passed 00:15:37.521 Test: admin_create_io_sq_verify_pc ...[2024-10-11 11:52:40.091451] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:37.521 [2024-10-11 11:52:40.108078] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:37.521 [2024-10-11 11:52:40.125608] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:37.521 passed 00:15:37.521 Test: admin_create_io_qp_max_qps ...[2024-10-11 11:52:40.201066] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:38.905 [2024-10-11 11:52:41.300072] nvme_ctrlr.c:5504:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:15:39.165 [2024-10-11 11:52:41.695664] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:39.165 passed 00:15:39.165 Test: admin_create_io_sq_shared_cq ...[2024-10-11 11:52:41.769428] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:39.426 [2024-10-11 11:52:41.905069] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:39.426 [2024-10-11 11:52:41.942116] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:39.426 passed 00:15:39.426 00:15:39.426 Run Summary: Type Total Ran Passed Failed Inactive 00:15:39.426 suites 1 1 n/a 0 0 00:15:39.426 tests 18 18 18 0 0 00:15:39.426 asserts 360 
360 360 0 n/a 00:15:39.426 00:15:39.426 Elapsed time = 1.498 seconds 00:15:39.426 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1888586 00:15:39.426 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 1888586 ']' 00:15:39.426 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 1888586 00:15:39.426 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:15:39.426 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:39.426 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1888586 00:15:39.426 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:39.426 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:39.426 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1888586' 00:15:39.426 killing process with pid 1888586 00:15:39.426 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 1888586 00:15:39.426 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 1888586 00:15:39.687 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:39.687 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:39.687 00:15:39.687 real 0m6.192s 00:15:39.687 user 0m17.574s 00:15:39.687 sys 0m0.550s 00:15:39.687 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:39.687 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:39.687 ************************************ 00:15:39.687 END TEST nvmf_vfio_user_nvme_compliance 00:15:39.687 ************************************ 00:15:39.687 11:52:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:39.687 11:52:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:39.687 11:52:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:39.687 11:52:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:39.687 ************************************ 00:15:39.687 START TEST nvmf_vfio_user_fuzz 00:15:39.687 ************************************ 00:15:39.687 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:39.687 * Looking for test storage... 
00:15:39.687 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:39.687 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:39.687 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:15:39.687 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:39.948 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:39.948 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:39.948 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:39.948 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:39.948 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:39.948 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:39.948 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:39.948 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:39.948 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:15:39.948 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:39.948 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:39.948 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:39.948 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:39.948 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:15:39.948 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:39.948 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:39.948 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:39.948 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:39.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.949 --rc genhtml_branch_coverage=1 00:15:39.949 --rc genhtml_function_coverage=1 00:15:39.949 --rc genhtml_legend=1 00:15:39.949 --rc geninfo_all_blocks=1 00:15:39.949 --rc geninfo_unexecuted_blocks=1 00:15:39.949 00:15:39.949 ' 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:39.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.949 --rc genhtml_branch_coverage=1 00:15:39.949 --rc genhtml_function_coverage=1 00:15:39.949 --rc genhtml_legend=1 00:15:39.949 --rc geninfo_all_blocks=1 00:15:39.949 --rc geninfo_unexecuted_blocks=1 00:15:39.949 00:15:39.949 ' 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:39.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.949 --rc genhtml_branch_coverage=1 00:15:39.949 --rc genhtml_function_coverage=1 00:15:39.949 --rc genhtml_legend=1 00:15:39.949 --rc geninfo_all_blocks=1 00:15:39.949 --rc geninfo_unexecuted_blocks=1 00:15:39.949 00:15:39.949 ' 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:39.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.949 --rc genhtml_branch_coverage=1 00:15:39.949 --rc genhtml_function_coverage=1 00:15:39.949 --rc genhtml_legend=1 00:15:39.949 --rc geninfo_all_blocks=1 00:15:39.949 --rc geninfo_unexecuted_blocks=1 00:15:39.949 00:15:39.949 ' 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:15:39.949 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1889705 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1889705' 00:15:39.949 Process pid: 1889705 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1889705 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 1889705 ']' 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:39.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
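The vfio_user_fuzz.sh setup traced above boils down to: wipe any stale /var/run/vfio-user sockets, start build/bin/nvmf_tgt on a single core with full tracing, then wait for its RPC socket before issuing any rpc_cmd calls. A rough stand-alone equivalent, assuming the SPDK tree at $SPDK_DIR and the default /var/tmp/spdk.sock socket; the polling loop is an illustrative simplification of waitforlisten, not the real helper:

SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
rm -rf /var/run/vfio-user                       # clean up stale vfio-user endpoints

"$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
echo "Process pid: $nvmfpid"

# wait up to ~10 s for the target's RPC socket to come up
for _ in $(seq 1 100); do
    [ -S /var/tmp/spdk.sock ] && break
    sleep 0.1
done
[ -S /var/tmp/spdk.sock ] || { echo "nvmf_tgt never started listening" >&2; exit 1; }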
00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:39.949 11:52:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:40.892 11:52:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:40.892 11:52:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:15:40.892 11:52:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:41.832 11:52:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:41.832 11:52:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.832 11:52:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:41.832 11:52:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.832 11:52:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:41.832 11:52:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:41.832 11:52:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.832 11:52:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:41.832 malloc0 00:15:41.832 11:52:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.832 11:52:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:41.832 11:52:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.832 11:52:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:41.832 11:52:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.832 11:52:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:41.832 11:52:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.832 11:52:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:41.832 11:52:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.832 11:52:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:41.832 11:52:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.832 11:52:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:41.832 11:52:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.832 11:52:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
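With the target listening, the rpc_cmd calls above assemble the vfio-user controller that the fuzzer will attack: a VFIOUSER transport, a 64 MiB malloc bdev, a subsystem carrying that bdev as a namespace, and a listener on /var/run/vfio-user. The same sequence replayed by hand with scripts/rpc.py (RPC names and arguments exactly as in the trace; only the explicit rpc.py invocation is assumed):

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

./scripts/rpc.py nvmf_create_transport -t VFIOUSER           # vfio-user transport
mkdir -p /var/run/vfio-user                                  # directory backing the listener
./scripts/rpc.py bdev_malloc_create 64 512 -b malloc0        # 64 MiB RAM disk, 512 B blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
    -t VFIOUSER -a /var/run/vfio-user -s 0

# the transport ID handed to nvme_fuzz below points at the same endpoint
trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'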
00:15:41.832 11:52:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:14.017 Fuzzing completed. Shutting down the fuzz application 00:16:14.017 00:16:14.017 Dumping successful admin opcodes: 00:16:14.017 8, 9, 10, 24, 00:16:14.017 Dumping successful io opcodes: 00:16:14.017 0, 00:16:14.017 NS: 0x20000081ef00 I/O qp, Total commands completed: 1209918, total successful commands: 4747, random_seed: 793978304 00:16:14.017 NS: 0x20000081ef00 admin qp, Total commands completed: 247427, total successful commands: 1999, random_seed: 3634963136 00:16:14.017 11:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:14.017 11:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.017 11:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:14.017 11:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.017 11:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1889705 00:16:14.017 11:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 1889705 ']' 00:16:14.017 11:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 1889705 00:16:14.018 11:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:16:14.018 11:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:14.018 11:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1889705 00:16:14.018 11:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:14.018 11:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:14.018 11:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1889705' 00:16:14.018 killing process with pid 1889705 00:16:14.018 11:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 1889705 00:16:14.018 11:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 1889705 00:16:14.018 11:53:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:14.018 00:16:14.018 real 0m32.779s 00:16:14.018 user 0m34.882s 00:16:14.018 sys 0m26.102s 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:14.018 
************************************ 00:16:14.018 END TEST nvmf_vfio_user_fuzz 00:16:14.018 ************************************ 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:14.018 ************************************ 00:16:14.018 START TEST nvmf_auth_target 00:16:14.018 ************************************ 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:14.018 * Looking for test storage... 00:16:14.018 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:14.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.018 --rc genhtml_branch_coverage=1 00:16:14.018 --rc genhtml_function_coverage=1 00:16:14.018 --rc genhtml_legend=1 00:16:14.018 --rc geninfo_all_blocks=1 00:16:14.018 --rc geninfo_unexecuted_blocks=1 00:16:14.018 00:16:14.018 ' 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:14.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.018 --rc genhtml_branch_coverage=1 00:16:14.018 --rc genhtml_function_coverage=1 00:16:14.018 --rc genhtml_legend=1 00:16:14.018 --rc geninfo_all_blocks=1 00:16:14.018 --rc geninfo_unexecuted_blocks=1 00:16:14.018 00:16:14.018 ' 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:14.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.018 --rc genhtml_branch_coverage=1 00:16:14.018 --rc genhtml_function_coverage=1 00:16:14.018 --rc genhtml_legend=1 00:16:14.018 --rc geninfo_all_blocks=1 00:16:14.018 --rc geninfo_unexecuted_blocks=1 00:16:14.018 00:16:14.018 ' 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:14.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.018 --rc genhtml_branch_coverage=1 00:16:14.018 --rc genhtml_function_coverage=1 00:16:14.018 --rc genhtml_legend=1 00:16:14.018 --rc geninfo_all_blocks=1 00:16:14.018 --rc geninfo_unexecuted_blocks=1 00:16:14.018 00:16:14.018 ' 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:14.018 11:53:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.018 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.019 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.019 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:14.019 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.019 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:16:14.019 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:14.019 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:14.019 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:14.019 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:14.019 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:14.019 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:14.019 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:14.019 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:14.019 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:14.019 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:14.019 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:14.019 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:14.019 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:14.019 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:14.019 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:14.019 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:14.019 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:14.019 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:16:14.019 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:16:14.019 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:14.019 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:16:14.019 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:16:14.019 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:16:14.019 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:14.019 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:14.019 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:14.019 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:16:14.019 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:16:14.019 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:16:14.019 11:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.612 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:20.612 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:16:20.612 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:20.612 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:20.612 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:20.612 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:20.612 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:20.612 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:16:20.612 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:20.612 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:16:20.612 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:16:20.612 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:16:20.612 
11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:16:20.612 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:16:20.612 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:16:20.612 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:20.612 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:20.613 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:20.613 11:53:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:20.613 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:20.613 Found net devices under 0000:31:00.0: cvl_0_0 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:20.613 Found net devices under 0000:31:00.1: cvl_0_1 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # 
net_devs+=("${pci_net_devs[@]}") 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # is_hw=yes 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:20.613 11:53:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:20.613 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:20.613 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.630 ms 00:16:20.613 00:16:20.613 --- 10.0.0.2 ping statistics --- 00:16:20.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.613 rtt min/avg/max/mdev = 0.630/0.630/0.630/0.000 ms 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:20.613 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:20.613 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:16:20.613 00:16:20.613 --- 10.0.0.1 ping statistics --- 00:16:20.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.613 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # return 0 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:16:20.613 11:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:16:20.613 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:16:20.613 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:16:20.613 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:20.613 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.613 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=1900051 00:16:20.613 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 1900051 00:16:20.613 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:20.613 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1900051 ']' 00:16:20.613 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.613 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:20.614 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
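For the TCP auth target the harness does not use loopback: it pushes one physical port (cvl_0_0) into a private network namespace as the target side, keeps its sibling port (cvl_0_1) on the host as the initiator side, gives them 10.0.0.2 and 10.0.0.1, and opens TCP/4420. Condensed from the nvmf_tcp_init steps traced above (interface names and addresses as captured; run as root):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator (host side)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target (namespace side)

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# allow NVMe/TCP traffic arriving on the initiator-facing port
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# verify reachability in both directions, then load the host-side driver
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
modprobe nvme-tcp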
00:16:20.614 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:20.614 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.187 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:21.449 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:21.449 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:16:21.449 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:21.449 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.449 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:21.449 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1900090 00:16:21.449 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:21.449 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:21.449 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:16:21.449 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:21.449 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:21.449 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:21.449 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=null 00:16:21.449 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:16:21.450 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:21.450 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=47703944794d8985669deafea3fa581175d7db5b2ebb1df1 00:16:21.450 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:16:21.450 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.jt4 00:16:21.450 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 47703944794d8985669deafea3fa581175d7db5b2ebb1df1 0 00:16:21.450 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 47703944794d8985669deafea3fa581175d7db5b2ebb1df1 0 00:16:21.450 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:21.450 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:21.450 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=47703944794d8985669deafea3fa581175d7db5b2ebb1df1 00:16:21.450 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=0 00:16:21.450 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 
00:16:21.450 11:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.jt4 00:16:21.450 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.jt4 00:16:21.450 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.jt4 00:16:21.450 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:16:21.450 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:21.450 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:21.450 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:21.450 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:16:21.450 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:16:21.450 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:21.450 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=c08b063dec2eae9861b622629366b32b76deb466352b345a958b8221a1c13b11 00:16:21.450 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:16:21.450 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.D4h 00:16:21.450 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key c08b063dec2eae9861b622629366b32b76deb466352b345a958b8221a1c13b11 3 00:16:21.450 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 c08b063dec2eae9861b622629366b32b76deb466352b345a958b8221a1c13b11 3 00:16:21.450 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:21.450 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:21.450 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=c08b063dec2eae9861b622629366b32b76deb466352b345a958b8221a1c13b11 00:16:21.450 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:16:21.450 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:21.450 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.D4h 00:16:21.450 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.D4h 00:16:21.450 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.D4h 00:16:21.450 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:16:21.450 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:21.450 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:21.450 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:21.450 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 
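Each gen_dhchap_key call traced here follows the same recipe: pull len/2 random bytes with xxd, format them as a DHHC-1 secret through a small inline python helper, and store the result in a mode-0600 temp file whose path becomes the keys[]/ckeys[] entry. A rough stand-alone approximation for the null-digest 48-character case; the DHHC-1 layout used below (base64 of the secret plus a little-endian CRC-32, digest indicator 00) is an assumption about the secret format, not code lifted from nvmf/common.sh:

key=$(xxd -p -c0 -l 24 /dev/urandom)            # 24 random bytes -> 48 hex characters
file=$(mktemp -t spdk.key-null.XXX)

python3 - "$key" > "$file" <<'EOF'
import base64, binascii, sys
raw = bytes.fromhex(sys.argv[1])
crc = binascii.crc32(raw).to_bytes(4, "little")  # assumed: CRC-32 of the secret is appended
print("DHHC-1:00:" + base64.b64encode(raw + crc).decode() + ":")
EOF

chmod 0600 "$file"
echo "$file"                                     # this path is what ends up in keys[0]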
00:16:21.450 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:16:21.450 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:21.450 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=2788ac8f3b3648df0bd31cf045ea4c86 00:16:21.450 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:16:21.450 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.1Z7 00:16:21.450 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 2788ac8f3b3648df0bd31cf045ea4c86 1 00:16:21.450 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 2788ac8f3b3648df0bd31cf045ea4c86 1 00:16:21.450 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:21.450 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:21.450 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=2788ac8f3b3648df0bd31cf045ea4c86 00:16:21.450 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:16:21.450 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:21.450 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.1Z7 00:16:21.450 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.1Z7 00:16:21.450 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.1Z7 00:16:21.450 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:16:21.450 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:21.450 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:21.450 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:21.450 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:16:21.450 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=8267e72e8ab7dd28a7709e74f23a644c01ba17fa52f0a8bd 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.Fst 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 8267e72e8ab7dd28a7709e74f23a644c01ba17fa52f0a8bd 2 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 8267e72e8ab7dd28a7709e74f23a644c01ba17fa52f0a8bd 2 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:21.712 11:53:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=8267e72e8ab7dd28a7709e74f23a644c01ba17fa52f0a8bd 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.Fst 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.Fst 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.Fst 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=4f3b8a35ec492cd7c7209866517713ccb253ebabfbd4dcd5 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.AW7 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 4f3b8a35ec492cd7c7209866517713ccb253ebabfbd4dcd5 2 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 4f3b8a35ec492cd7c7209866517713ccb253ebabfbd4dcd5 2 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=4f3b8a35ec492cd7c7209866517713ccb253ebabfbd4dcd5 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.AW7 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.AW7 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.AW7 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 
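In each of these passes, format_dhchap_key forwards to format_key DHHC-1 <hex-key> <digest-id>, and the actual encoding happens inside the "python -" heredoc that xtrace cannot display. The sketch below is a hedged stand-in for that step, not the real nvmf/common.sh code: it assumes the DH-HMAC-CHAP secret representation of base64(key characters + CRC-32 of the key), and the little-endian CRC byte order is an assumption rather than something visible in this log. What is visible is that the base64 payloads in the DHHC-1 strings further down decode back to the ASCII hex keys generated here.

    format_dhchap_key_sketch() {
        # Hedged equivalent of the traced "python -" step; CRC byte order is assumed.
        local key=$1 digest=$2   # digest id: 0=null, 1=sha256, 2=sha384, 3=sha512
        python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); c=zlib.crc32(k).to_bytes(4,"little"); print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k+c).decode()), end="")' "$key" "$digest"
    }

Called as format_dhchap_key_sketch 2788ac8f3b3648df0bd31cf045ea4c86 1, it prints a DHHC-1:01:...: string of the same shape as the --dhchap-secret values passed to nvme connect later in this log.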
00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=13367674f14216f73c9798f146bb0b4a 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.SE8 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 13367674f14216f73c9798f146bb0b4a 1 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 13367674f14216f73c9798f146bb0b4a 1 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=13367674f14216f73c9798f146bb0b4a 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.SE8 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.SE8 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.SE8 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=976efbe0abcf353bd0e88aff015fd65263ddd0c315fc0920a371f3502d527e0f 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.7sl 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # 
format_dhchap_key 976efbe0abcf353bd0e88aff015fd65263ddd0c315fc0920a371f3502d527e0f 3 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 976efbe0abcf353bd0e88aff015fd65263ddd0c315fc0920a371f3502d527e0f 3 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=976efbe0abcf353bd0e88aff015fd65263ddd0c315fc0920a371f3502d527e0f 00:16:21.712 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:16:21.713 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:21.974 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.7sl 00:16:21.974 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.7sl 00:16:21.974 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.7sl 00:16:21.974 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:16:21.974 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1900051 00:16:21.974 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1900051 ']' 00:16:21.974 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:21.974 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:21.974 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:21.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:21.974 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:21.974 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.974 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:21.974 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:21.974 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1900090 /var/tmp/host.sock 00:16:21.974 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1900090 ']' 00:16:21.974 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:16:21.974 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:21.974 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:21.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
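With both daemons reachable (the nvmf target on /var/tmp/spdk.sock and the host-side application on /var/tmp/host.sock), auth.sh registers every generated key file in both keyrings. The entries that follow are one iteration of that loop per key index; the sketch below condenses the pattern they trace, using the same rpc_cmd and hostrpc wrappers that appear throughout this log.

    # Pattern traced in the keyring_file_add_key calls below (a sketch, not the auth.sh source).
    for i in "${!keys[@]}"; do
        rpc_cmd keyring_file_add_key "key$i" "${keys[$i]}"           # target side (/var/tmp/spdk.sock)
        hostrpc keyring_file_add_key "key$i" "${keys[$i]}"           # host side  (/var/tmp/host.sock)
        if [[ -n ${ckeys[$i]} ]]; then                               # ckeys[3] is empty, so key3 gets no ctrlr key
            rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[$i]}"
            hostrpc keyring_file_add_key "ckey$i" "${ckeys[$i]}"
        fi
    done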
00:16:21.974 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:21.974 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.236 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:22.236 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:22.236 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:16:22.236 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.236 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.236 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.236 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:22.236 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.jt4 00:16:22.236 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.236 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.236 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.236 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.jt4 00:16:22.236 11:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.jt4 00:16:22.497 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.D4h ]] 00:16:22.497 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.D4h 00:16:22.497 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.497 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.497 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.497 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.D4h 00:16:22.497 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.D4h 00:16:22.758 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:22.758 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.1Z7 00:16:22.758 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.758 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.758 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.758 11:53:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.1Z7 00:16:22.758 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.1Z7 00:16:22.758 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.Fst ]] 00:16:22.758 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Fst 00:16:22.758 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.758 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.759 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.759 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Fst 00:16:22.759 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Fst 00:16:23.020 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:23.020 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.AW7 00:16:23.020 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.020 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.020 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.020 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.AW7 00:16:23.020 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.AW7 00:16:23.282 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.SE8 ]] 00:16:23.282 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.SE8 00:16:23.282 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.282 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.282 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.282 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.SE8 00:16:23.282 11:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.SE8 00:16:23.543 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:23.543 11:53:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.7sl 00:16:23.543 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.543 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.543 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.543 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.7sl 00:16:23.543 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.7sl 00:16:23.544 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:16:23.544 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:23.544 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:23.544 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:23.544 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:23.544 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:23.806 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:16:23.806 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:23.806 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:23.806 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:23.806 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:23.806 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.806 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.806 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.806 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.806 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.806 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.806 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.806 
11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.067 00:16:24.067 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:24.067 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:24.067 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.329 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.329 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.329 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.329 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.329 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.329 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:24.329 { 00:16:24.329 "cntlid": 1, 00:16:24.329 "qid": 0, 00:16:24.329 "state": "enabled", 00:16:24.329 "thread": "nvmf_tgt_poll_group_000", 00:16:24.329 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:24.329 "listen_address": { 00:16:24.329 "trtype": "TCP", 00:16:24.329 "adrfam": "IPv4", 00:16:24.329 "traddr": "10.0.0.2", 00:16:24.329 "trsvcid": "4420" 00:16:24.329 }, 00:16:24.329 "peer_address": { 00:16:24.329 "trtype": "TCP", 00:16:24.329 "adrfam": "IPv4", 00:16:24.329 "traddr": "10.0.0.1", 00:16:24.329 "trsvcid": "58814" 00:16:24.329 }, 00:16:24.329 "auth": { 00:16:24.329 "state": "completed", 00:16:24.329 "digest": "sha256", 00:16:24.329 "dhgroup": "null" 00:16:24.329 } 00:16:24.329 } 00:16:24.329 ]' 00:16:24.329 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:24.329 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:24.329 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:24.329 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:24.329 11:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:24.329 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.329 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.329 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.590 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:NDc3MDM5NDQ3OTRkODk4NTY2OWRlYWZlYTNmYTU4MTE3NWQ3ZGI1YjJlYmIxZGYxrbSTgw==: --dhchap-ctrl-secret DHHC-1:03:YzA4YjA2M2RlYzJlYWU5ODYxYjYyMjYyOTM2NmIzMmI3NmRlYjQ2NjM1MmIzNDVhOTU4YjgyMjFhMWMxM2IxMQq64Kc=: 00:16:24.590 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NDc3MDM5NDQ3OTRkODk4NTY2OWRlYWZlYTNmYTU4MTE3NWQ3ZGI1YjJlYmIxZGYxrbSTgw==: --dhchap-ctrl-secret DHHC-1:03:YzA4YjA2M2RlYzJlYWU5ODYxYjYyMjYyOTM2NmIzMmI3NmRlYjQ2NjM1MmIzNDVhOTU4YjgyMjFhMWMxM2IxMQq64Kc=: 00:16:25.161 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.423 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.423 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:25.423 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.423 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.423 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.423 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:25.423 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:25.423 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:25.423 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:16:25.423 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.423 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:25.423 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:25.423 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:25.423 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.423 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.423 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.423 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.423 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.423 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.423 11:53:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.423 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.684 00:16:25.684 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.684 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:25.684 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.945 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.945 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.945 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.945 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.945 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.945 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:25.945 { 00:16:25.945 "cntlid": 3, 00:16:25.945 "qid": 0, 00:16:25.945 "state": "enabled", 00:16:25.945 "thread": "nvmf_tgt_poll_group_000", 00:16:25.945 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:25.945 "listen_address": { 00:16:25.945 "trtype": "TCP", 00:16:25.945 "adrfam": "IPv4", 00:16:25.945 "traddr": "10.0.0.2", 00:16:25.945 "trsvcid": "4420" 00:16:25.945 }, 00:16:25.945 "peer_address": { 00:16:25.945 "trtype": "TCP", 00:16:25.945 "adrfam": "IPv4", 00:16:25.945 "traddr": "10.0.0.1", 00:16:25.945 "trsvcid": "58830" 00:16:25.945 }, 00:16:25.945 "auth": { 00:16:25.945 "state": "completed", 00:16:25.945 "digest": "sha256", 00:16:25.945 "dhgroup": "null" 00:16:25.945 } 00:16:25.945 } 00:16:25.945 ]' 00:16:25.945 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:25.945 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:25.945 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:25.945 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:25.945 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:25.945 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.945 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.945 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.217 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjc4OGFjOGYzYjM2NDhkZjBiZDMxY2YwNDVlYTRjODaFZEms: --dhchap-ctrl-secret DHHC-1:02:ODI2N2U3MmU4YWI3ZGQyOGE3NzA5ZTc0ZjIzYTY0NGMwMWJhMTdmYTUyZjBhOGJk+BkRmQ==: 00:16:26.217 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:Mjc4OGFjOGYzYjM2NDhkZjBiZDMxY2YwNDVlYTRjODaFZEms: --dhchap-ctrl-secret DHHC-1:02:ODI2N2U3MmU4YWI3ZGQyOGE3NzA5ZTc0ZjIzYTY0NGMwMWJhMTdmYTUyZjBhOGJk+BkRmQ==: 00:16:26.859 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.859 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.859 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:26.859 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.859 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.859 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.859 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:26.859 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:26.859 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:27.150 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:16:27.150 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:27.150 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:27.150 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:27.150 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:27.150 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.150 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.150 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.150 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.150 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.150 11:53:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.150 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.150 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.150 00:16:27.410 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:27.410 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:27.410 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.410 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.410 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.411 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.411 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.411 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.411 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:27.411 { 00:16:27.411 "cntlid": 5, 00:16:27.411 "qid": 0, 00:16:27.411 "state": "enabled", 00:16:27.411 "thread": "nvmf_tgt_poll_group_000", 00:16:27.411 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:27.411 "listen_address": { 00:16:27.411 "trtype": "TCP", 00:16:27.411 "adrfam": "IPv4", 00:16:27.411 "traddr": "10.0.0.2", 00:16:27.411 "trsvcid": "4420" 00:16:27.411 }, 00:16:27.411 "peer_address": { 00:16:27.411 "trtype": "TCP", 00:16:27.411 "adrfam": "IPv4", 00:16:27.411 "traddr": "10.0.0.1", 00:16:27.411 "trsvcid": "58860" 00:16:27.411 }, 00:16:27.411 "auth": { 00:16:27.411 "state": "completed", 00:16:27.411 "digest": "sha256", 00:16:27.411 "dhgroup": "null" 00:16:27.411 } 00:16:27.411 } 00:16:27.411 ]' 00:16:27.411 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:27.411 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:27.411 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:27.671 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:27.671 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:27.671 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.671 11:53:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.671 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.932 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGYzYjhhMzVlYzQ5MmNkN2M3MjA5ODY2NTE3NzEzY2NiMjUzZWJhYmZiZDRkY2Q1YfWpxQ==: --dhchap-ctrl-secret DHHC-1:01:MTMzNjc2NzRmMTQyMTZmNzNjOTc5OGYxNDZiYjBiNGFFHSyW: 00:16:27.932 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NGYzYjhhMzVlYzQ5MmNkN2M3MjA5ODY2NTE3NzEzY2NiMjUzZWJhYmZiZDRkY2Q1YfWpxQ==: --dhchap-ctrl-secret DHHC-1:01:MTMzNjc2NzRmMTQyMTZmNzNjOTc5OGYxNDZiYjBiNGFFHSyW: 00:16:28.503 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.503 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.503 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:28.503 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.503 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.503 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.503 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:28.503 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:28.503 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:28.764 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:16:28.764 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:28.764 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:28.764 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:28.764 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:28.764 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.764 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:16:28.764 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.764 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:28.764 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.764 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:28.764 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:28.764 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:28.764 00:16:29.026 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:29.026 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:29.026 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.026 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.026 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.026 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.026 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.026 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.026 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:29.026 { 00:16:29.026 "cntlid": 7, 00:16:29.026 "qid": 0, 00:16:29.026 "state": "enabled", 00:16:29.026 "thread": "nvmf_tgt_poll_group_000", 00:16:29.026 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:29.026 "listen_address": { 00:16:29.026 "trtype": "TCP", 00:16:29.026 "adrfam": "IPv4", 00:16:29.026 "traddr": "10.0.0.2", 00:16:29.026 "trsvcid": "4420" 00:16:29.026 }, 00:16:29.026 "peer_address": { 00:16:29.026 "trtype": "TCP", 00:16:29.026 "adrfam": "IPv4", 00:16:29.026 "traddr": "10.0.0.1", 00:16:29.026 "trsvcid": "39990" 00:16:29.026 }, 00:16:29.026 "auth": { 00:16:29.026 "state": "completed", 00:16:29.026 "digest": "sha256", 00:16:29.026 "dhgroup": "null" 00:16:29.026 } 00:16:29.026 } 00:16:29.026 ]' 00:16:29.026 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:29.026 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:29.026 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:29.287 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:29.287 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:29.287 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.287 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.287 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.287 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTc2ZWZiZTBhYmNmMzUzYmQwZTg4YWZmMDE1ZmQ2NTI2M2RkZDBjMzE1ZmMwOTIwYTM3MWYzNTAyZDUyN2UwZp6eYuc=: 00:16:29.287 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:OTc2ZWZiZTBhYmNmMzUzYmQwZTg4YWZmMDE1ZmQ2NTI2M2RkZDBjMzE1ZmMwOTIwYTM3MWYzNTAyZDUyN2UwZp6eYuc=: 00:16:30.228 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.228 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.228 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:30.228 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.228 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.228 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.229 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:30.229 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:30.229 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:30.229 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:30.229 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:16:30.229 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:30.229 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:30.229 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:30.229 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:30.229 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.229 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:30.229 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.229 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.229 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.229 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:30.229 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:30.229 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:30.490 00:16:30.490 11:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:30.490 11:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.490 11:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:30.751 11:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.751 11:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.751 11:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.751 11:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.751 11:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.751 11:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.751 { 00:16:30.751 "cntlid": 9, 00:16:30.751 "qid": 0, 00:16:30.751 "state": "enabled", 00:16:30.751 "thread": "nvmf_tgt_poll_group_000", 00:16:30.751 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:30.751 "listen_address": { 00:16:30.751 "trtype": "TCP", 00:16:30.751 "adrfam": "IPv4", 00:16:30.751 "traddr": "10.0.0.2", 00:16:30.751 "trsvcid": "4420" 00:16:30.751 }, 00:16:30.751 "peer_address": { 00:16:30.751 "trtype": "TCP", 00:16:30.751 "adrfam": "IPv4", 00:16:30.751 "traddr": "10.0.0.1", 00:16:30.751 "trsvcid": "40008" 00:16:30.751 }, 00:16:30.751 "auth": { 00:16:30.751 "state": "completed", 00:16:30.751 "digest": "sha256", 00:16:30.751 "dhgroup": "ffdhe2048" 00:16:30.751 } 00:16:30.751 } 00:16:30.751 ]' 00:16:30.751 11:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.751 11:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:30.751 11:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.751 11:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:16:30.751 11:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.751 11:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.751 11:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.751 11:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.012 11:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDc3MDM5NDQ3OTRkODk4NTY2OWRlYWZlYTNmYTU4MTE3NWQ3ZGI1YjJlYmIxZGYxrbSTgw==: --dhchap-ctrl-secret DHHC-1:03:YzA4YjA2M2RlYzJlYWU5ODYxYjYyMjYyOTM2NmIzMmI3NmRlYjQ2NjM1MmIzNDVhOTU4YjgyMjFhMWMxM2IxMQq64Kc=: 00:16:31.012 11:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NDc3MDM5NDQ3OTRkODk4NTY2OWRlYWZlYTNmYTU4MTE3NWQ3ZGI1YjJlYmIxZGYxrbSTgw==: --dhchap-ctrl-secret DHHC-1:03:YzA4YjA2M2RlYzJlYWU5ODYxYjYyMjYyOTM2NmIzMmI3NmRlYjQ2NjM1MmIzNDVhOTU4YjgyMjFhMWMxM2IxMQq64Kc=: 00:16:31.581 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.581 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.581 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:31.581 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.581 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.581 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.581 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:31.581 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:31.581 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:31.841 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:16:31.841 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:31.841 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:31.841 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:31.841 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:31.841 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.841 11:53:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.841 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.841 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.841 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.841 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.841 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.841 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.101 00:16:32.101 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:32.102 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:32.102 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.362 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.362 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.362 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.362 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.362 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.362 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:32.362 { 00:16:32.362 "cntlid": 11, 00:16:32.362 "qid": 0, 00:16:32.362 "state": "enabled", 00:16:32.362 "thread": "nvmf_tgt_poll_group_000", 00:16:32.362 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:32.362 "listen_address": { 00:16:32.362 "trtype": "TCP", 00:16:32.362 "adrfam": "IPv4", 00:16:32.362 "traddr": "10.0.0.2", 00:16:32.362 "trsvcid": "4420" 00:16:32.362 }, 00:16:32.362 "peer_address": { 00:16:32.362 "trtype": "TCP", 00:16:32.362 "adrfam": "IPv4", 00:16:32.362 "traddr": "10.0.0.1", 00:16:32.362 "trsvcid": "40052" 00:16:32.362 }, 00:16:32.362 "auth": { 00:16:32.362 "state": "completed", 00:16:32.362 "digest": "sha256", 00:16:32.362 "dhgroup": "ffdhe2048" 00:16:32.362 } 00:16:32.362 } 00:16:32.362 ]' 00:16:32.363 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:32.363 11:53:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:32.363 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:32.363 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:32.363 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:32.363 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.363 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.363 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.623 11:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjc4OGFjOGYzYjM2NDhkZjBiZDMxY2YwNDVlYTRjODaFZEms: --dhchap-ctrl-secret DHHC-1:02:ODI2N2U3MmU4YWI3ZGQyOGE3NzA5ZTc0ZjIzYTY0NGMwMWJhMTdmYTUyZjBhOGJk+BkRmQ==: 00:16:32.623 11:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:Mjc4OGFjOGYzYjM2NDhkZjBiZDMxY2YwNDVlYTRjODaFZEms: --dhchap-ctrl-secret DHHC-1:02:ODI2N2U3MmU4YWI3ZGQyOGE3NzA5ZTc0ZjIzYTY0NGMwMWJhMTdmYTUyZjBhOGJk+BkRmQ==: 00:16:33.193 11:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.194 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.194 11:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:33.194 11:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.194 11:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.194 11:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.194 11:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:33.194 11:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:33.194 11:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:33.455 11:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:16:33.455 11:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:33.455 11:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:33.455 11:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:33.455 11:53:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:33.455 11:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.455 11:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.455 11:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.455 11:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.455 11:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.455 11:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.455 11:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.455 11:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.716 00:16:33.716 11:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:33.716 11:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.716 11:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.716 11:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.716 11:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.716 11:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.716 11:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.716 11:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.716 11:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:33.716 { 00:16:33.716 "cntlid": 13, 00:16:33.716 "qid": 0, 00:16:33.716 "state": "enabled", 00:16:33.716 "thread": "nvmf_tgt_poll_group_000", 00:16:33.716 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:33.716 "listen_address": { 00:16:33.716 "trtype": "TCP", 00:16:33.716 "adrfam": "IPv4", 00:16:33.716 "traddr": "10.0.0.2", 00:16:33.716 "trsvcid": "4420" 00:16:33.716 }, 00:16:33.716 "peer_address": { 00:16:33.716 "trtype": "TCP", 00:16:33.716 "adrfam": "IPv4", 00:16:33.716 "traddr": "10.0.0.1", 00:16:33.716 "trsvcid": "40084" 00:16:33.716 }, 00:16:33.716 "auth": { 00:16:33.716 "state": "completed", 00:16:33.716 "digest": 
"sha256", 00:16:33.716 "dhgroup": "ffdhe2048" 00:16:33.716 } 00:16:33.716 } 00:16:33.716 ]' 00:16:33.716 11:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:33.977 11:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:33.977 11:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:33.977 11:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:33.977 11:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:33.977 11:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.978 11:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.978 11:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.238 11:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGYzYjhhMzVlYzQ5MmNkN2M3MjA5ODY2NTE3NzEzY2NiMjUzZWJhYmZiZDRkY2Q1YfWpxQ==: --dhchap-ctrl-secret DHHC-1:01:MTMzNjc2NzRmMTQyMTZmNzNjOTc5OGYxNDZiYjBiNGFFHSyW: 00:16:34.238 11:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NGYzYjhhMzVlYzQ5MmNkN2M3MjA5ODY2NTE3NzEzY2NiMjUzZWJhYmZiZDRkY2Q1YfWpxQ==: --dhchap-ctrl-secret DHHC-1:01:MTMzNjc2NzRmMTQyMTZmNzNjOTc5OGYxNDZiYjBiNGFFHSyW: 00:16:34.810 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.810 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.810 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:34.810 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.810 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.810 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.810 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:34.810 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:34.810 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:35.070 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:16:35.070 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:35.070 11:53:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:35.071 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:35.071 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:35.071 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.071 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:16:35.071 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.071 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.071 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.071 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:35.071 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:35.071 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:35.332 00:16:35.333 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:35.333 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:35.333 11:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.333 11:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.333 11:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.333 11:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.333 11:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.333 11:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.595 11:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:35.595 { 00:16:35.595 "cntlid": 15, 00:16:35.595 "qid": 0, 00:16:35.595 "state": "enabled", 00:16:35.595 "thread": "nvmf_tgt_poll_group_000", 00:16:35.595 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:35.595 "listen_address": { 00:16:35.595 "trtype": "TCP", 00:16:35.595 "adrfam": "IPv4", 00:16:35.595 "traddr": "10.0.0.2", 00:16:35.595 "trsvcid": "4420" 00:16:35.595 }, 00:16:35.595 "peer_address": { 00:16:35.595 "trtype": "TCP", 00:16:35.595 "adrfam": "IPv4", 00:16:35.595 "traddr": "10.0.0.1", 00:16:35.595 
"trsvcid": "40126" 00:16:35.595 }, 00:16:35.595 "auth": { 00:16:35.595 "state": "completed", 00:16:35.595 "digest": "sha256", 00:16:35.595 "dhgroup": "ffdhe2048" 00:16:35.595 } 00:16:35.595 } 00:16:35.595 ]' 00:16:35.595 11:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:35.595 11:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:35.595 11:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:35.595 11:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:35.595 11:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:35.595 11:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.595 11:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.595 11:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.856 11:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTc2ZWZiZTBhYmNmMzUzYmQwZTg4YWZmMDE1ZmQ2NTI2M2RkZDBjMzE1ZmMwOTIwYTM3MWYzNTAyZDUyN2UwZp6eYuc=: 00:16:35.856 11:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:OTc2ZWZiZTBhYmNmMzUzYmQwZTg4YWZmMDE1ZmQ2NTI2M2RkZDBjMzE1ZmMwOTIwYTM3MWYzNTAyZDUyN2UwZp6eYuc=: 00:16:36.427 11:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.427 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.427 11:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:36.427 11:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.427 11:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.427 11:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.427 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:36.427 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:36.427 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:36.427 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:36.688 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:16:36.688 11:53:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:36.688 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:36.688 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:36.688 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:36.688 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.688 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.688 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.688 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.688 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.688 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.688 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.688 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.949 00:16:36.949 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.949 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.949 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.949 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.949 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.949 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.949 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.949 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.949 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.949 { 00:16:36.949 "cntlid": 17, 00:16:36.949 "qid": 0, 00:16:36.949 "state": "enabled", 00:16:36.949 "thread": "nvmf_tgt_poll_group_000", 00:16:36.949 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:36.949 "listen_address": { 00:16:36.949 "trtype": "TCP", 00:16:36.949 "adrfam": "IPv4", 
00:16:36.949 "traddr": "10.0.0.2", 00:16:36.949 "trsvcid": "4420" 00:16:36.949 }, 00:16:36.949 "peer_address": { 00:16:36.949 "trtype": "TCP", 00:16:36.949 "adrfam": "IPv4", 00:16:36.949 "traddr": "10.0.0.1", 00:16:36.949 "trsvcid": "40158" 00:16:36.949 }, 00:16:36.949 "auth": { 00:16:36.949 "state": "completed", 00:16:36.949 "digest": "sha256", 00:16:36.949 "dhgroup": "ffdhe3072" 00:16:36.949 } 00:16:36.949 } 00:16:36.949 ]' 00:16:36.949 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:37.209 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:37.209 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:37.209 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:37.209 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.209 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.209 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.209 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.468 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDc3MDM5NDQ3OTRkODk4NTY2OWRlYWZlYTNmYTU4MTE3NWQ3ZGI1YjJlYmIxZGYxrbSTgw==: --dhchap-ctrl-secret DHHC-1:03:YzA4YjA2M2RlYzJlYWU5ODYxYjYyMjYyOTM2NmIzMmI3NmRlYjQ2NjM1MmIzNDVhOTU4YjgyMjFhMWMxM2IxMQq64Kc=: 00:16:37.468 11:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NDc3MDM5NDQ3OTRkODk4NTY2OWRlYWZlYTNmYTU4MTE3NWQ3ZGI1YjJlYmIxZGYxrbSTgw==: --dhchap-ctrl-secret DHHC-1:03:YzA4YjA2M2RlYzJlYWU5ODYxYjYyMjYyOTM2NmIzMmI3NmRlYjQ2NjM1MmIzNDVhOTU4YjgyMjFhMWMxM2IxMQq64Kc=: 00:16:38.037 11:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.037 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.037 11:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:38.037 11:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.037 11:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.037 11:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.037 11:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.037 11:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:38.037 11:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:38.297 11:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:16:38.297 11:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.297 11:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:38.297 11:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:38.297 11:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:38.297 11:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.297 11:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.297 11:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.297 11:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.297 11:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.297 11:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.297 11:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.297 11:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.556 00:16:38.556 11:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.556 11:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:38.556 11:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.816 11:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.816 11:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.816 11:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.816 11:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.816 11:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.816 11:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.816 { 
00:16:38.816 "cntlid": 19, 00:16:38.816 "qid": 0, 00:16:38.816 "state": "enabled", 00:16:38.816 "thread": "nvmf_tgt_poll_group_000", 00:16:38.816 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:38.816 "listen_address": { 00:16:38.816 "trtype": "TCP", 00:16:38.816 "adrfam": "IPv4", 00:16:38.816 "traddr": "10.0.0.2", 00:16:38.816 "trsvcid": "4420" 00:16:38.816 }, 00:16:38.816 "peer_address": { 00:16:38.816 "trtype": "TCP", 00:16:38.816 "adrfam": "IPv4", 00:16:38.816 "traddr": "10.0.0.1", 00:16:38.816 "trsvcid": "37768" 00:16:38.816 }, 00:16:38.816 "auth": { 00:16:38.816 "state": "completed", 00:16:38.816 "digest": "sha256", 00:16:38.816 "dhgroup": "ffdhe3072" 00:16:38.816 } 00:16:38.816 } 00:16:38.816 ]' 00:16:38.816 11:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.816 11:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:38.816 11:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.816 11:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:38.816 11:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.816 11:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.816 11:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.816 11:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.076 11:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjc4OGFjOGYzYjM2NDhkZjBiZDMxY2YwNDVlYTRjODaFZEms: --dhchap-ctrl-secret DHHC-1:02:ODI2N2U3MmU4YWI3ZGQyOGE3NzA5ZTc0ZjIzYTY0NGMwMWJhMTdmYTUyZjBhOGJk+BkRmQ==: 00:16:39.076 11:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:Mjc4OGFjOGYzYjM2NDhkZjBiZDMxY2YwNDVlYTRjODaFZEms: --dhchap-ctrl-secret DHHC-1:02:ODI2N2U3MmU4YWI3ZGQyOGE3NzA5ZTc0ZjIzYTY0NGMwMWJhMTdmYTUyZjBhOGJk+BkRmQ==: 00:16:39.645 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.645 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.645 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:39.645 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.645 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.645 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.645 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:39.645 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:39.645 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:39.904 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:16:39.904 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.904 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:39.904 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:39.904 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:39.904 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.904 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.904 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.904 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.904 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.904 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.904 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.904 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.163 00:16:40.163 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.163 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.163 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.422 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.422 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.422 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.422 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.422 11:53:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.422 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:40.422 { 00:16:40.422 "cntlid": 21, 00:16:40.422 "qid": 0, 00:16:40.422 "state": "enabled", 00:16:40.422 "thread": "nvmf_tgt_poll_group_000", 00:16:40.422 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:40.422 "listen_address": { 00:16:40.422 "trtype": "TCP", 00:16:40.422 "adrfam": "IPv4", 00:16:40.422 "traddr": "10.0.0.2", 00:16:40.422 "trsvcid": "4420" 00:16:40.422 }, 00:16:40.422 "peer_address": { 00:16:40.422 "trtype": "TCP", 00:16:40.422 "adrfam": "IPv4", 00:16:40.422 "traddr": "10.0.0.1", 00:16:40.422 "trsvcid": "37794" 00:16:40.422 }, 00:16:40.422 "auth": { 00:16:40.422 "state": "completed", 00:16:40.422 "digest": "sha256", 00:16:40.422 "dhgroup": "ffdhe3072" 00:16:40.422 } 00:16:40.422 } 00:16:40.422 ]' 00:16:40.422 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:40.422 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:40.422 11:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.422 11:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:40.422 11:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:40.422 11:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.422 11:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.422 11:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.682 11:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGYzYjhhMzVlYzQ5MmNkN2M3MjA5ODY2NTE3NzEzY2NiMjUzZWJhYmZiZDRkY2Q1YfWpxQ==: --dhchap-ctrl-secret DHHC-1:01:MTMzNjc2NzRmMTQyMTZmNzNjOTc5OGYxNDZiYjBiNGFFHSyW: 00:16:40.682 11:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NGYzYjhhMzVlYzQ5MmNkN2M3MjA5ODY2NTE3NzEzY2NiMjUzZWJhYmZiZDRkY2Q1YfWpxQ==: --dhchap-ctrl-secret DHHC-1:01:MTMzNjc2NzRmMTQyMTZmNzNjOTc5OGYxNDZiYjBiNGFFHSyW: 00:16:41.252 11:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.252 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.252 11:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:41.252 11:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.252 11:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.252 11:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:16:41.252 11:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.252 11:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:41.252 11:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:41.511 11:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:16:41.511 11:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.511 11:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:41.511 11:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:41.511 11:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:41.511 11:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.511 11:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:16:41.511 11:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.511 11:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.511 11:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.511 11:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:41.511 11:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:41.511 11:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:41.770 00:16:41.770 11:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:41.770 11:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.770 11:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.030 11:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.030 11:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.030 11:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.030 11:53:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.030 11:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.030 11:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.030 { 00:16:42.030 "cntlid": 23, 00:16:42.030 "qid": 0, 00:16:42.030 "state": "enabled", 00:16:42.030 "thread": "nvmf_tgt_poll_group_000", 00:16:42.030 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:42.030 "listen_address": { 00:16:42.030 "trtype": "TCP", 00:16:42.030 "adrfam": "IPv4", 00:16:42.030 "traddr": "10.0.0.2", 00:16:42.030 "trsvcid": "4420" 00:16:42.030 }, 00:16:42.030 "peer_address": { 00:16:42.030 "trtype": "TCP", 00:16:42.030 "adrfam": "IPv4", 00:16:42.030 "traddr": "10.0.0.1", 00:16:42.030 "trsvcid": "37816" 00:16:42.030 }, 00:16:42.030 "auth": { 00:16:42.030 "state": "completed", 00:16:42.030 "digest": "sha256", 00:16:42.030 "dhgroup": "ffdhe3072" 00:16:42.030 } 00:16:42.030 } 00:16:42.030 ]' 00:16:42.030 11:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.030 11:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:42.030 11:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.030 11:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:42.030 11:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.030 11:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.030 11:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.030 11:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.290 11:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTc2ZWZiZTBhYmNmMzUzYmQwZTg4YWZmMDE1ZmQ2NTI2M2RkZDBjMzE1ZmMwOTIwYTM3MWYzNTAyZDUyN2UwZp6eYuc=: 00:16:42.290 11:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:OTc2ZWZiZTBhYmNmMzUzYmQwZTg4YWZmMDE1ZmQ2NTI2M2RkZDBjMzE1ZmMwOTIwYTM3MWYzNTAyZDUyN2UwZp6eYuc=: 00:16:42.861 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.861 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.861 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:42.861 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.861 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.861 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:16:42.861 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:42.861 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:42.861 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:42.861 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:43.121 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:16:43.121 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.121 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:43.121 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:43.122 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:43.122 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.122 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.122 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.122 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.122 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.122 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.122 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.122 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.385 00:16:43.385 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.385 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.385 11:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.645 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.645 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.645 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.645 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.645 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.645 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.645 { 00:16:43.645 "cntlid": 25, 00:16:43.645 "qid": 0, 00:16:43.645 "state": "enabled", 00:16:43.645 "thread": "nvmf_tgt_poll_group_000", 00:16:43.645 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:43.645 "listen_address": { 00:16:43.645 "trtype": "TCP", 00:16:43.645 "adrfam": "IPv4", 00:16:43.645 "traddr": "10.0.0.2", 00:16:43.645 "trsvcid": "4420" 00:16:43.645 }, 00:16:43.645 "peer_address": { 00:16:43.645 "trtype": "TCP", 00:16:43.645 "adrfam": "IPv4", 00:16:43.645 "traddr": "10.0.0.1", 00:16:43.645 "trsvcid": "37840" 00:16:43.645 }, 00:16:43.645 "auth": { 00:16:43.645 "state": "completed", 00:16:43.645 "digest": "sha256", 00:16:43.645 "dhgroup": "ffdhe4096" 00:16:43.645 } 00:16:43.645 } 00:16:43.645 ]' 00:16:43.645 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.645 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:43.645 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:43.645 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:43.645 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:43.645 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.645 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.645 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.905 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDc3MDM5NDQ3OTRkODk4NTY2OWRlYWZlYTNmYTU4MTE3NWQ3ZGI1YjJlYmIxZGYxrbSTgw==: --dhchap-ctrl-secret DHHC-1:03:YzA4YjA2M2RlYzJlYWU5ODYxYjYyMjYyOTM2NmIzMmI3NmRlYjQ2NjM1MmIzNDVhOTU4YjgyMjFhMWMxM2IxMQq64Kc=: 00:16:43.905 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NDc3MDM5NDQ3OTRkODk4NTY2OWRlYWZlYTNmYTU4MTE3NWQ3ZGI1YjJlYmIxZGYxrbSTgw==: --dhchap-ctrl-secret DHHC-1:03:YzA4YjA2M2RlYzJlYWU5ODYxYjYyMjYyOTM2NmIzMmI3NmRlYjQ2NjM1MmIzNDVhOTU4YjgyMjFhMWMxM2IxMQq64Kc=: 00:16:44.475 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.475 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.476 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:44.476 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.476 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.476 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.476 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:44.476 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:44.476 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:44.736 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:16:44.736 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:44.736 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:44.736 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:44.736 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:44.736 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.736 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.736 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.736 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.736 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.736 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.736 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.736 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.996 00:16:44.996 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:44.996 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:44.996 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.256 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.256 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.256 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.256 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.256 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.256 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.256 { 00:16:45.256 "cntlid": 27, 00:16:45.256 "qid": 0, 00:16:45.256 "state": "enabled", 00:16:45.256 "thread": "nvmf_tgt_poll_group_000", 00:16:45.256 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:45.256 "listen_address": { 00:16:45.256 "trtype": "TCP", 00:16:45.256 "adrfam": "IPv4", 00:16:45.256 "traddr": "10.0.0.2", 00:16:45.256 "trsvcid": "4420" 00:16:45.256 }, 00:16:45.256 "peer_address": { 00:16:45.256 "trtype": "TCP", 00:16:45.256 "adrfam": "IPv4", 00:16:45.256 "traddr": "10.0.0.1", 00:16:45.256 "trsvcid": "37864" 00:16:45.256 }, 00:16:45.256 "auth": { 00:16:45.256 "state": "completed", 00:16:45.256 "digest": "sha256", 00:16:45.256 "dhgroup": "ffdhe4096" 00:16:45.256 } 00:16:45.256 } 00:16:45.256 ]' 00:16:45.256 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.256 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:45.256 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.256 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:45.256 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.256 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.256 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.256 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.517 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjc4OGFjOGYzYjM2NDhkZjBiZDMxY2YwNDVlYTRjODaFZEms: --dhchap-ctrl-secret DHHC-1:02:ODI2N2U3MmU4YWI3ZGQyOGE3NzA5ZTc0ZjIzYTY0NGMwMWJhMTdmYTUyZjBhOGJk+BkRmQ==: 00:16:45.517 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:Mjc4OGFjOGYzYjM2NDhkZjBiZDMxY2YwNDVlYTRjODaFZEms: --dhchap-ctrl-secret DHHC-1:02:ODI2N2U3MmU4YWI3ZGQyOGE3NzA5ZTc0ZjIzYTY0NGMwMWJhMTdmYTUyZjBhOGJk+BkRmQ==: 00:16:46.089 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:16:46.089 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.089 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:46.089 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.089 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.089 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.089 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.089 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:46.089 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:46.349 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:16:46.349 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.349 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:46.349 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:46.349 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:46.349 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.349 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.349 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.349 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.349 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.349 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.349 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.349 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.610 00:16:46.610 11:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
00:16:46.610 11:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:46.610 11:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.871 11:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.871 11:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.871 11:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.871 11:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.871 11:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.871 11:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:46.871 { 00:16:46.871 "cntlid": 29, 00:16:46.871 "qid": 0, 00:16:46.871 "state": "enabled", 00:16:46.871 "thread": "nvmf_tgt_poll_group_000", 00:16:46.871 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:46.871 "listen_address": { 00:16:46.871 "trtype": "TCP", 00:16:46.871 "adrfam": "IPv4", 00:16:46.871 "traddr": "10.0.0.2", 00:16:46.871 "trsvcid": "4420" 00:16:46.871 }, 00:16:46.871 "peer_address": { 00:16:46.871 "trtype": "TCP", 00:16:46.871 "adrfam": "IPv4", 00:16:46.871 "traddr": "10.0.0.1", 00:16:46.871 "trsvcid": "37896" 00:16:46.871 }, 00:16:46.871 "auth": { 00:16:46.871 "state": "completed", 00:16:46.871 "digest": "sha256", 00:16:46.871 "dhgroup": "ffdhe4096" 00:16:46.871 } 00:16:46.871 } 00:16:46.871 ]' 00:16:46.871 11:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:46.871 11:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:46.871 11:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.871 11:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:46.871 11:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.871 11:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.871 11:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.871 11:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.132 11:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGYzYjhhMzVlYzQ5MmNkN2M3MjA5ODY2NTE3NzEzY2NiMjUzZWJhYmZiZDRkY2Q1YfWpxQ==: --dhchap-ctrl-secret DHHC-1:01:MTMzNjc2NzRmMTQyMTZmNzNjOTc5OGYxNDZiYjBiNGFFHSyW: 00:16:47.132 11:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NGYzYjhhMzVlYzQ5MmNkN2M3MjA5ODY2NTE3NzEzY2NiMjUzZWJhYmZiZDRkY2Q1YfWpxQ==: 
--dhchap-ctrl-secret DHHC-1:01:MTMzNjc2NzRmMTQyMTZmNzNjOTc5OGYxNDZiYjBiNGFFHSyW: 00:16:48.073 11:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.073 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.073 11:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:48.073 11:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.073 11:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.073 11:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.073 11:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.073 11:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:48.073 11:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:48.073 11:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:16:48.073 11:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:48.073 11:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:48.073 11:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:48.073 11:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:48.073 11:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.073 11:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:16:48.073 11:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.073 11:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.073 11:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.073 11:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:48.073 11:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:48.074 11:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:48.335 00:16:48.335 11:53:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:48.335 11:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.335 11:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.595 11:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.595 11:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.595 11:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.595 11:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.595 11:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.595 11:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.595 { 00:16:48.595 "cntlid": 31, 00:16:48.595 "qid": 0, 00:16:48.595 "state": "enabled", 00:16:48.595 "thread": "nvmf_tgt_poll_group_000", 00:16:48.595 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:48.595 "listen_address": { 00:16:48.595 "trtype": "TCP", 00:16:48.595 "adrfam": "IPv4", 00:16:48.595 "traddr": "10.0.0.2", 00:16:48.595 "trsvcid": "4420" 00:16:48.595 }, 00:16:48.595 "peer_address": { 00:16:48.595 "trtype": "TCP", 00:16:48.595 "adrfam": "IPv4", 00:16:48.595 "traddr": "10.0.0.1", 00:16:48.595 "trsvcid": "53776" 00:16:48.595 }, 00:16:48.595 "auth": { 00:16:48.595 "state": "completed", 00:16:48.595 "digest": "sha256", 00:16:48.595 "dhgroup": "ffdhe4096" 00:16:48.595 } 00:16:48.595 } 00:16:48.595 ]' 00:16:48.595 11:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:48.595 11:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:48.595 11:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:48.595 11:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:48.595 11:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:48.595 11:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.595 11:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.595 11:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.855 11:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTc2ZWZiZTBhYmNmMzUzYmQwZTg4YWZmMDE1ZmQ2NTI2M2RkZDBjMzE1ZmMwOTIwYTM3MWYzNTAyZDUyN2UwZp6eYuc=: 00:16:48.855 11:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret 
DHHC-1:03:OTc2ZWZiZTBhYmNmMzUzYmQwZTg4YWZmMDE1ZmQ2NTI2M2RkZDBjMzE1ZmMwOTIwYTM3MWYzNTAyZDUyN2UwZp6eYuc=: 00:16:49.426 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.426 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.426 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:49.426 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.426 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.426 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.426 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:49.426 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.426 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:49.426 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:49.687 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:16:49.687 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.687 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:49.687 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:49.687 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:49.687 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.687 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.687 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.687 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.687 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.687 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.687 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.687 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.948 00:16:49.948 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.948 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.948 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.208 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.208 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.208 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.208 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.208 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.208 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:50.208 { 00:16:50.208 "cntlid": 33, 00:16:50.208 "qid": 0, 00:16:50.208 "state": "enabled", 00:16:50.208 "thread": "nvmf_tgt_poll_group_000", 00:16:50.208 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:50.208 "listen_address": { 00:16:50.208 "trtype": "TCP", 00:16:50.208 "adrfam": "IPv4", 00:16:50.208 "traddr": "10.0.0.2", 00:16:50.208 "trsvcid": "4420" 00:16:50.208 }, 00:16:50.208 "peer_address": { 00:16:50.208 "trtype": "TCP", 00:16:50.208 "adrfam": "IPv4", 00:16:50.208 "traddr": "10.0.0.1", 00:16:50.208 "trsvcid": "53802" 00:16:50.208 }, 00:16:50.208 "auth": { 00:16:50.208 "state": "completed", 00:16:50.208 "digest": "sha256", 00:16:50.208 "dhgroup": "ffdhe6144" 00:16:50.208 } 00:16:50.208 } 00:16:50.208 ]' 00:16:50.208 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:50.208 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:50.208 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:50.208 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:50.208 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:50.208 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.208 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.208 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.468 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDc3MDM5NDQ3OTRkODk4NTY2OWRlYWZlYTNmYTU4MTE3NWQ3ZGI1YjJlYmIxZGYxrbSTgw==: --dhchap-ctrl-secret 
DHHC-1:03:YzA4YjA2M2RlYzJlYWU5ODYxYjYyMjYyOTM2NmIzMmI3NmRlYjQ2NjM1MmIzNDVhOTU4YjgyMjFhMWMxM2IxMQq64Kc=: 00:16:50.468 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NDc3MDM5NDQ3OTRkODk4NTY2OWRlYWZlYTNmYTU4MTE3NWQ3ZGI1YjJlYmIxZGYxrbSTgw==: --dhchap-ctrl-secret DHHC-1:03:YzA4YjA2M2RlYzJlYWU5ODYxYjYyMjYyOTM2NmIzMmI3NmRlYjQ2NjM1MmIzNDVhOTU4YjgyMjFhMWMxM2IxMQq64Kc=: 00:16:51.040 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.301 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.301 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:51.301 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.301 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.301 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.301 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:51.301 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:51.301 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:51.301 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:16:51.301 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:51.301 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:51.301 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:51.301 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:51.301 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.301 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.301 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.301 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.301 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.301 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.301 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.301 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.562 00:16:51.822 11:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:51.822 11:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:51.822 11:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.822 11:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.822 11:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.822 11:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.822 11:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.822 11:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.822 11:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.822 { 00:16:51.822 "cntlid": 35, 00:16:51.822 "qid": 0, 00:16:51.822 "state": "enabled", 00:16:51.822 "thread": "nvmf_tgt_poll_group_000", 00:16:51.822 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:51.822 "listen_address": { 00:16:51.822 "trtype": "TCP", 00:16:51.822 "adrfam": "IPv4", 00:16:51.822 "traddr": "10.0.0.2", 00:16:51.822 "trsvcid": "4420" 00:16:51.822 }, 00:16:51.822 "peer_address": { 00:16:51.822 "trtype": "TCP", 00:16:51.822 "adrfam": "IPv4", 00:16:51.822 "traddr": "10.0.0.1", 00:16:51.822 "trsvcid": "53836" 00:16:51.822 }, 00:16:51.822 "auth": { 00:16:51.822 "state": "completed", 00:16:51.822 "digest": "sha256", 00:16:51.822 "dhgroup": "ffdhe6144" 00:16:51.822 } 00:16:51.822 } 00:16:51.822 ]' 00:16:51.823 11:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.823 11:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:51.823 11:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.083 11:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:52.083 11:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.083 11:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.083 11:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.083 11:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.083 11:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjc4OGFjOGYzYjM2NDhkZjBiZDMxY2YwNDVlYTRjODaFZEms: --dhchap-ctrl-secret DHHC-1:02:ODI2N2U3MmU4YWI3ZGQyOGE3NzA5ZTc0ZjIzYTY0NGMwMWJhMTdmYTUyZjBhOGJk+BkRmQ==: 00:16:52.083 11:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:Mjc4OGFjOGYzYjM2NDhkZjBiZDMxY2YwNDVlYTRjODaFZEms: --dhchap-ctrl-secret DHHC-1:02:ODI2N2U3MmU4YWI3ZGQyOGE3NzA5ZTc0ZjIzYTY0NGMwMWJhMTdmYTUyZjBhOGJk+BkRmQ==: 00:16:53.026 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.026 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.026 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:53.026 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.026 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.026 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.026 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:53.026 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:53.026 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:53.026 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:16:53.026 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:53.026 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:53.026 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:53.026 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:53.026 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.026 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.026 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.026 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.026 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.026 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.026 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.026 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.290 00:16:53.290 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:53.290 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:53.290 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.552 11:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.552 11:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.552 11:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.552 11:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.552 11:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.552 11:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:53.552 { 00:16:53.552 "cntlid": 37, 00:16:53.552 "qid": 0, 00:16:53.552 "state": "enabled", 00:16:53.552 "thread": "nvmf_tgt_poll_group_000", 00:16:53.552 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:53.552 "listen_address": { 00:16:53.552 "trtype": "TCP", 00:16:53.552 "adrfam": "IPv4", 00:16:53.552 "traddr": "10.0.0.2", 00:16:53.552 "trsvcid": "4420" 00:16:53.552 }, 00:16:53.552 "peer_address": { 00:16:53.552 "trtype": "TCP", 00:16:53.552 "adrfam": "IPv4", 00:16:53.552 "traddr": "10.0.0.1", 00:16:53.552 "trsvcid": "53862" 00:16:53.552 }, 00:16:53.552 "auth": { 00:16:53.552 "state": "completed", 00:16:53.552 "digest": "sha256", 00:16:53.552 "dhgroup": "ffdhe6144" 00:16:53.552 } 00:16:53.552 } 00:16:53.552 ]' 00:16:53.552 11:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:53.552 11:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:53.552 11:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:53.552 11:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:53.552 11:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:53.813 11:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.813 11:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:16:53.813 11:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.813 11:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGYzYjhhMzVlYzQ5MmNkN2M3MjA5ODY2NTE3NzEzY2NiMjUzZWJhYmZiZDRkY2Q1YfWpxQ==: --dhchap-ctrl-secret DHHC-1:01:MTMzNjc2NzRmMTQyMTZmNzNjOTc5OGYxNDZiYjBiNGFFHSyW: 00:16:53.813 11:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NGYzYjhhMzVlYzQ5MmNkN2M3MjA5ODY2NTE3NzEzY2NiMjUzZWJhYmZiZDRkY2Q1YfWpxQ==: --dhchap-ctrl-secret DHHC-1:01:MTMzNjc2NzRmMTQyMTZmNzNjOTc5OGYxNDZiYjBiNGFFHSyW: 00:16:54.755 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.755 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.755 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:54.755 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.755 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.755 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.755 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:54.755 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:54.755 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:54.755 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:16:54.755 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.756 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:54.756 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:54.756 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:54.756 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.756 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:16:54.756 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.756 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.756 11:53:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.756 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:54.756 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:54.756 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:55.017 00:16:55.017 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.017 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:55.017 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.278 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.278 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.278 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.278 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.278 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.278 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:55.278 { 00:16:55.278 "cntlid": 39, 00:16:55.278 "qid": 0, 00:16:55.278 "state": "enabled", 00:16:55.278 "thread": "nvmf_tgt_poll_group_000", 00:16:55.278 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:55.278 "listen_address": { 00:16:55.278 "trtype": "TCP", 00:16:55.278 "adrfam": "IPv4", 00:16:55.278 "traddr": "10.0.0.2", 00:16:55.278 "trsvcid": "4420" 00:16:55.278 }, 00:16:55.278 "peer_address": { 00:16:55.278 "trtype": "TCP", 00:16:55.278 "adrfam": "IPv4", 00:16:55.278 "traddr": "10.0.0.1", 00:16:55.278 "trsvcid": "53886" 00:16:55.278 }, 00:16:55.278 "auth": { 00:16:55.278 "state": "completed", 00:16:55.278 "digest": "sha256", 00:16:55.278 "dhgroup": "ffdhe6144" 00:16:55.278 } 00:16:55.278 } 00:16:55.278 ]' 00:16:55.278 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:55.278 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:55.278 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:55.278 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:55.278 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:55.278 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:16:55.278 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.278 11:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.539 11:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTc2ZWZiZTBhYmNmMzUzYmQwZTg4YWZmMDE1ZmQ2NTI2M2RkZDBjMzE1ZmMwOTIwYTM3MWYzNTAyZDUyN2UwZp6eYuc=: 00:16:55.539 11:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:OTc2ZWZiZTBhYmNmMzUzYmQwZTg4YWZmMDE1ZmQ2NTI2M2RkZDBjMzE1ZmMwOTIwYTM3MWYzNTAyZDUyN2UwZp6eYuc=: 00:16:56.109 11:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.109 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.109 11:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:56.109 11:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.109 11:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.110 11:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.110 11:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:56.110 11:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.110 11:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:56.110 11:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:56.370 11:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:16:56.370 11:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:56.370 11:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:56.370 11:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:56.370 11:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:56.370 11:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.370 11:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.370 11:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:56.370 11:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.370 11:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.370 11:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.370 11:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.370 11:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.942 00:16:56.942 11:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.942 11:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:56.942 11:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.202 11:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.202 11:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.202 11:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.202 11:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.202 11:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.202 11:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.202 { 00:16:57.202 "cntlid": 41, 00:16:57.202 "qid": 0, 00:16:57.202 "state": "enabled", 00:16:57.202 "thread": "nvmf_tgt_poll_group_000", 00:16:57.202 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:57.202 "listen_address": { 00:16:57.202 "trtype": "TCP", 00:16:57.202 "adrfam": "IPv4", 00:16:57.202 "traddr": "10.0.0.2", 00:16:57.202 "trsvcid": "4420" 00:16:57.202 }, 00:16:57.202 "peer_address": { 00:16:57.202 "trtype": "TCP", 00:16:57.202 "adrfam": "IPv4", 00:16:57.202 "traddr": "10.0.0.1", 00:16:57.202 "trsvcid": "53918" 00:16:57.202 }, 00:16:57.202 "auth": { 00:16:57.202 "state": "completed", 00:16:57.202 "digest": "sha256", 00:16:57.202 "dhgroup": "ffdhe8192" 00:16:57.202 } 00:16:57.202 } 00:16:57.202 ]' 00:16:57.202 11:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:57.202 11:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:57.202 11:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.202 11:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:57.202 11:53:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.202 11:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.202 11:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.202 11:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.464 11:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDc3MDM5NDQ3OTRkODk4NTY2OWRlYWZlYTNmYTU4MTE3NWQ3ZGI1YjJlYmIxZGYxrbSTgw==: --dhchap-ctrl-secret DHHC-1:03:YzA4YjA2M2RlYzJlYWU5ODYxYjYyMjYyOTM2NmIzMmI3NmRlYjQ2NjM1MmIzNDVhOTU4YjgyMjFhMWMxM2IxMQq64Kc=: 00:16:57.464 11:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NDc3MDM5NDQ3OTRkODk4NTY2OWRlYWZlYTNmYTU4MTE3NWQ3ZGI1YjJlYmIxZGYxrbSTgw==: --dhchap-ctrl-secret DHHC-1:03:YzA4YjA2M2RlYzJlYWU5ODYxYjYyMjYyOTM2NmIzMmI3NmRlYjQ2NjM1MmIzNDVhOTU4YjgyMjFhMWMxM2IxMQq64Kc=: 00:16:58.036 11:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.036 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.036 11:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:58.036 11:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.036 11:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.036 11:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.036 11:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.036 11:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:58.036 11:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:58.355 11:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:16:58.356 11:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.356 11:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:58.356 11:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:58.356 11:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:58.356 11:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.356 11:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.356 11:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.356 11:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.356 11:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.356 11:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.356 11:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.356 11:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.927 00:16:58.927 11:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:58.927 11:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:58.927 11:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.927 11:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.927 11:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.927 11:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.927 11:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.927 11:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.927 11:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:58.927 { 00:16:58.927 "cntlid": 43, 00:16:58.927 "qid": 0, 00:16:58.927 "state": "enabled", 00:16:58.927 "thread": "nvmf_tgt_poll_group_000", 00:16:58.927 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:58.927 "listen_address": { 00:16:58.927 "trtype": "TCP", 00:16:58.927 "adrfam": "IPv4", 00:16:58.927 "traddr": "10.0.0.2", 00:16:58.927 "trsvcid": "4420" 00:16:58.927 }, 00:16:58.927 "peer_address": { 00:16:58.927 "trtype": "TCP", 00:16:58.927 "adrfam": "IPv4", 00:16:58.927 "traddr": "10.0.0.1", 00:16:58.927 "trsvcid": "42400" 00:16:58.927 }, 00:16:58.927 "auth": { 00:16:58.927 "state": "completed", 00:16:58.927 "digest": "sha256", 00:16:58.927 "dhgroup": "ffdhe8192" 00:16:58.927 } 00:16:58.927 } 00:16:58.927 ]' 00:16:58.927 11:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.927 11:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:16:58.927 11:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.188 11:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:59.188 11:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.188 11:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.188 11:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.188 11:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.188 11:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjc4OGFjOGYzYjM2NDhkZjBiZDMxY2YwNDVlYTRjODaFZEms: --dhchap-ctrl-secret DHHC-1:02:ODI2N2U3MmU4YWI3ZGQyOGE3NzA5ZTc0ZjIzYTY0NGMwMWJhMTdmYTUyZjBhOGJk+BkRmQ==: 00:16:59.188 11:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:Mjc4OGFjOGYzYjM2NDhkZjBiZDMxY2YwNDVlYTRjODaFZEms: --dhchap-ctrl-secret DHHC-1:02:ODI2N2U3MmU4YWI3ZGQyOGE3NzA5ZTc0ZjIzYTY0NGMwMWJhMTdmYTUyZjBhOGJk+BkRmQ==: 00:17:00.131 11:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.131 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.131 11:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:00.131 11:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.131 11:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.131 11:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.131 11:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:00.131 11:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:00.131 11:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:00.131 11:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:17:00.131 11:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.131 11:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:00.131 11:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:00.131 11:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:00.131 11:54:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.131 11:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.131 11:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.131 11:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.131 11:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.131 11:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.131 11:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.131 11:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.701 00:17:00.701 11:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:00.701 11:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:00.701 11:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.701 11:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.701 11:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.701 11:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.701 11:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.961 11:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.961 11:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.961 { 00:17:00.961 "cntlid": 45, 00:17:00.961 "qid": 0, 00:17:00.961 "state": "enabled", 00:17:00.961 "thread": "nvmf_tgt_poll_group_000", 00:17:00.961 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:00.961 "listen_address": { 00:17:00.961 "trtype": "TCP", 00:17:00.961 "adrfam": "IPv4", 00:17:00.961 "traddr": "10.0.0.2", 00:17:00.961 "trsvcid": "4420" 00:17:00.961 }, 00:17:00.961 "peer_address": { 00:17:00.961 "trtype": "TCP", 00:17:00.961 "adrfam": "IPv4", 00:17:00.961 "traddr": "10.0.0.1", 00:17:00.961 "trsvcid": "42420" 00:17:00.961 }, 00:17:00.961 "auth": { 00:17:00.961 "state": "completed", 00:17:00.961 "digest": "sha256", 00:17:00.961 "dhgroup": "ffdhe8192" 00:17:00.961 } 00:17:00.961 } 00:17:00.961 ]' 00:17:00.961 
11:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:00.961 11:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:00.961 11:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:00.961 11:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:00.961 11:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:00.961 11:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.961 11:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.961 11:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.221 11:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGYzYjhhMzVlYzQ5MmNkN2M3MjA5ODY2NTE3NzEzY2NiMjUzZWJhYmZiZDRkY2Q1YfWpxQ==: --dhchap-ctrl-secret DHHC-1:01:MTMzNjc2NzRmMTQyMTZmNzNjOTc5OGYxNDZiYjBiNGFFHSyW: 00:17:01.221 11:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NGYzYjhhMzVlYzQ5MmNkN2M3MjA5ODY2NTE3NzEzY2NiMjUzZWJhYmZiZDRkY2Q1YfWpxQ==: --dhchap-ctrl-secret DHHC-1:01:MTMzNjc2NzRmMTQyMTZmNzNjOTc5OGYxNDZiYjBiNGFFHSyW: 00:17:01.791 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.791 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.791 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:01.791 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.791 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.791 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.791 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:01.791 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:01.791 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:02.051 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:17:02.051 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:02.051 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:02.051 11:54:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:02.051 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:02.051 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.051 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:02.051 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.051 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.051 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.051 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:02.051 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:02.051 11:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:02.621 00:17:02.621 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:02.621 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:02.621 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.621 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.621 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.621 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.621 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.621 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.621 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:02.621 { 00:17:02.621 "cntlid": 47, 00:17:02.621 "qid": 0, 00:17:02.621 "state": "enabled", 00:17:02.621 "thread": "nvmf_tgt_poll_group_000", 00:17:02.621 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:02.621 "listen_address": { 00:17:02.621 "trtype": "TCP", 00:17:02.621 "adrfam": "IPv4", 00:17:02.621 "traddr": "10.0.0.2", 00:17:02.621 "trsvcid": "4420" 00:17:02.621 }, 00:17:02.621 "peer_address": { 00:17:02.621 "trtype": "TCP", 00:17:02.621 "adrfam": "IPv4", 00:17:02.621 "traddr": "10.0.0.1", 00:17:02.621 "trsvcid": "42436" 00:17:02.621 }, 00:17:02.621 "auth": { 00:17:02.621 "state": "completed", 00:17:02.621 
"digest": "sha256", 00:17:02.621 "dhgroup": "ffdhe8192" 00:17:02.621 } 00:17:02.621 } 00:17:02.621 ]' 00:17:02.621 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:02.621 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:02.621 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:02.882 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:02.882 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:02.882 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.882 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.882 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.143 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTc2ZWZiZTBhYmNmMzUzYmQwZTg4YWZmMDE1ZmQ2NTI2M2RkZDBjMzE1ZmMwOTIwYTM3MWYzNTAyZDUyN2UwZp6eYuc=: 00:17:03.143 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:OTc2ZWZiZTBhYmNmMzUzYmQwZTg4YWZmMDE1ZmQ2NTI2M2RkZDBjMzE1ZmMwOTIwYTM3MWYzNTAyZDUyN2UwZp6eYuc=: 00:17:03.721 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.721 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.721 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:03.722 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.722 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.722 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.722 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:03.722 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:03.722 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:03.722 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:03.722 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:03.981 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:17:03.981 11:54:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:03.981 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:03.981 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:03.981 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:03.981 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.981 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.982 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.982 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.982 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.982 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.982 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.982 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.982 00:17:03.982 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:03.982 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.982 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:04.242 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.242 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.242 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.242 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.242 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.242 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:04.242 { 00:17:04.242 "cntlid": 49, 00:17:04.242 "qid": 0, 00:17:04.242 "state": "enabled", 00:17:04.242 "thread": "nvmf_tgt_poll_group_000", 00:17:04.242 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:04.242 "listen_address": { 00:17:04.242 "trtype": "TCP", 00:17:04.242 "adrfam": "IPv4", 
00:17:04.242 "traddr": "10.0.0.2", 00:17:04.242 "trsvcid": "4420" 00:17:04.242 }, 00:17:04.242 "peer_address": { 00:17:04.242 "trtype": "TCP", 00:17:04.242 "adrfam": "IPv4", 00:17:04.242 "traddr": "10.0.0.1", 00:17:04.242 "trsvcid": "42474" 00:17:04.242 }, 00:17:04.242 "auth": { 00:17:04.242 "state": "completed", 00:17:04.242 "digest": "sha384", 00:17:04.242 "dhgroup": "null" 00:17:04.242 } 00:17:04.242 } 00:17:04.242 ]' 00:17:04.242 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:04.242 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:04.242 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:04.502 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:04.502 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:04.502 11:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.502 11:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.502 11:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.762 11:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDc3MDM5NDQ3OTRkODk4NTY2OWRlYWZlYTNmYTU4MTE3NWQ3ZGI1YjJlYmIxZGYxrbSTgw==: --dhchap-ctrl-secret DHHC-1:03:YzA4YjA2M2RlYzJlYWU5ODYxYjYyMjYyOTM2NmIzMmI3NmRlYjQ2NjM1MmIzNDVhOTU4YjgyMjFhMWMxM2IxMQq64Kc=: 00:17:04.762 11:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NDc3MDM5NDQ3OTRkODk4NTY2OWRlYWZlYTNmYTU4MTE3NWQ3ZGI1YjJlYmIxZGYxrbSTgw==: --dhchap-ctrl-secret DHHC-1:03:YzA4YjA2M2RlYzJlYWU5ODYxYjYyMjYyOTM2NmIzMmI3NmRlYjQ2NjM1MmIzNDVhOTU4YjgyMjFhMWMxM2IxMQq64Kc=: 00:17:05.394 11:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.394 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.394 11:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:05.394 11:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.394 11:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.394 11:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.394 11:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:05.394 11:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:05.395 11:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:05.395 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:17:05.395 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:05.395 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:05.395 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:05.395 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:05.395 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.395 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.395 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.395 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.395 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.395 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.395 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.395 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.684 00:17:05.684 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:05.684 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:05.684 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.947 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.947 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.947 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.947 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.947 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.947 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:05.947 { 00:17:05.947 "cntlid": 51, 00:17:05.947 "qid": 0, 00:17:05.947 "state": "enabled", 
00:17:05.947 "thread": "nvmf_tgt_poll_group_000", 00:17:05.947 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:05.947 "listen_address": { 00:17:05.947 "trtype": "TCP", 00:17:05.947 "adrfam": "IPv4", 00:17:05.947 "traddr": "10.0.0.2", 00:17:05.947 "trsvcid": "4420" 00:17:05.947 }, 00:17:05.947 "peer_address": { 00:17:05.947 "trtype": "TCP", 00:17:05.947 "adrfam": "IPv4", 00:17:05.947 "traddr": "10.0.0.1", 00:17:05.947 "trsvcid": "42512" 00:17:05.947 }, 00:17:05.947 "auth": { 00:17:05.947 "state": "completed", 00:17:05.947 "digest": "sha384", 00:17:05.947 "dhgroup": "null" 00:17:05.947 } 00:17:05.947 } 00:17:05.947 ]' 00:17:05.947 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:05.947 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:05.947 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:05.947 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:05.947 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:05.947 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.947 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.947 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.208 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjc4OGFjOGYzYjM2NDhkZjBiZDMxY2YwNDVlYTRjODaFZEms: --dhchap-ctrl-secret DHHC-1:02:ODI2N2U3MmU4YWI3ZGQyOGE3NzA5ZTc0ZjIzYTY0NGMwMWJhMTdmYTUyZjBhOGJk+BkRmQ==: 00:17:06.208 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:Mjc4OGFjOGYzYjM2NDhkZjBiZDMxY2YwNDVlYTRjODaFZEms: --dhchap-ctrl-secret DHHC-1:02:ODI2N2U3MmU4YWI3ZGQyOGE3NzA5ZTc0ZjIzYTY0NGMwMWJhMTdmYTUyZjBhOGJk+BkRmQ==: 00:17:06.779 11:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.779 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.779 11:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:06.779 11:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.779 11:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.779 11:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.779 11:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:06.779 11:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:17:06.779 11:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:07.039 11:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:17:07.039 11:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:07.039 11:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:07.039 11:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:07.039 11:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:07.039 11:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.039 11:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.039 11:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.039 11:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.039 11:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.039 11:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.039 11:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.039 11:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.299 00:17:07.299 11:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:07.300 11:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:07.300 11:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.560 11:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.560 11:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.560 11:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.560 11:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.560 11:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.560 11:54:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.560 { 00:17:07.560 "cntlid": 53, 00:17:07.560 "qid": 0, 00:17:07.560 "state": "enabled", 00:17:07.560 "thread": "nvmf_tgt_poll_group_000", 00:17:07.560 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:07.560 "listen_address": { 00:17:07.560 "trtype": "TCP", 00:17:07.560 "adrfam": "IPv4", 00:17:07.560 "traddr": "10.0.0.2", 00:17:07.560 "trsvcid": "4420" 00:17:07.560 }, 00:17:07.560 "peer_address": { 00:17:07.560 "trtype": "TCP", 00:17:07.560 "adrfam": "IPv4", 00:17:07.560 "traddr": "10.0.0.1", 00:17:07.560 "trsvcid": "42536" 00:17:07.560 }, 00:17:07.560 "auth": { 00:17:07.560 "state": "completed", 00:17:07.560 "digest": "sha384", 00:17:07.560 "dhgroup": "null" 00:17:07.560 } 00:17:07.560 } 00:17:07.560 ]' 00:17:07.560 11:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.560 11:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:07.560 11:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:07.560 11:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:07.560 11:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.560 11:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.560 11:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.560 11:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.820 11:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGYzYjhhMzVlYzQ5MmNkN2M3MjA5ODY2NTE3NzEzY2NiMjUzZWJhYmZiZDRkY2Q1YfWpxQ==: --dhchap-ctrl-secret DHHC-1:01:MTMzNjc2NzRmMTQyMTZmNzNjOTc5OGYxNDZiYjBiNGFFHSyW: 00:17:07.820 11:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NGYzYjhhMzVlYzQ5MmNkN2M3MjA5ODY2NTE3NzEzY2NiMjUzZWJhYmZiZDRkY2Q1YfWpxQ==: --dhchap-ctrl-secret DHHC-1:01:MTMzNjc2NzRmMTQyMTZmNzNjOTc5OGYxNDZiYjBiNGFFHSyW: 00:17:08.391 11:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.652 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.652 11:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:08.652 11:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.652 11:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.652 11:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.652 11:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:17:08.652 11:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:08.652 11:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:08.652 11:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:17:08.652 11:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:08.652 11:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:08.652 11:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:08.652 11:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:08.652 11:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.652 11:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:08.652 11:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.652 11:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.652 11:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.652 11:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:08.652 11:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:08.652 11:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:08.912 00:17:08.912 11:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:08.912 11:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:08.912 11:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.172 11:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.172 11:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.172 11:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.172 11:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.172 11:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.172 11:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:09.172 { 00:17:09.172 "cntlid": 55, 00:17:09.172 "qid": 0, 00:17:09.172 "state": "enabled", 00:17:09.172 "thread": "nvmf_tgt_poll_group_000", 00:17:09.172 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:09.172 "listen_address": { 00:17:09.172 "trtype": "TCP", 00:17:09.172 "adrfam": "IPv4", 00:17:09.172 "traddr": "10.0.0.2", 00:17:09.172 "trsvcid": "4420" 00:17:09.172 }, 00:17:09.172 "peer_address": { 00:17:09.172 "trtype": "TCP", 00:17:09.172 "adrfam": "IPv4", 00:17:09.172 "traddr": "10.0.0.1", 00:17:09.172 "trsvcid": "57934" 00:17:09.172 }, 00:17:09.172 "auth": { 00:17:09.172 "state": "completed", 00:17:09.172 "digest": "sha384", 00:17:09.172 "dhgroup": "null" 00:17:09.172 } 00:17:09.172 } 00:17:09.172 ]' 00:17:09.172 11:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:09.172 11:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:09.172 11:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:09.172 11:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:09.172 11:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:09.173 11:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.173 11:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.173 11:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.433 11:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTc2ZWZiZTBhYmNmMzUzYmQwZTg4YWZmMDE1ZmQ2NTI2M2RkZDBjMzE1ZmMwOTIwYTM3MWYzNTAyZDUyN2UwZp6eYuc=: 00:17:09.433 11:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:OTc2ZWZiZTBhYmNmMzUzYmQwZTg4YWZmMDE1ZmQ2NTI2M2RkZDBjMzE1ZmMwOTIwYTM3MWYzNTAyZDUyN2UwZp6eYuc=: 00:17:10.003 11:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.003 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.003 11:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:10.003 11:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.003 11:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.264 11:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.264 11:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:10.264 11:54:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:10.264 11:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:10.264 11:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:10.264 11:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:17:10.264 11:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:10.264 11:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:10.264 11:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:10.264 11:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:10.264 11:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.264 11:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.264 11:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.264 11:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.264 11:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.264 11:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.264 11:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.264 11:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.524 00:17:10.524 11:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:10.524 11:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:10.524 11:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.785 11:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.785 11:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.785 11:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:10.785 11:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.785 11:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.785 11:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:10.785 { 00:17:10.785 "cntlid": 57, 00:17:10.785 "qid": 0, 00:17:10.785 "state": "enabled", 00:17:10.785 "thread": "nvmf_tgt_poll_group_000", 00:17:10.785 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:10.785 "listen_address": { 00:17:10.785 "trtype": "TCP", 00:17:10.785 "adrfam": "IPv4", 00:17:10.785 "traddr": "10.0.0.2", 00:17:10.785 "trsvcid": "4420" 00:17:10.785 }, 00:17:10.785 "peer_address": { 00:17:10.785 "trtype": "TCP", 00:17:10.785 "adrfam": "IPv4", 00:17:10.785 "traddr": "10.0.0.1", 00:17:10.785 "trsvcid": "57960" 00:17:10.785 }, 00:17:10.785 "auth": { 00:17:10.785 "state": "completed", 00:17:10.785 "digest": "sha384", 00:17:10.785 "dhgroup": "ffdhe2048" 00:17:10.785 } 00:17:10.785 } 00:17:10.785 ]' 00:17:10.785 11:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:10.785 11:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:10.785 11:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:10.785 11:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:10.785 11:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:10.785 11:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.785 11:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.785 11:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.045 11:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDc3MDM5NDQ3OTRkODk4NTY2OWRlYWZlYTNmYTU4MTE3NWQ3ZGI1YjJlYmIxZGYxrbSTgw==: --dhchap-ctrl-secret DHHC-1:03:YzA4YjA2M2RlYzJlYWU5ODYxYjYyMjYyOTM2NmIzMmI3NmRlYjQ2NjM1MmIzNDVhOTU4YjgyMjFhMWMxM2IxMQq64Kc=: 00:17:11.045 11:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NDc3MDM5NDQ3OTRkODk4NTY2OWRlYWZlYTNmYTU4MTE3NWQ3ZGI1YjJlYmIxZGYxrbSTgw==: --dhchap-ctrl-secret DHHC-1:03:YzA4YjA2M2RlYzJlYWU5ODYxYjYyMjYyOTM2NmIzMmI3NmRlYjQ2NjM1MmIzNDVhOTU4YjgyMjFhMWMxM2IxMQq64Kc=: 00:17:11.615 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.615 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.615 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:11.615 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.615 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.615 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.615 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:11.615 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:11.615 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:11.876 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:17:11.876 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:11.876 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:11.876 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:11.876 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:11.876 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.876 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.876 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.876 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.876 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.876 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.876 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.876 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.137 00:17:12.137 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:12.137 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:12.137 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.398 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.398 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.398 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.398 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.398 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.398 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:12.398 { 00:17:12.398 "cntlid": 59, 00:17:12.398 "qid": 0, 00:17:12.398 "state": "enabled", 00:17:12.398 "thread": "nvmf_tgt_poll_group_000", 00:17:12.398 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:12.398 "listen_address": { 00:17:12.398 "trtype": "TCP", 00:17:12.398 "adrfam": "IPv4", 00:17:12.398 "traddr": "10.0.0.2", 00:17:12.398 "trsvcid": "4420" 00:17:12.398 }, 00:17:12.398 "peer_address": { 00:17:12.398 "trtype": "TCP", 00:17:12.398 "adrfam": "IPv4", 00:17:12.398 "traddr": "10.0.0.1", 00:17:12.398 "trsvcid": "57990" 00:17:12.398 }, 00:17:12.398 "auth": { 00:17:12.398 "state": "completed", 00:17:12.398 "digest": "sha384", 00:17:12.398 "dhgroup": "ffdhe2048" 00:17:12.398 } 00:17:12.398 } 00:17:12.398 ]' 00:17:12.398 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:12.398 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:12.398 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:12.398 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:12.398 11:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:12.398 11:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.398 11:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.398 11:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.658 11:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjc4OGFjOGYzYjM2NDhkZjBiZDMxY2YwNDVlYTRjODaFZEms: --dhchap-ctrl-secret DHHC-1:02:ODI2N2U3MmU4YWI3ZGQyOGE3NzA5ZTc0ZjIzYTY0NGMwMWJhMTdmYTUyZjBhOGJk+BkRmQ==: 00:17:12.658 11:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:Mjc4OGFjOGYzYjM2NDhkZjBiZDMxY2YwNDVlYTRjODaFZEms: --dhchap-ctrl-secret DHHC-1:02:ODI2N2U3MmU4YWI3ZGQyOGE3NzA5ZTc0ZjIzYTY0NGMwMWJhMTdmYTUyZjBhOGJk+BkRmQ==: 00:17:13.229 11:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.229 11:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:13.229 11:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.229 11:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.229 11:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.229 11:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:13.229 11:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:13.229 11:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:13.489 11:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:17:13.489 11:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:13.489 11:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:13.489 11:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:13.489 11:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:13.489 11:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.489 11:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.489 11:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.489 11:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.489 11:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.489 11:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.489 11:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.489 11:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.750 00:17:13.750 11:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:13.750 11:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:13.750 11:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.010 11:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.010 11:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.010 11:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.010 11:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.010 11:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.010 11:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:14.010 { 00:17:14.010 "cntlid": 61, 00:17:14.010 "qid": 0, 00:17:14.010 "state": "enabled", 00:17:14.010 "thread": "nvmf_tgt_poll_group_000", 00:17:14.010 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:14.010 "listen_address": { 00:17:14.010 "trtype": "TCP", 00:17:14.010 "adrfam": "IPv4", 00:17:14.010 "traddr": "10.0.0.2", 00:17:14.010 "trsvcid": "4420" 00:17:14.010 }, 00:17:14.010 "peer_address": { 00:17:14.010 "trtype": "TCP", 00:17:14.010 "adrfam": "IPv4", 00:17:14.010 "traddr": "10.0.0.1", 00:17:14.010 "trsvcid": "58012" 00:17:14.010 }, 00:17:14.010 "auth": { 00:17:14.010 "state": "completed", 00:17:14.010 "digest": "sha384", 00:17:14.010 "dhgroup": "ffdhe2048" 00:17:14.010 } 00:17:14.010 } 00:17:14.010 ]' 00:17:14.010 11:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:14.010 11:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:14.010 11:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:14.010 11:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:14.010 11:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:14.010 11:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.010 11:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.010 11:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.270 11:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGYzYjhhMzVlYzQ5MmNkN2M3MjA5ODY2NTE3NzEzY2NiMjUzZWJhYmZiZDRkY2Q1YfWpxQ==: --dhchap-ctrl-secret DHHC-1:01:MTMzNjc2NzRmMTQyMTZmNzNjOTc5OGYxNDZiYjBiNGFFHSyW: 00:17:14.270 11:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NGYzYjhhMzVlYzQ5MmNkN2M3MjA5ODY2NTE3NzEzY2NiMjUzZWJhYmZiZDRkY2Q1YfWpxQ==: --dhchap-ctrl-secret DHHC-1:01:MTMzNjc2NzRmMTQyMTZmNzNjOTc5OGYxNDZiYjBiNGFFHSyW: 00:17:14.840 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.840 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.840 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:14.840 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.840 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.840 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.840 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:14.840 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:14.840 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:15.101 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:17:15.101 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:15.101 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:15.101 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:15.101 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:15.101 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.101 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:15.101 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.101 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.101 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.101 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:15.101 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:15.101 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:15.361 00:17:15.361 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:15.361 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:17:15.361 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.622 11:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.622 11:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.622 11:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.622 11:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.622 11:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.622 11:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:15.622 { 00:17:15.622 "cntlid": 63, 00:17:15.622 "qid": 0, 00:17:15.622 "state": "enabled", 00:17:15.622 "thread": "nvmf_tgt_poll_group_000", 00:17:15.622 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:15.622 "listen_address": { 00:17:15.622 "trtype": "TCP", 00:17:15.622 "adrfam": "IPv4", 00:17:15.622 "traddr": "10.0.0.2", 00:17:15.622 "trsvcid": "4420" 00:17:15.622 }, 00:17:15.622 "peer_address": { 00:17:15.622 "trtype": "TCP", 00:17:15.622 "adrfam": "IPv4", 00:17:15.622 "traddr": "10.0.0.1", 00:17:15.622 "trsvcid": "58054" 00:17:15.622 }, 00:17:15.622 "auth": { 00:17:15.622 "state": "completed", 00:17:15.622 "digest": "sha384", 00:17:15.622 "dhgroup": "ffdhe2048" 00:17:15.622 } 00:17:15.622 } 00:17:15.622 ]' 00:17:15.622 11:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:15.622 11:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:15.622 11:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:15.622 11:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:15.622 11:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:15.622 11:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.622 11:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.622 11:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.882 11:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTc2ZWZiZTBhYmNmMzUzYmQwZTg4YWZmMDE1ZmQ2NTI2M2RkZDBjMzE1ZmMwOTIwYTM3MWYzNTAyZDUyN2UwZp6eYuc=: 00:17:15.882 11:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:OTc2ZWZiZTBhYmNmMzUzYmQwZTg4YWZmMDE1ZmQ2NTI2M2RkZDBjMzE1ZmMwOTIwYTM3MWYzNTAyZDUyN2UwZp6eYuc=: 00:17:16.452 11:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:16.452 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.452 11:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:16.452 11:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.452 11:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.452 11:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.452 11:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:16.452 11:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:16.452 11:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:16.452 11:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:16.713 11:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:17:16.713 11:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:16.713 11:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:16.713 11:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:16.713 11:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:16.713 11:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.713 11:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.713 11:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.713 11:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.713 11:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.713 11:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.713 11:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.713 11:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.973 
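The trace above repeats one fixed sequence for every digest/dhgroup/key combination that the loops in target/auth.sh (for dhgroup ... / for keyid ... at auth.sh@119-121) walk through. Below is a minimal sketch of a single iteration as it appears in the log, with the workspace-specific rpc.py path shortened to the SPDK tree, shell variables standing in for the literal NQN values logged above, and the target-side rpc_cmd wrapper shown as a plain rpc.py call against the default RPC socket; those three simplifications are assumptions made for readability, not part of auth.sh itself.

    # host side: restrict DH-HMAC-CHAP negotiation to the digest/dhgroup under test
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
    # target side: authorize the host NQN on the subsystem with the key pair under test
    scripts/rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # host side: attach a controller over TCP, authenticating with the same key pair
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0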
00:17:16.973 11:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:16.973 11:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.973 11:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:17.234 11:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.234 11:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.234 11:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.234 11:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.234 11:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.234 11:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:17.234 { 00:17:17.234 "cntlid": 65, 00:17:17.234 "qid": 0, 00:17:17.234 "state": "enabled", 00:17:17.234 "thread": "nvmf_tgt_poll_group_000", 00:17:17.234 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:17.234 "listen_address": { 00:17:17.234 "trtype": "TCP", 00:17:17.234 "adrfam": "IPv4", 00:17:17.234 "traddr": "10.0.0.2", 00:17:17.234 "trsvcid": "4420" 00:17:17.234 }, 00:17:17.234 "peer_address": { 00:17:17.234 "trtype": "TCP", 00:17:17.234 "adrfam": "IPv4", 00:17:17.234 "traddr": "10.0.0.1", 00:17:17.234 "trsvcid": "58078" 00:17:17.234 }, 00:17:17.234 "auth": { 00:17:17.234 "state": "completed", 00:17:17.234 "digest": "sha384", 00:17:17.234 "dhgroup": "ffdhe3072" 00:17:17.234 } 00:17:17.234 } 00:17:17.234 ]' 00:17:17.234 11:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:17.234 11:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:17.234 11:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:17.234 11:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:17.234 11:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:17.234 11:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.234 11:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.234 11:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.494 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDc3MDM5NDQ3OTRkODk4NTY2OWRlYWZlYTNmYTU4MTE3NWQ3ZGI1YjJlYmIxZGYxrbSTgw==: --dhchap-ctrl-secret DHHC-1:03:YzA4YjA2M2RlYzJlYWU5ODYxYjYyMjYyOTM2NmIzMmI3NmRlYjQ2NjM1MmIzNDVhOTU4YjgyMjFhMWMxM2IxMQq64Kc=: 00:17:17.494 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NDc3MDM5NDQ3OTRkODk4NTY2OWRlYWZlYTNmYTU4MTE3NWQ3ZGI1YjJlYmIxZGYxrbSTgw==: --dhchap-ctrl-secret DHHC-1:03:YzA4YjA2M2RlYzJlYWU5ODYxYjYyMjYyOTM2NmIzMmI3NmRlYjQ2NjM1MmIzNDVhOTU4YjgyMjFhMWMxM2IxMQq64Kc=: 00:17:18.064 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.064 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.064 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:18.064 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.064 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.064 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.064 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:18.064 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:18.064 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:18.324 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:17:18.324 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:18.324 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:18.324 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:18.324 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:18.324 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.324 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.324 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.324 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.324 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.324 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.324 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.325 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.585 00:17:18.585 11:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:18.585 11:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:18.585 11:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.845 11:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.845 11:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.845 11:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.845 11:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.845 11:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.845 11:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.845 { 00:17:18.845 "cntlid": 67, 00:17:18.845 "qid": 0, 00:17:18.845 "state": "enabled", 00:17:18.845 "thread": "nvmf_tgt_poll_group_000", 00:17:18.845 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:18.845 "listen_address": { 00:17:18.845 "trtype": "TCP", 00:17:18.845 "adrfam": "IPv4", 00:17:18.845 "traddr": "10.0.0.2", 00:17:18.845 "trsvcid": "4420" 00:17:18.845 }, 00:17:18.845 "peer_address": { 00:17:18.845 "trtype": "TCP", 00:17:18.845 "adrfam": "IPv4", 00:17:18.845 "traddr": "10.0.0.1", 00:17:18.845 "trsvcid": "55746" 00:17:18.845 }, 00:17:18.845 "auth": { 00:17:18.845 "state": "completed", 00:17:18.845 "digest": "sha384", 00:17:18.845 "dhgroup": "ffdhe3072" 00:17:18.845 } 00:17:18.845 } 00:17:18.845 ]' 00:17:18.845 11:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.845 11:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:18.845 11:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.845 11:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:18.845 11:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.845 11:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.845 11:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.845 11:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.106 11:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjc4OGFjOGYzYjM2NDhkZjBiZDMxY2YwNDVlYTRjODaFZEms: --dhchap-ctrl-secret 
DHHC-1:02:ODI2N2U3MmU4YWI3ZGQyOGE3NzA5ZTc0ZjIzYTY0NGMwMWJhMTdmYTUyZjBhOGJk+BkRmQ==: 00:17:19.106 11:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:Mjc4OGFjOGYzYjM2NDhkZjBiZDMxY2YwNDVlYTRjODaFZEms: --dhchap-ctrl-secret DHHC-1:02:ODI2N2U3MmU4YWI3ZGQyOGE3NzA5ZTc0ZjIzYTY0NGMwMWJhMTdmYTUyZjBhOGJk+BkRmQ==: 00:17:19.676 11:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.676 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.676 11:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:19.676 11:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.676 11:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.676 11:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.676 11:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:19.676 11:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:19.676 11:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:19.937 11:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:17:19.937 11:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:19.937 11:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:19.937 11:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:19.937 11:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:19.937 11:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.937 11:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.937 11:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.937 11:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.937 11:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.937 11:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.937 11:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.937 11:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.197 00:17:20.197 11:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:20.197 11:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:20.197 11:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.458 11:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.458 11:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.458 11:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.458 11:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.458 11:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.458 11:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:20.458 { 00:17:20.458 "cntlid": 69, 00:17:20.458 "qid": 0, 00:17:20.458 "state": "enabled", 00:17:20.458 "thread": "nvmf_tgt_poll_group_000", 00:17:20.458 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:20.458 "listen_address": { 00:17:20.458 "trtype": "TCP", 00:17:20.458 "adrfam": "IPv4", 00:17:20.458 "traddr": "10.0.0.2", 00:17:20.458 "trsvcid": "4420" 00:17:20.458 }, 00:17:20.458 "peer_address": { 00:17:20.458 "trtype": "TCP", 00:17:20.458 "adrfam": "IPv4", 00:17:20.458 "traddr": "10.0.0.1", 00:17:20.458 "trsvcid": "55772" 00:17:20.458 }, 00:17:20.458 "auth": { 00:17:20.458 "state": "completed", 00:17:20.458 "digest": "sha384", 00:17:20.458 "dhgroup": "ffdhe3072" 00:17:20.458 } 00:17:20.458 } 00:17:20.458 ]' 00:17:20.458 11:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:20.458 11:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:20.458 11:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:20.458 11:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:20.458 11:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:20.458 11:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.458 11:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.458 11:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:17:20.719 11:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGYzYjhhMzVlYzQ5MmNkN2M3MjA5ODY2NTE3NzEzY2NiMjUzZWJhYmZiZDRkY2Q1YfWpxQ==: --dhchap-ctrl-secret DHHC-1:01:MTMzNjc2NzRmMTQyMTZmNzNjOTc5OGYxNDZiYjBiNGFFHSyW: 00:17:20.719 11:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NGYzYjhhMzVlYzQ5MmNkN2M3MjA5ODY2NTE3NzEzY2NiMjUzZWJhYmZiZDRkY2Q1YfWpxQ==: --dhchap-ctrl-secret DHHC-1:01:MTMzNjc2NzRmMTQyMTZmNzNjOTc5OGYxNDZiYjBiNGFFHSyW: 00:17:21.289 11:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.289 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.289 11:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:21.289 11:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.289 11:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.289 11:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.289 11:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:21.289 11:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:21.289 11:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:21.548 11:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:17:21.548 11:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:21.548 11:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:21.548 11:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:21.548 11:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:21.548 11:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.548 11:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:21.548 11:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.548 11:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.548 11:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.548 11:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
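Note that for keyid 3 the ${ckeys[3]:+...} expansion traced above comes up empty, so neither nvmf_subsystem_add_host nor the attach receives a --dhchap-ctrlr-key; only the host key is configured for this pass. Every attach is then verified the same way before teardown. A sketch of that check, under the same path and variable assumptions as the earlier sketch:

    # host side: the controller should have come up under the requested name (nvme0)
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
    # target side: the accepted queue pair should report the negotiated auth parameters
    scripts/rpc.py nvmf_subsystem_get_qpairs "$SUBNQN" \
        | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'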
00:17:21.548 11:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:21.548 11:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:21.807 00:17:21.807 11:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:21.807 11:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:21.807 11:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.067 11:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.067 11:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.067 11:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.067 11:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.067 11:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.067 11:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:22.067 { 00:17:22.067 "cntlid": 71, 00:17:22.067 "qid": 0, 00:17:22.067 "state": "enabled", 00:17:22.067 "thread": "nvmf_tgt_poll_group_000", 00:17:22.067 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:22.067 "listen_address": { 00:17:22.067 "trtype": "TCP", 00:17:22.067 "adrfam": "IPv4", 00:17:22.067 "traddr": "10.0.0.2", 00:17:22.067 "trsvcid": "4420" 00:17:22.067 }, 00:17:22.067 "peer_address": { 00:17:22.067 "trtype": "TCP", 00:17:22.067 "adrfam": "IPv4", 00:17:22.067 "traddr": "10.0.0.1", 00:17:22.067 "trsvcid": "55812" 00:17:22.067 }, 00:17:22.067 "auth": { 00:17:22.067 "state": "completed", 00:17:22.067 "digest": "sha384", 00:17:22.067 "dhgroup": "ffdhe3072" 00:17:22.067 } 00:17:22.067 } 00:17:22.067 ]' 00:17:22.067 11:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:22.067 11:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:22.067 11:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:22.067 11:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:22.067 11:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:22.067 11:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.067 11:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.067 11:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.327 11:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTc2ZWZiZTBhYmNmMzUzYmQwZTg4YWZmMDE1ZmQ2NTI2M2RkZDBjMzE1ZmMwOTIwYTM3MWYzNTAyZDUyN2UwZp6eYuc=: 00:17:22.327 11:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:OTc2ZWZiZTBhYmNmMzUzYmQwZTg4YWZmMDE1ZmQ2NTI2M2RkZDBjMzE1ZmMwOTIwYTM3MWYzNTAyZDUyN2UwZp6eYuc=: 00:17:22.895 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.895 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.895 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:22.895 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.895 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.895 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.895 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:22.895 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:22.895 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:22.895 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:23.156 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:17:23.156 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:23.156 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:23.156 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:23.156 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:23.156 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.156 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.156 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.156 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.156 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
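Each pass ends with the same teardown plus an independent kernel-initiator round trip before the next combination is configured (visible immediately above, where the loop advances from ffdhe3072 to ffdhe4096). A sketch of that closing sequence in the order the trace shows it; $KEY and $CKEY stand in for the DHHC-1 secrets printed in the log and $HOSTID for the --hostid value, all illustrative names rather than variables from auth.sh.

    # host side: drop the controller attached through the RPC path
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    # kernel initiator: connect in-band with the same secrets, then disconnect again
    # (--dhchap-ctrl-secret is omitted when no controller key is configured, as in the key3 pass)
    nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" --hostid "$HOSTID" -l 0 \
        --dhchap-secret "$KEY" --dhchap-ctrl-secret "$CKEY"
    nvme disconnect -n "$SUBNQN"
    # target side: de-authorize the host before the next digest/dhgroup/key combination
    scripts/rpc.py nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"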
00:17:23.156 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.156 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.156 11:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.419 00:17:23.419 11:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:23.419 11:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:23.419 11:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.681 11:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.681 11:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.681 11:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.681 11:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.681 11:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.681 11:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:23.681 { 00:17:23.681 "cntlid": 73, 00:17:23.681 "qid": 0, 00:17:23.681 "state": "enabled", 00:17:23.681 "thread": "nvmf_tgt_poll_group_000", 00:17:23.681 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:23.681 "listen_address": { 00:17:23.681 "trtype": "TCP", 00:17:23.681 "adrfam": "IPv4", 00:17:23.681 "traddr": "10.0.0.2", 00:17:23.681 "trsvcid": "4420" 00:17:23.681 }, 00:17:23.681 "peer_address": { 00:17:23.681 "trtype": "TCP", 00:17:23.681 "adrfam": "IPv4", 00:17:23.681 "traddr": "10.0.0.1", 00:17:23.681 "trsvcid": "55850" 00:17:23.681 }, 00:17:23.681 "auth": { 00:17:23.681 "state": "completed", 00:17:23.681 "digest": "sha384", 00:17:23.681 "dhgroup": "ffdhe4096" 00:17:23.681 } 00:17:23.681 } 00:17:23.681 ]' 00:17:23.681 11:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:23.681 11:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:23.681 11:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:23.681 11:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:23.681 11:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:23.942 11:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.942 
11:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.942 11:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.942 11:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDc3MDM5NDQ3OTRkODk4NTY2OWRlYWZlYTNmYTU4MTE3NWQ3ZGI1YjJlYmIxZGYxrbSTgw==: --dhchap-ctrl-secret DHHC-1:03:YzA4YjA2M2RlYzJlYWU5ODYxYjYyMjYyOTM2NmIzMmI3NmRlYjQ2NjM1MmIzNDVhOTU4YjgyMjFhMWMxM2IxMQq64Kc=: 00:17:23.942 11:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NDc3MDM5NDQ3OTRkODk4NTY2OWRlYWZlYTNmYTU4MTE3NWQ3ZGI1YjJlYmIxZGYxrbSTgw==: --dhchap-ctrl-secret DHHC-1:03:YzA4YjA2M2RlYzJlYWU5ODYxYjYyMjYyOTM2NmIzMmI3NmRlYjQ2NjM1MmIzNDVhOTU4YjgyMjFhMWMxM2IxMQq64Kc=: 00:17:24.883 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.883 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.883 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:24.883 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.883 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.883 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.883 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:24.883 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:24.883 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:24.883 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:17:24.883 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:24.883 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:24.883 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:24.883 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:24.883 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.883 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.883 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.883 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.883 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.884 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.884 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.884 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.143 00:17:25.143 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:25.143 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:25.143 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.403 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.403 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.403 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.403 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.403 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.403 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:25.403 { 00:17:25.403 "cntlid": 75, 00:17:25.403 "qid": 0, 00:17:25.403 "state": "enabled", 00:17:25.403 "thread": "nvmf_tgt_poll_group_000", 00:17:25.403 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:25.403 "listen_address": { 00:17:25.403 "trtype": "TCP", 00:17:25.403 "adrfam": "IPv4", 00:17:25.403 "traddr": "10.0.0.2", 00:17:25.403 "trsvcid": "4420" 00:17:25.403 }, 00:17:25.403 "peer_address": { 00:17:25.403 "trtype": "TCP", 00:17:25.403 "adrfam": "IPv4", 00:17:25.403 "traddr": "10.0.0.1", 00:17:25.403 "trsvcid": "55876" 00:17:25.403 }, 00:17:25.403 "auth": { 00:17:25.403 "state": "completed", 00:17:25.403 "digest": "sha384", 00:17:25.403 "dhgroup": "ffdhe4096" 00:17:25.403 } 00:17:25.403 } 00:17:25.403 ]' 00:17:25.403 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:25.403 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:25.403 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:25.403 11:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:17:25.403 11:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:25.403 11:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.403 11:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.403 11:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.664 11:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjc4OGFjOGYzYjM2NDhkZjBiZDMxY2YwNDVlYTRjODaFZEms: --dhchap-ctrl-secret DHHC-1:02:ODI2N2U3MmU4YWI3ZGQyOGE3NzA5ZTc0ZjIzYTY0NGMwMWJhMTdmYTUyZjBhOGJk+BkRmQ==: 00:17:25.664 11:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:Mjc4OGFjOGYzYjM2NDhkZjBiZDMxY2YwNDVlYTRjODaFZEms: --dhchap-ctrl-secret DHHC-1:02:ODI2N2U3MmU4YWI3ZGQyOGE3NzA5ZTc0ZjIzYTY0NGMwMWJhMTdmYTUyZjBhOGJk+BkRmQ==: 00:17:26.235 11:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.235 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.235 11:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:26.235 11:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.235 11:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.495 11:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.495 11:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:26.495 11:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:26.495 11:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:26.495 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:17:26.495 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:26.495 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:26.495 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:26.495 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:26.495 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.495 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.495 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.495 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.495 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.495 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.495 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.495 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.755 00:17:26.755 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:26.755 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:26.755 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.016 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.016 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.016 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.016 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.016 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.016 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:27.016 { 00:17:27.016 "cntlid": 77, 00:17:27.016 "qid": 0, 00:17:27.016 "state": "enabled", 00:17:27.016 "thread": "nvmf_tgt_poll_group_000", 00:17:27.016 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:27.016 "listen_address": { 00:17:27.016 "trtype": "TCP", 00:17:27.016 "adrfam": "IPv4", 00:17:27.016 "traddr": "10.0.0.2", 00:17:27.016 "trsvcid": "4420" 00:17:27.016 }, 00:17:27.016 "peer_address": { 00:17:27.016 "trtype": "TCP", 00:17:27.016 "adrfam": "IPv4", 00:17:27.016 "traddr": "10.0.0.1", 00:17:27.016 "trsvcid": "55908" 00:17:27.016 }, 00:17:27.016 "auth": { 00:17:27.016 "state": "completed", 00:17:27.016 "digest": "sha384", 00:17:27.016 "dhgroup": "ffdhe4096" 00:17:27.016 } 00:17:27.016 } 00:17:27.016 ]' 00:17:27.016 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:27.016 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:27.016 11:54:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:27.016 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:27.016 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:27.277 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.277 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.277 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.277 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGYzYjhhMzVlYzQ5MmNkN2M3MjA5ODY2NTE3NzEzY2NiMjUzZWJhYmZiZDRkY2Q1YfWpxQ==: --dhchap-ctrl-secret DHHC-1:01:MTMzNjc2NzRmMTQyMTZmNzNjOTc5OGYxNDZiYjBiNGFFHSyW: 00:17:27.277 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NGYzYjhhMzVlYzQ5MmNkN2M3MjA5ODY2NTE3NzEzY2NiMjUzZWJhYmZiZDRkY2Q1YfWpxQ==: --dhchap-ctrl-secret DHHC-1:01:MTMzNjc2NzRmMTQyMTZmNzNjOTc5OGYxNDZiYjBiNGFFHSyW: 00:17:27.852 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.112 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:28.112 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.112 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.112 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.112 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:28.112 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:28.112 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:28.112 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:17:28.113 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:28.113 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:28.113 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:28.113 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:28.113 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.113 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:28.113 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.113 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.113 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.113 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:28.113 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:28.113 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:28.373 00:17:28.373 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:28.373 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:28.373 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.633 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.633 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.633 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.633 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.633 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.633 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:28.633 { 00:17:28.633 "cntlid": 79, 00:17:28.633 "qid": 0, 00:17:28.633 "state": "enabled", 00:17:28.633 "thread": "nvmf_tgt_poll_group_000", 00:17:28.633 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:28.633 "listen_address": { 00:17:28.633 "trtype": "TCP", 00:17:28.633 "adrfam": "IPv4", 00:17:28.633 "traddr": "10.0.0.2", 00:17:28.633 "trsvcid": "4420" 00:17:28.633 }, 00:17:28.633 "peer_address": { 00:17:28.633 "trtype": "TCP", 00:17:28.633 "adrfam": "IPv4", 00:17:28.633 "traddr": "10.0.0.1", 00:17:28.633 "trsvcid": "47654" 00:17:28.633 }, 00:17:28.633 "auth": { 00:17:28.633 "state": "completed", 00:17:28.633 "digest": "sha384", 00:17:28.633 "dhgroup": "ffdhe4096" 00:17:28.633 } 00:17:28.633 } 00:17:28.633 ]' 00:17:28.633 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:28.633 11:54:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:28.633 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:28.633 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:28.633 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:28.633 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.633 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.633 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.894 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTc2ZWZiZTBhYmNmMzUzYmQwZTg4YWZmMDE1ZmQ2NTI2M2RkZDBjMzE1ZmMwOTIwYTM3MWYzNTAyZDUyN2UwZp6eYuc=: 00:17:28.894 11:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:OTc2ZWZiZTBhYmNmMzUzYmQwZTg4YWZmMDE1ZmQ2NTI2M2RkZDBjMzE1ZmMwOTIwYTM3MWYzNTAyZDUyN2UwZp6eYuc=: 00:17:29.465 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.726 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.726 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:29.726 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.726 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.726 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.726 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:29.726 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:29.726 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:29.726 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:29.726 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:17:29.726 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:29.726 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:29.726 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:29.726 11:54:32 
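The ffdhe4096 rounds are complete and the same per-key sequence now starts over for ffdhe6144: the host-side RPC restricts the allowed digest and DH group, the target registers the host NQN with the key pair under test, a bdev controller is attached over TCP with the same pair, the resulting qpair must report the negotiated parameters with auth.state "completed", and the controller is detached again. Condensed into the wrapper calls visible in these records (a sketch, not the verbatim target/auth.sh source; rpc_cmd is the suite's target-side RPC wrapper, hostrpc is rpc.py against /var/tmp/host.sock, and $hostnqn stands in for the nqn.2014-08.org.nvmexpress:uuid:00539ede-... host NQN):

# One sha384/ffdhe6144 round with key0/ckey0, as the following records replay it.
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
[[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0   # auth.state must be "completed"
hostrpc bdev_nvme_detach_controller nvme0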
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:29.726 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.726 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.726 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.726 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.726 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.726 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.726 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.726 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.986 00:17:30.247 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:30.247 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.247 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:30.247 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.247 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.247 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.247 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.247 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.247 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:30.247 { 00:17:30.247 "cntlid": 81, 00:17:30.247 "qid": 0, 00:17:30.247 "state": "enabled", 00:17:30.247 "thread": "nvmf_tgt_poll_group_000", 00:17:30.247 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:30.247 "listen_address": { 00:17:30.247 "trtype": "TCP", 00:17:30.247 "adrfam": "IPv4", 00:17:30.247 "traddr": "10.0.0.2", 00:17:30.247 "trsvcid": "4420" 00:17:30.247 }, 00:17:30.247 "peer_address": { 00:17:30.247 "trtype": "TCP", 00:17:30.247 "adrfam": "IPv4", 00:17:30.247 "traddr": "10.0.0.1", 00:17:30.247 "trsvcid": "47672" 00:17:30.247 }, 00:17:30.247 "auth": { 00:17:30.247 "state": "completed", 00:17:30.247 "digest": 
"sha384", 00:17:30.247 "dhgroup": "ffdhe6144" 00:17:30.247 } 00:17:30.247 } 00:17:30.247 ]' 00:17:30.247 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:30.247 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:30.247 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:30.508 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:30.508 11:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:30.508 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.508 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.508 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.508 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDc3MDM5NDQ3OTRkODk4NTY2OWRlYWZlYTNmYTU4MTE3NWQ3ZGI1YjJlYmIxZGYxrbSTgw==: --dhchap-ctrl-secret DHHC-1:03:YzA4YjA2M2RlYzJlYWU5ODYxYjYyMjYyOTM2NmIzMmI3NmRlYjQ2NjM1MmIzNDVhOTU4YjgyMjFhMWMxM2IxMQq64Kc=: 00:17:30.508 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NDc3MDM5NDQ3OTRkODk4NTY2OWRlYWZlYTNmYTU4MTE3NWQ3ZGI1YjJlYmIxZGYxrbSTgw==: --dhchap-ctrl-secret DHHC-1:03:YzA4YjA2M2RlYzJlYWU5ODYxYjYyMjYyOTM2NmIzMmI3NmRlYjQ2NjM1MmIzNDVhOTU4YjgyMjFhMWMxM2IxMQq64Kc=: 00:17:31.451 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.451 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.451 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:31.451 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.451 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.451 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.451 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:31.451 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:31.451 11:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:31.451 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:17:31.451 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:31.451 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:31.451 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:31.451 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:31.451 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.451 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.451 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.451 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.451 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.451 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.451 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.451 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.712 00:17:31.712 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:31.712 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:31.712 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.972 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.972 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.972 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.972 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.972 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.972 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:31.972 { 00:17:31.972 "cntlid": 83, 00:17:31.972 "qid": 0, 00:17:31.972 "state": "enabled", 00:17:31.972 "thread": "nvmf_tgt_poll_group_000", 00:17:31.972 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:31.972 "listen_address": { 00:17:31.972 "trtype": "TCP", 00:17:31.972 "adrfam": "IPv4", 00:17:31.972 "traddr": "10.0.0.2", 00:17:31.972 
"trsvcid": "4420" 00:17:31.972 }, 00:17:31.972 "peer_address": { 00:17:31.972 "trtype": "TCP", 00:17:31.972 "adrfam": "IPv4", 00:17:31.972 "traddr": "10.0.0.1", 00:17:31.972 "trsvcid": "47702" 00:17:31.972 }, 00:17:31.972 "auth": { 00:17:31.972 "state": "completed", 00:17:31.972 "digest": "sha384", 00:17:31.972 "dhgroup": "ffdhe6144" 00:17:31.972 } 00:17:31.972 } 00:17:31.972 ]' 00:17:31.972 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:31.972 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:31.972 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:31.972 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:31.972 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:32.233 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.233 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.233 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.233 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjc4OGFjOGYzYjM2NDhkZjBiZDMxY2YwNDVlYTRjODaFZEms: --dhchap-ctrl-secret DHHC-1:02:ODI2N2U3MmU4YWI3ZGQyOGE3NzA5ZTc0ZjIzYTY0NGMwMWJhMTdmYTUyZjBhOGJk+BkRmQ==: 00:17:32.233 11:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:Mjc4OGFjOGYzYjM2NDhkZjBiZDMxY2YwNDVlYTRjODaFZEms: --dhchap-ctrl-secret DHHC-1:02:ODI2N2U3MmU4YWI3ZGQyOGE3NzA5ZTc0ZjIzYTY0NGMwMWJhMTdmYTUyZjBhOGJk+BkRmQ==: 00:17:33.175 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.175 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.175 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:33.175 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.175 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.175 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.175 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:33.175 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:33.175 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:33.175 
11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:17:33.175 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:33.175 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:33.175 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:33.175 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:33.176 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.176 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.176 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.176 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.176 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.176 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.176 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.176 11:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.435 00:17:33.435 11:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:33.435 11:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:33.435 11:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.695 11:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.695 11:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.695 11:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.695 11:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.695 11:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.695 11:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:33.695 { 00:17:33.695 "cntlid": 85, 00:17:33.695 "qid": 0, 00:17:33.695 "state": "enabled", 00:17:33.695 "thread": "nvmf_tgt_poll_group_000", 00:17:33.695 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:33.695 "listen_address": { 00:17:33.695 "trtype": "TCP", 00:17:33.695 "adrfam": "IPv4", 00:17:33.695 "traddr": "10.0.0.2", 00:17:33.695 "trsvcid": "4420" 00:17:33.695 }, 00:17:33.695 "peer_address": { 00:17:33.695 "trtype": "TCP", 00:17:33.695 "adrfam": "IPv4", 00:17:33.695 "traddr": "10.0.0.1", 00:17:33.695 "trsvcid": "47714" 00:17:33.695 }, 00:17:33.695 "auth": { 00:17:33.695 "state": "completed", 00:17:33.695 "digest": "sha384", 00:17:33.695 "dhgroup": "ffdhe6144" 00:17:33.695 } 00:17:33.695 } 00:17:33.695 ]' 00:17:33.695 11:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:33.695 11:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:33.695 11:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:33.695 11:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:33.695 11:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:33.956 11:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.956 11:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.956 11:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.956 11:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGYzYjhhMzVlYzQ5MmNkN2M3MjA5ODY2NTE3NzEzY2NiMjUzZWJhYmZiZDRkY2Q1YfWpxQ==: --dhchap-ctrl-secret DHHC-1:01:MTMzNjc2NzRmMTQyMTZmNzNjOTc5OGYxNDZiYjBiNGFFHSyW: 00:17:33.956 11:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NGYzYjhhMzVlYzQ5MmNkN2M3MjA5ODY2NTE3NzEzY2NiMjUzZWJhYmZiZDRkY2Q1YfWpxQ==: --dhchap-ctrl-secret DHHC-1:01:MTMzNjc2NzRmMTQyMTZmNzNjOTc5OGYxNDZiYjBiNGFFHSyW: 00:17:34.897 11:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.897 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.897 11:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:34.897 11:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.897 11:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.897 11:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.897 11:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:34.897 11:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:34.897 11:54:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:34.897 11:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:17:34.897 11:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:34.897 11:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:34.897 11:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:34.897 11:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:34.897 11:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.897 11:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:34.897 11:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.897 11:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.897 11:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.897 11:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:34.897 11:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:34.897 11:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:35.156 00:17:35.156 11:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:35.156 11:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:35.156 11:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.416 11:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.416 11:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.416 11:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.416 11:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.416 11:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.416 11:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:35.416 { 00:17:35.416 "cntlid": 87, 
00:17:35.416 "qid": 0, 00:17:35.416 "state": "enabled", 00:17:35.416 "thread": "nvmf_tgt_poll_group_000", 00:17:35.416 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:35.416 "listen_address": { 00:17:35.416 "trtype": "TCP", 00:17:35.416 "adrfam": "IPv4", 00:17:35.416 "traddr": "10.0.0.2", 00:17:35.416 "trsvcid": "4420" 00:17:35.416 }, 00:17:35.416 "peer_address": { 00:17:35.416 "trtype": "TCP", 00:17:35.416 "adrfam": "IPv4", 00:17:35.416 "traddr": "10.0.0.1", 00:17:35.416 "trsvcid": "47746" 00:17:35.416 }, 00:17:35.416 "auth": { 00:17:35.416 "state": "completed", 00:17:35.416 "digest": "sha384", 00:17:35.416 "dhgroup": "ffdhe6144" 00:17:35.416 } 00:17:35.416 } 00:17:35.416 ]' 00:17:35.416 11:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:35.416 11:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:35.416 11:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:35.416 11:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:35.416 11:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:35.676 11:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.676 11:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.676 11:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.676 11:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTc2ZWZiZTBhYmNmMzUzYmQwZTg4YWZmMDE1ZmQ2NTI2M2RkZDBjMzE1ZmMwOTIwYTM3MWYzNTAyZDUyN2UwZp6eYuc=: 00:17:35.676 11:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:OTc2ZWZiZTBhYmNmMzUzYmQwZTg4YWZmMDE1ZmQ2NTI2M2RkZDBjMzE1ZmMwOTIwYTM3MWYzNTAyZDUyN2UwZp6eYuc=: 00:17:36.617 11:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.617 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.617 11:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:36.617 11:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.617 11:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.617 11:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.617 11:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:36.617 11:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:36.617 11:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:36.617 11:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:36.618 11:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:17:36.618 11:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:36.618 11:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:36.618 11:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:36.618 11:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:36.618 11:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.618 11:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.618 11:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.618 11:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.618 11:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.618 11:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.618 11:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.618 11:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.187 00:17:37.187 11:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:37.187 11:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:37.187 11:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.187 11:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.187 11:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.187 11:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.187 11:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.187 11:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.187 11:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:37.187 { 00:17:37.187 "cntlid": 89, 00:17:37.187 "qid": 0, 00:17:37.187 "state": "enabled", 00:17:37.187 "thread": "nvmf_tgt_poll_group_000", 00:17:37.187 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:37.187 "listen_address": { 00:17:37.187 "trtype": "TCP", 00:17:37.187 "adrfam": "IPv4", 00:17:37.187 "traddr": "10.0.0.2", 00:17:37.187 "trsvcid": "4420" 00:17:37.187 }, 00:17:37.187 "peer_address": { 00:17:37.187 "trtype": "TCP", 00:17:37.187 "adrfam": "IPv4", 00:17:37.187 "traddr": "10.0.0.1", 00:17:37.187 "trsvcid": "47768" 00:17:37.187 }, 00:17:37.187 "auth": { 00:17:37.187 "state": "completed", 00:17:37.187 "digest": "sha384", 00:17:37.187 "dhgroup": "ffdhe8192" 00:17:37.187 } 00:17:37.187 } 00:17:37.187 ]' 00:17:37.187 11:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:37.448 11:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:37.448 11:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:37.448 11:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:37.448 11:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:37.448 11:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.448 11:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.448 11:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.708 11:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDc3MDM5NDQ3OTRkODk4NTY2OWRlYWZlYTNmYTU4MTE3NWQ3ZGI1YjJlYmIxZGYxrbSTgw==: --dhchap-ctrl-secret DHHC-1:03:YzA4YjA2M2RlYzJlYWU5ODYxYjYyMjYyOTM2NmIzMmI3NmRlYjQ2NjM1MmIzNDVhOTU4YjgyMjFhMWMxM2IxMQq64Kc=: 00:17:37.708 11:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NDc3MDM5NDQ3OTRkODk4NTY2OWRlYWZlYTNmYTU4MTE3NWQ3ZGI1YjJlYmIxZGYxrbSTgw==: --dhchap-ctrl-secret DHHC-1:03:YzA4YjA2M2RlYzJlYWU5ODYxYjYyMjYyOTM2NmIzMmI3NmRlYjQ2NjM1MmIzNDVhOTU4YjgyMjFhMWMxM2IxMQq64Kc=: 00:17:38.277 11:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.277 11:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:38.277 11:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.277 11:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.277 11:54:40 
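The JSON arrays dumped in these records are the nvmf_subsystem_get_qpairs output for the ffdhe8192 connections; the script keeps the array in $qpairs and asserts on three auth fields per qpair. The checks reduce to a pattern like this (a sketch of the pattern, not the literal script lines):

qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
# Digest and DH group must match what bdev_nvme_set_options allowed for this pass,
# and the DH-HMAC-CHAP exchange must have finished on the target side.
[[ $(echo "$qpairs" | jq -r '.[0].auth.digest')  == sha384    ]]
[[ $(echo "$qpairs" | jq -r '.[0].auth.dhgroup') == ffdhe8192 ]]
[[ $(echo "$qpairs" | jq -r '.[0].auth.state')   == completed ]]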
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.277 11:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:38.277 11:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:38.277 11:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:38.537 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:17:38.537 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:38.537 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:38.537 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:38.537 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:38.537 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.537 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.537 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.537 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.537 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.537 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.538 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.538 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.108 00:17:39.108 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:39.108 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:39.108 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.108 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.108 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:39.108 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.108 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.108 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.108 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:39.108 { 00:17:39.108 "cntlid": 91, 00:17:39.108 "qid": 0, 00:17:39.108 "state": "enabled", 00:17:39.108 "thread": "nvmf_tgt_poll_group_000", 00:17:39.108 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:39.108 "listen_address": { 00:17:39.108 "trtype": "TCP", 00:17:39.108 "adrfam": "IPv4", 00:17:39.108 "traddr": "10.0.0.2", 00:17:39.108 "trsvcid": "4420" 00:17:39.108 }, 00:17:39.108 "peer_address": { 00:17:39.108 "trtype": "TCP", 00:17:39.108 "adrfam": "IPv4", 00:17:39.108 "traddr": "10.0.0.1", 00:17:39.108 "trsvcid": "47788" 00:17:39.108 }, 00:17:39.108 "auth": { 00:17:39.108 "state": "completed", 00:17:39.108 "digest": "sha384", 00:17:39.108 "dhgroup": "ffdhe8192" 00:17:39.108 } 00:17:39.108 } 00:17:39.108 ]' 00:17:39.108 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:39.108 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:39.108 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:39.368 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:39.368 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:39.368 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.368 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.368 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.368 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjc4OGFjOGYzYjM2NDhkZjBiZDMxY2YwNDVlYTRjODaFZEms: --dhchap-ctrl-secret DHHC-1:02:ODI2N2U3MmU4YWI3ZGQyOGE3NzA5ZTc0ZjIzYTY0NGMwMWJhMTdmYTUyZjBhOGJk+BkRmQ==: 00:17:39.368 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:Mjc4OGFjOGYzYjM2NDhkZjBiZDMxY2YwNDVlYTRjODaFZEms: --dhchap-ctrl-secret DHHC-1:02:ODI2N2U3MmU4YWI3ZGQyOGE3NzA5ZTc0ZjIzYTY0NGMwMWJhMTdmYTUyZjBhOGJk+BkRmQ==: 00:17:40.309 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.309 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.309 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:40.309 11:54:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.309 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.309 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.309 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:40.309 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:40.309 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:40.309 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:17:40.309 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:40.309 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:40.309 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:40.309 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:40.309 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.309 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.309 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.309 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.309 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.309 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.309 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.309 11:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.879 00:17:40.879 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:40.879 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:40.879 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.879 11:54:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.879 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.879 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.879 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.139 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.139 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:41.139 { 00:17:41.139 "cntlid": 93, 00:17:41.139 "qid": 0, 00:17:41.139 "state": "enabled", 00:17:41.139 "thread": "nvmf_tgt_poll_group_000", 00:17:41.139 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:41.139 "listen_address": { 00:17:41.139 "trtype": "TCP", 00:17:41.139 "adrfam": "IPv4", 00:17:41.139 "traddr": "10.0.0.2", 00:17:41.139 "trsvcid": "4420" 00:17:41.139 }, 00:17:41.139 "peer_address": { 00:17:41.139 "trtype": "TCP", 00:17:41.139 "adrfam": "IPv4", 00:17:41.139 "traddr": "10.0.0.1", 00:17:41.139 "trsvcid": "47808" 00:17:41.139 }, 00:17:41.139 "auth": { 00:17:41.139 "state": "completed", 00:17:41.139 "digest": "sha384", 00:17:41.139 "dhgroup": "ffdhe8192" 00:17:41.139 } 00:17:41.139 } 00:17:41.139 ]' 00:17:41.139 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:41.139 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:41.139 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:41.139 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:41.139 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:41.139 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.139 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.139 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.399 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGYzYjhhMzVlYzQ5MmNkN2M3MjA5ODY2NTE3NzEzY2NiMjUzZWJhYmZiZDRkY2Q1YfWpxQ==: --dhchap-ctrl-secret DHHC-1:01:MTMzNjc2NzRmMTQyMTZmNzNjOTc5OGYxNDZiYjBiNGFFHSyW: 00:17:41.399 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NGYzYjhhMzVlYzQ5MmNkN2M3MjA5ODY2NTE3NzEzY2NiMjUzZWJhYmZiZDRkY2Q1YfWpxQ==: --dhchap-ctrl-secret DHHC-1:01:MTMzNjc2NzRmMTQyMTZmNzNjOTc5OGYxNDZiYjBiNGFFHSyW: 00:17:41.968 11:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.968 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.968 11:54:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:41.968 11:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.968 11:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.968 11:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.968 11:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:41.968 11:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:41.968 11:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:42.228 11:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:17:42.228 11:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:42.228 11:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:42.228 11:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:42.228 11:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:42.228 11:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.228 11:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:42.228 11:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.228 11:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.228 11:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.228 11:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:42.228 11:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:42.228 11:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:42.799 00:17:42.799 11:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:42.799 11:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:42.799 11:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.799 11:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.799 11:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.799 11:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.799 11:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.799 11:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.799 11:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:42.799 { 00:17:42.799 "cntlid": 95, 00:17:42.799 "qid": 0, 00:17:42.799 "state": "enabled", 00:17:42.799 "thread": "nvmf_tgt_poll_group_000", 00:17:42.799 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:42.799 "listen_address": { 00:17:42.799 "trtype": "TCP", 00:17:42.799 "adrfam": "IPv4", 00:17:42.799 "traddr": "10.0.0.2", 00:17:42.799 "trsvcid": "4420" 00:17:42.799 }, 00:17:42.799 "peer_address": { 00:17:42.799 "trtype": "TCP", 00:17:42.799 "adrfam": "IPv4", 00:17:42.799 "traddr": "10.0.0.1", 00:17:42.799 "trsvcid": "47842" 00:17:42.799 }, 00:17:42.799 "auth": { 00:17:42.799 "state": "completed", 00:17:42.799 "digest": "sha384", 00:17:42.799 "dhgroup": "ffdhe8192" 00:17:42.799 } 00:17:42.799 } 00:17:42.799 ]' 00:17:42.799 11:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:43.066 11:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:43.066 11:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:43.066 11:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:43.066 11:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:43.066 11:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.066 11:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.066 11:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.349 11:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTc2ZWZiZTBhYmNmMzUzYmQwZTg4YWZmMDE1ZmQ2NTI2M2RkZDBjMzE1ZmMwOTIwYTM3MWYzNTAyZDUyN2UwZp6eYuc=: 00:17:43.349 11:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:OTc2ZWZiZTBhYmNmMzUzYmQwZTg4YWZmMDE1ZmQ2NTI2M2RkZDBjMzE1ZmMwOTIwYTM3MWYzNTAyZDUyN2UwZp6eYuc=: 00:17:43.972 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.972 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.972 11:54:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:43.972 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.972 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.972 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.972 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:43.972 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:43.972 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:43.972 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:43.973 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:43.973 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:17:43.973 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:43.973 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:43.973 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:43.973 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:43.973 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.973 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.973 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.973 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.973 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.973 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.973 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.973 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.233 00:17:44.233 
11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:44.233 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:44.233 11:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.493 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.493 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.493 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.493 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.493 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.493 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:44.493 { 00:17:44.493 "cntlid": 97, 00:17:44.493 "qid": 0, 00:17:44.493 "state": "enabled", 00:17:44.493 "thread": "nvmf_tgt_poll_group_000", 00:17:44.493 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:44.493 "listen_address": { 00:17:44.493 "trtype": "TCP", 00:17:44.493 "adrfam": "IPv4", 00:17:44.493 "traddr": "10.0.0.2", 00:17:44.493 "trsvcid": "4420" 00:17:44.493 }, 00:17:44.493 "peer_address": { 00:17:44.493 "trtype": "TCP", 00:17:44.493 "adrfam": "IPv4", 00:17:44.493 "traddr": "10.0.0.1", 00:17:44.493 "trsvcid": "47886" 00:17:44.493 }, 00:17:44.493 "auth": { 00:17:44.493 "state": "completed", 00:17:44.493 "digest": "sha512", 00:17:44.493 "dhgroup": "null" 00:17:44.493 } 00:17:44.493 } 00:17:44.493 ]' 00:17:44.493 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:44.493 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:44.493 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:44.493 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:44.493 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:44.754 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.754 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.754 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.754 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDc3MDM5NDQ3OTRkODk4NTY2OWRlYWZlYTNmYTU4MTE3NWQ3ZGI1YjJlYmIxZGYxrbSTgw==: --dhchap-ctrl-secret DHHC-1:03:YzA4YjA2M2RlYzJlYWU5ODYxYjYyMjYyOTM2NmIzMmI3NmRlYjQ2NjM1MmIzNDVhOTU4YjgyMjFhMWMxM2IxMQq64Kc=: 00:17:44.754 11:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NDc3MDM5NDQ3OTRkODk4NTY2OWRlYWZlYTNmYTU4MTE3NWQ3ZGI1YjJlYmIxZGYxrbSTgw==: --dhchap-ctrl-secret DHHC-1:03:YzA4YjA2M2RlYzJlYWU5ODYxYjYyMjYyOTM2NmIzMmI3NmRlYjQ2NjM1MmIzNDVhOTU4YjgyMjFhMWMxM2IxMQq64Kc=: 00:17:45.324 11:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.585 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.585 11:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:45.585 11:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.585 11:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.585 11:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.585 11:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:45.585 11:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:45.585 11:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:45.585 11:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:17:45.585 11:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:45.585 11:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:45.585 11:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:45.585 11:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:45.585 11:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.585 11:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.585 11:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.585 11:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.585 11:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.585 11:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.585 11:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.585 11:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.845 00:17:45.845 11:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:45.845 11:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:45.845 11:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.105 11:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.105 11:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.105 11:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.105 11:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.105 11:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.105 11:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:46.105 { 00:17:46.105 "cntlid": 99, 00:17:46.105 "qid": 0, 00:17:46.105 "state": "enabled", 00:17:46.105 "thread": "nvmf_tgt_poll_group_000", 00:17:46.105 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:46.105 "listen_address": { 00:17:46.105 "trtype": "TCP", 00:17:46.105 "adrfam": "IPv4", 00:17:46.105 "traddr": "10.0.0.2", 00:17:46.105 "trsvcid": "4420" 00:17:46.105 }, 00:17:46.105 "peer_address": { 00:17:46.105 "trtype": "TCP", 00:17:46.105 "adrfam": "IPv4", 00:17:46.105 "traddr": "10.0.0.1", 00:17:46.105 "trsvcid": "47920" 00:17:46.105 }, 00:17:46.105 "auth": { 00:17:46.105 "state": "completed", 00:17:46.105 "digest": "sha512", 00:17:46.105 "dhgroup": "null" 00:17:46.105 } 00:17:46.105 } 00:17:46.105 ]' 00:17:46.105 11:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:46.105 11:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:46.105 11:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:46.105 11:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:46.105 11:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:46.366 11:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.366 11:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.366 11:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.366 11:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjc4OGFjOGYzYjM2NDhkZjBiZDMxY2YwNDVlYTRjODaFZEms: --dhchap-ctrl-secret DHHC-1:02:ODI2N2U3MmU4YWI3ZGQyOGE3NzA5ZTc0ZjIzYTY0NGMwMWJhMTdmYTUyZjBhOGJk+BkRmQ==: 00:17:46.366 11:54:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:Mjc4OGFjOGYzYjM2NDhkZjBiZDMxY2YwNDVlYTRjODaFZEms: --dhchap-ctrl-secret DHHC-1:02:ODI2N2U3MmU4YWI3ZGQyOGE3NzA5ZTc0ZjIzYTY0NGMwMWJhMTdmYTUyZjBhOGJk+BkRmQ==: 00:17:47.307 11:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.307 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.307 11:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:47.307 11:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.307 11:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.307 11:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.307 11:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:47.307 11:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:47.307 11:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:47.307 11:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:17:47.307 11:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:47.307 11:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:47.307 11:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:47.307 11:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:47.307 11:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.307 11:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.307 11:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.307 11:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.307 11:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.307 11:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.307 11:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
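For anyone tracing the xtrace above: every digest/dhgroup/key combination in this run is exercised by the same short setup sequence. A minimal sketch of that setup half, using only commands that appear verbatim in this log (key2/ckey2 name keys the script registered earlier, outside this excerpt; the target-side nvmf_subsystem_add_host call goes through the suite's rpc_cmd wrapper, whose rpc.py expansion is hidden here by xtrace_disable, so the socket it targets is an assumption):

  # host side: restrict the initiator bdev layer to the digest/dhgroup under test
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
  # target side (via rpc_cmd): authorize the host NQN on cnode0 with the same key pair
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # host side: attach a controller, forcing DH-HMAC-CHAP with that key pair
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2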
00:17:47.307 11:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.569 00:17:47.569 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:47.569 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:47.569 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.829 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.829 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.829 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.829 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.829 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.829 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:47.829 { 00:17:47.829 "cntlid": 101, 00:17:47.829 "qid": 0, 00:17:47.829 "state": "enabled", 00:17:47.829 "thread": "nvmf_tgt_poll_group_000", 00:17:47.829 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:47.829 "listen_address": { 00:17:47.829 "trtype": "TCP", 00:17:47.829 "adrfam": "IPv4", 00:17:47.829 "traddr": "10.0.0.2", 00:17:47.829 "trsvcid": "4420" 00:17:47.829 }, 00:17:47.829 "peer_address": { 00:17:47.829 "trtype": "TCP", 00:17:47.829 "adrfam": "IPv4", 00:17:47.829 "traddr": "10.0.0.1", 00:17:47.829 "trsvcid": "47934" 00:17:47.829 }, 00:17:47.829 "auth": { 00:17:47.829 "state": "completed", 00:17:47.829 "digest": "sha512", 00:17:47.829 "dhgroup": "null" 00:17:47.829 } 00:17:47.829 } 00:17:47.829 ]' 00:17:47.829 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:47.829 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:47.829 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:47.829 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:47.829 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:47.829 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.829 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.829 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.090 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:NGYzYjhhMzVlYzQ5MmNkN2M3MjA5ODY2NTE3NzEzY2NiMjUzZWJhYmZiZDRkY2Q1YfWpxQ==: --dhchap-ctrl-secret DHHC-1:01:MTMzNjc2NzRmMTQyMTZmNzNjOTc5OGYxNDZiYjBiNGFFHSyW: 00:17:48.090 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NGYzYjhhMzVlYzQ5MmNkN2M3MjA5ODY2NTE3NzEzY2NiMjUzZWJhYmZiZDRkY2Q1YfWpxQ==: --dhchap-ctrl-secret DHHC-1:01:MTMzNjc2NzRmMTQyMTZmNzNjOTc5OGYxNDZiYjBiNGFFHSyW: 00:17:48.660 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.660 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.660 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:48.660 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.660 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.660 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.660 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:48.660 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:48.660 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:48.920 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:17:48.920 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:48.920 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:48.920 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:48.920 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:48.920 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.920 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:48.920 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.920 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.920 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.920 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:48.920 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:48.920 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:49.181 00:17:49.181 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:49.181 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:49.181 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.442 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.442 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.442 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.442 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.442 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.442 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:49.442 { 00:17:49.442 "cntlid": 103, 00:17:49.442 "qid": 0, 00:17:49.442 "state": "enabled", 00:17:49.442 "thread": "nvmf_tgt_poll_group_000", 00:17:49.442 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:49.442 "listen_address": { 00:17:49.442 "trtype": "TCP", 00:17:49.442 "adrfam": "IPv4", 00:17:49.442 "traddr": "10.0.0.2", 00:17:49.442 "trsvcid": "4420" 00:17:49.442 }, 00:17:49.442 "peer_address": { 00:17:49.442 "trtype": "TCP", 00:17:49.442 "adrfam": "IPv4", 00:17:49.442 "traddr": "10.0.0.1", 00:17:49.442 "trsvcid": "45368" 00:17:49.442 }, 00:17:49.442 "auth": { 00:17:49.442 "state": "completed", 00:17:49.442 "digest": "sha512", 00:17:49.442 "dhgroup": "null" 00:17:49.442 } 00:17:49.442 } 00:17:49.442 ]' 00:17:49.442 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:49.442 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:49.442 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:49.442 11:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:49.442 11:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:49.442 11:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.442 11:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.442 11:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.702 11:54:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTc2ZWZiZTBhYmNmMzUzYmQwZTg4YWZmMDE1ZmQ2NTI2M2RkZDBjMzE1ZmMwOTIwYTM3MWYzNTAyZDUyN2UwZp6eYuc=: 00:17:49.702 11:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:OTc2ZWZiZTBhYmNmMzUzYmQwZTg4YWZmMDE1ZmQ2NTI2M2RkZDBjMzE1ZmMwOTIwYTM3MWYzNTAyZDUyN2UwZp6eYuc=: 00:17:50.273 11:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.273 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.273 11:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:50.273 11:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.273 11:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.273 11:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.273 11:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:50.273 11:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:50.273 11:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:50.273 11:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:50.533 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:17:50.533 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:50.533 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:50.533 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:50.533 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:50.533 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.534 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.534 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.534 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.534 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.534 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
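Each attach in this log is then checked and torn down the same way. A condensed sketch of that verification/teardown half, again built only from commands visible in the surrounding output (the qpair query is issued through the rpc_cmd wrapper against the target, so the exact RPC socket is not shown in this excerpt; the three jq checks from the log are folded into one filter here):

  # host side: the authenticated controller should exist and be named nvme0
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0
  # target side (via rpc_cmd): the accepted qpair should report the negotiated auth parameters
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'
  # host side: detach again before the next digest/dhgroup/key combination
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_detach_controller nvme0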
00:17:50.534 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.534 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.794 00:17:50.794 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:50.794 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:50.794 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.054 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.054 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.054 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.054 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.054 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.054 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:51.054 { 00:17:51.054 "cntlid": 105, 00:17:51.054 "qid": 0, 00:17:51.054 "state": "enabled", 00:17:51.054 "thread": "nvmf_tgt_poll_group_000", 00:17:51.054 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:51.054 "listen_address": { 00:17:51.054 "trtype": "TCP", 00:17:51.054 "adrfam": "IPv4", 00:17:51.054 "traddr": "10.0.0.2", 00:17:51.054 "trsvcid": "4420" 00:17:51.054 }, 00:17:51.054 "peer_address": { 00:17:51.054 "trtype": "TCP", 00:17:51.054 "adrfam": "IPv4", 00:17:51.054 "traddr": "10.0.0.1", 00:17:51.054 "trsvcid": "45386" 00:17:51.054 }, 00:17:51.054 "auth": { 00:17:51.054 "state": "completed", 00:17:51.054 "digest": "sha512", 00:17:51.054 "dhgroup": "ffdhe2048" 00:17:51.054 } 00:17:51.054 } 00:17:51.054 ]' 00:17:51.054 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:51.054 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:51.054 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:51.054 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:51.054 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:51.054 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.054 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.054 11:54:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.314 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDc3MDM5NDQ3OTRkODk4NTY2OWRlYWZlYTNmYTU4MTE3NWQ3ZGI1YjJlYmIxZGYxrbSTgw==: --dhchap-ctrl-secret DHHC-1:03:YzA4YjA2M2RlYzJlYWU5ODYxYjYyMjYyOTM2NmIzMmI3NmRlYjQ2NjM1MmIzNDVhOTU4YjgyMjFhMWMxM2IxMQq64Kc=: 00:17:51.314 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NDc3MDM5NDQ3OTRkODk4NTY2OWRlYWZlYTNmYTU4MTE3NWQ3ZGI1YjJlYmIxZGYxrbSTgw==: --dhchap-ctrl-secret DHHC-1:03:YzA4YjA2M2RlYzJlYWU5ODYxYjYyMjYyOTM2NmIzMmI3NmRlYjQ2NjM1MmIzNDVhOTU4YjgyMjFhMWMxM2IxMQq64Kc=: 00:17:51.884 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.884 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.884 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:51.884 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.884 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.884 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.884 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:51.884 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:51.884 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:52.145 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:17:52.145 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:52.145 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:52.145 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:52.145 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:52.145 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.145 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.145 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.145 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:52.145 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.145 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.145 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.145 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.406 00:17:52.406 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:52.406 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.406 11:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:52.406 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.406 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.406 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.406 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.406 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.406 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:52.406 { 00:17:52.406 "cntlid": 107, 00:17:52.406 "qid": 0, 00:17:52.406 "state": "enabled", 00:17:52.406 "thread": "nvmf_tgt_poll_group_000", 00:17:52.406 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:52.406 "listen_address": { 00:17:52.406 "trtype": "TCP", 00:17:52.406 "adrfam": "IPv4", 00:17:52.406 "traddr": "10.0.0.2", 00:17:52.406 "trsvcid": "4420" 00:17:52.406 }, 00:17:52.406 "peer_address": { 00:17:52.406 "trtype": "TCP", 00:17:52.406 "adrfam": "IPv4", 00:17:52.406 "traddr": "10.0.0.1", 00:17:52.406 "trsvcid": "45412" 00:17:52.406 }, 00:17:52.406 "auth": { 00:17:52.406 "state": "completed", 00:17:52.406 "digest": "sha512", 00:17:52.406 "dhgroup": "ffdhe2048" 00:17:52.406 } 00:17:52.406 } 00:17:52.406 ]' 00:17:52.406 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:52.668 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:52.668 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:52.668 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:52.668 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:17:52.668 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.668 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.668 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.929 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjc4OGFjOGYzYjM2NDhkZjBiZDMxY2YwNDVlYTRjODaFZEms: --dhchap-ctrl-secret DHHC-1:02:ODI2N2U3MmU4YWI3ZGQyOGE3NzA5ZTc0ZjIzYTY0NGMwMWJhMTdmYTUyZjBhOGJk+BkRmQ==: 00:17:52.929 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:Mjc4OGFjOGYzYjM2NDhkZjBiZDMxY2YwNDVlYTRjODaFZEms: --dhchap-ctrl-secret DHHC-1:02:ODI2N2U3MmU4YWI3ZGQyOGE3NzA5ZTc0ZjIzYTY0NGMwMWJhMTdmYTUyZjBhOGJk+BkRmQ==: 00:17:53.500 11:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.500 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.500 11:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:53.500 11:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.500 11:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.500 11:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.500 11:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:53.501 11:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:53.501 11:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:53.761 11:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:17:53.761 11:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:53.761 11:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:53.761 11:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:53.761 11:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:53.761 11:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.761 11:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
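The other leg of each iteration is the kernel initiator: nvme connect with the DHHC-1 secrets for the key index under test, then nvme disconnect and nvmf_subsystem_remove_host before the next key. Sketch with the secrets abbreviated to placeholders (the full DHHC-1 strings appear verbatim in the log lines above; remove_host goes through rpc_cmd as before):

  # kernel host: connect to cnode0, authenticating with the bidirectional secrets
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
      --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 \
      --dhchap-secret 'DHHC-1:01:<host secret>' --dhchap-ctrl-secret 'DHHC-1:02:<ctrl secret>'
  # drop the kernel connection and de-authorize the host again
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396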
00:17:53.761 11:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.761 11:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.761 11:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.761 11:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.761 11:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.761 11:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.021 00:17:54.021 11:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:54.021 11:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:54.021 11:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.021 11:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.021 11:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.021 11:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.021 11:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.021 11:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.021 11:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:54.021 { 00:17:54.021 "cntlid": 109, 00:17:54.021 "qid": 0, 00:17:54.021 "state": "enabled", 00:17:54.021 "thread": "nvmf_tgt_poll_group_000", 00:17:54.021 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:54.021 "listen_address": { 00:17:54.021 "trtype": "TCP", 00:17:54.021 "adrfam": "IPv4", 00:17:54.021 "traddr": "10.0.0.2", 00:17:54.021 "trsvcid": "4420" 00:17:54.021 }, 00:17:54.021 "peer_address": { 00:17:54.021 "trtype": "TCP", 00:17:54.021 "adrfam": "IPv4", 00:17:54.021 "traddr": "10.0.0.1", 00:17:54.021 "trsvcid": "45444" 00:17:54.021 }, 00:17:54.021 "auth": { 00:17:54.021 "state": "completed", 00:17:54.021 "digest": "sha512", 00:17:54.021 "dhgroup": "ffdhe2048" 00:17:54.021 } 00:17:54.021 } 00:17:54.021 ]' 00:17:54.021 11:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:54.281 11:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:54.281 11:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:54.282 11:54:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:54.282 11:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:54.282 11:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.282 11:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.282 11:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.542 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGYzYjhhMzVlYzQ5MmNkN2M3MjA5ODY2NTE3NzEzY2NiMjUzZWJhYmZiZDRkY2Q1YfWpxQ==: --dhchap-ctrl-secret DHHC-1:01:MTMzNjc2NzRmMTQyMTZmNzNjOTc5OGYxNDZiYjBiNGFFHSyW: 00:17:54.542 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NGYzYjhhMzVlYzQ5MmNkN2M3MjA5ODY2NTE3NzEzY2NiMjUzZWJhYmZiZDRkY2Q1YfWpxQ==: --dhchap-ctrl-secret DHHC-1:01:MTMzNjc2NzRmMTQyMTZmNzNjOTc5OGYxNDZiYjBiNGFFHSyW: 00:17:55.114 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.114 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:55.114 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.114 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.114 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.114 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:55.114 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:55.114 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:55.374 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:17:55.374 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:55.374 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:55.374 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:55.374 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:55.374 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.374 11:54:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:55.374 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.374 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.374 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.374 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:55.374 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:55.374 11:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:55.374 00:17:55.635 11:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:55.635 11:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:55.635 11:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.635 11:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.635 11:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.635 11:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.635 11:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.635 11:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.635 11:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:55.635 { 00:17:55.635 "cntlid": 111, 00:17:55.635 "qid": 0, 00:17:55.635 "state": "enabled", 00:17:55.635 "thread": "nvmf_tgt_poll_group_000", 00:17:55.635 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:55.635 "listen_address": { 00:17:55.635 "trtype": "TCP", 00:17:55.635 "adrfam": "IPv4", 00:17:55.635 "traddr": "10.0.0.2", 00:17:55.635 "trsvcid": "4420" 00:17:55.635 }, 00:17:55.635 "peer_address": { 00:17:55.635 "trtype": "TCP", 00:17:55.635 "adrfam": "IPv4", 00:17:55.635 "traddr": "10.0.0.1", 00:17:55.635 "trsvcid": "45470" 00:17:55.635 }, 00:17:55.635 "auth": { 00:17:55.635 "state": "completed", 00:17:55.635 "digest": "sha512", 00:17:55.635 "dhgroup": "ffdhe2048" 00:17:55.635 } 00:17:55.635 } 00:17:55.635 ]' 00:17:55.635 11:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:55.635 11:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:55.635 
11:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:55.896 11:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:55.896 11:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:55.896 11:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.896 11:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.896 11:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.156 11:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTc2ZWZiZTBhYmNmMzUzYmQwZTg4YWZmMDE1ZmQ2NTI2M2RkZDBjMzE1ZmMwOTIwYTM3MWYzNTAyZDUyN2UwZp6eYuc=: 00:17:56.156 11:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:OTc2ZWZiZTBhYmNmMzUzYmQwZTg4YWZmMDE1ZmQ2NTI2M2RkZDBjMzE1ZmMwOTIwYTM3MWYzNTAyZDUyN2UwZp6eYuc=: 00:17:56.727 11:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.727 11:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:56.727 11:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.727 11:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.727 11:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.727 11:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:56.727 11:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:56.727 11:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:56.727 11:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:56.988 11:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:17:56.989 11:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:56.989 11:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:56.989 11:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:56.989 11:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:56.989 11:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.989 11:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.989 11:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.989 11:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.989 11:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.989 11:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.989 11:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.989 11:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.250 00:17:57.250 11:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:57.250 11:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:57.250 11:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.511 11:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.511 11:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.511 11:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.511 11:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.511 11:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.511 11:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:57.511 { 00:17:57.511 "cntlid": 113, 00:17:57.511 "qid": 0, 00:17:57.511 "state": "enabled", 00:17:57.511 "thread": "nvmf_tgt_poll_group_000", 00:17:57.511 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:57.511 "listen_address": { 00:17:57.511 "trtype": "TCP", 00:17:57.511 "adrfam": "IPv4", 00:17:57.511 "traddr": "10.0.0.2", 00:17:57.511 "trsvcid": "4420" 00:17:57.511 }, 00:17:57.511 "peer_address": { 00:17:57.511 "trtype": "TCP", 00:17:57.511 "adrfam": "IPv4", 00:17:57.511 "traddr": "10.0.0.1", 00:17:57.511 "trsvcid": "45494" 00:17:57.511 }, 00:17:57.511 "auth": { 00:17:57.511 "state": "completed", 00:17:57.511 "digest": "sha512", 00:17:57.511 "dhgroup": "ffdhe3072" 00:17:57.511 } 00:17:57.511 } 00:17:57.511 ]' 00:17:57.511 11:54:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:57.511 11:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:57.511 11:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:57.511 11:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:57.511 11:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:57.511 11:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.511 11:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.511 11:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.772 11:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDc3MDM5NDQ3OTRkODk4NTY2OWRlYWZlYTNmYTU4MTE3NWQ3ZGI1YjJlYmIxZGYxrbSTgw==: --dhchap-ctrl-secret DHHC-1:03:YzA4YjA2M2RlYzJlYWU5ODYxYjYyMjYyOTM2NmIzMmI3NmRlYjQ2NjM1MmIzNDVhOTU4YjgyMjFhMWMxM2IxMQq64Kc=: 00:17:57.772 11:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NDc3MDM5NDQ3OTRkODk4NTY2OWRlYWZlYTNmYTU4MTE3NWQ3ZGI1YjJlYmIxZGYxrbSTgw==: --dhchap-ctrl-secret DHHC-1:03:YzA4YjA2M2RlYzJlYWU5ODYxYjYyMjYyOTM2NmIzMmI3NmRlYjQ2NjM1MmIzNDVhOTU4YjgyMjFhMWMxM2IxMQq64Kc=: 00:17:58.394 11:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.394 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.394 11:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:58.394 11:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.394 11:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.394 11:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.394 11:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:58.394 11:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:58.394 11:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:58.655 11:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:17:58.655 11:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:58.655 11:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:17:58.655 11:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:58.655 11:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:58.655 11:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.655 11:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.655 11:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.655 11:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.655 11:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.655 11:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.655 11:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.655 11:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.917 00:17:58.917 11:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:58.917 11:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:58.917 11:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.917 11:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.917 11:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.917 11:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.917 11:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.917 11:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.917 11:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:58.917 { 00:17:58.917 "cntlid": 115, 00:17:58.917 "qid": 0, 00:17:58.917 "state": "enabled", 00:17:58.917 "thread": "nvmf_tgt_poll_group_000", 00:17:58.917 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:58.917 "listen_address": { 00:17:58.917 "trtype": "TCP", 00:17:58.917 "adrfam": "IPv4", 00:17:58.917 "traddr": "10.0.0.2", 00:17:58.917 "trsvcid": "4420" 00:17:58.917 }, 00:17:58.917 "peer_address": { 00:17:58.917 "trtype": "TCP", 00:17:58.917 "adrfam": "IPv4", 
00:17:58.917 "traddr": "10.0.0.1", 00:17:58.917 "trsvcid": "36402" 00:17:58.917 }, 00:17:58.917 "auth": { 00:17:58.917 "state": "completed", 00:17:58.917 "digest": "sha512", 00:17:58.917 "dhgroup": "ffdhe3072" 00:17:58.917 } 00:17:58.917 } 00:17:58.917 ]' 00:17:58.917 11:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:59.178 11:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:59.178 11:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:59.178 11:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:59.178 11:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:59.178 11:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.178 11:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.178 11:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.438 11:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjc4OGFjOGYzYjM2NDhkZjBiZDMxY2YwNDVlYTRjODaFZEms: --dhchap-ctrl-secret DHHC-1:02:ODI2N2U3MmU4YWI3ZGQyOGE3NzA5ZTc0ZjIzYTY0NGMwMWJhMTdmYTUyZjBhOGJk+BkRmQ==: 00:17:59.438 11:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:Mjc4OGFjOGYzYjM2NDhkZjBiZDMxY2YwNDVlYTRjODaFZEms: --dhchap-ctrl-secret DHHC-1:02:ODI2N2U3MmU4YWI3ZGQyOGE3NzA5ZTc0ZjIzYTY0NGMwMWJhMTdmYTUyZjBhOGJk+BkRmQ==: 00:18:00.010 11:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.010 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.010 11:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:00.010 11:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.010 11:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.010 11:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.010 11:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:00.010 11:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:00.010 11:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:00.271 11:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:18:00.271 11:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:00.271 11:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:00.271 11:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:00.271 11:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:00.271 11:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.271 11:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.271 11:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.271 11:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.271 11:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.271 11:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.271 11:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.271 11:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.532 00:18:00.532 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:00.532 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:00.532 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.532 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.532 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.532 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.532 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.532 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.532 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:00.532 { 00:18:00.532 "cntlid": 117, 00:18:00.532 "qid": 0, 00:18:00.532 "state": "enabled", 00:18:00.532 "thread": "nvmf_tgt_poll_group_000", 00:18:00.532 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:00.532 "listen_address": { 00:18:00.532 "trtype": "TCP", 
00:18:00.532 "adrfam": "IPv4", 00:18:00.532 "traddr": "10.0.0.2", 00:18:00.532 "trsvcid": "4420" 00:18:00.532 }, 00:18:00.532 "peer_address": { 00:18:00.532 "trtype": "TCP", 00:18:00.532 "adrfam": "IPv4", 00:18:00.532 "traddr": "10.0.0.1", 00:18:00.532 "trsvcid": "36442" 00:18:00.532 }, 00:18:00.532 "auth": { 00:18:00.532 "state": "completed", 00:18:00.532 "digest": "sha512", 00:18:00.532 "dhgroup": "ffdhe3072" 00:18:00.532 } 00:18:00.532 } 00:18:00.532 ]' 00:18:00.532 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:00.793 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:00.793 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:00.793 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:00.793 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:00.793 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.794 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.794 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.054 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGYzYjhhMzVlYzQ5MmNkN2M3MjA5ODY2NTE3NzEzY2NiMjUzZWJhYmZiZDRkY2Q1YfWpxQ==: --dhchap-ctrl-secret DHHC-1:01:MTMzNjc2NzRmMTQyMTZmNzNjOTc5OGYxNDZiYjBiNGFFHSyW: 00:18:01.054 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NGYzYjhhMzVlYzQ5MmNkN2M3MjA5ODY2NTE3NzEzY2NiMjUzZWJhYmZiZDRkY2Q1YfWpxQ==: --dhchap-ctrl-secret DHHC-1:01:MTMzNjc2NzRmMTQyMTZmNzNjOTc5OGYxNDZiYjBiNGFFHSyW: 00:18:01.626 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.626 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.626 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:01.626 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.626 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.626 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.627 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:01.627 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:01.627 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:01.887 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:18:01.887 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:01.887 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:01.887 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:01.887 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:01.887 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.887 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:01.887 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.887 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.887 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.887 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:01.887 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:01.887 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:02.148 00:18:02.148 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:02.148 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:02.148 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.148 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.148 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.148 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.148 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.148 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.148 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:02.148 { 00:18:02.148 "cntlid": 119, 00:18:02.148 "qid": 0, 00:18:02.148 "state": "enabled", 00:18:02.148 "thread": "nvmf_tgt_poll_group_000", 00:18:02.148 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:02.148 "listen_address": { 00:18:02.148 "trtype": "TCP", 00:18:02.148 "adrfam": "IPv4", 00:18:02.148 "traddr": "10.0.0.2", 00:18:02.148 "trsvcid": "4420" 00:18:02.148 }, 00:18:02.148 "peer_address": { 00:18:02.148 "trtype": "TCP", 00:18:02.148 "adrfam": "IPv4", 00:18:02.148 "traddr": "10.0.0.1", 00:18:02.148 "trsvcid": "36452" 00:18:02.148 }, 00:18:02.148 "auth": { 00:18:02.148 "state": "completed", 00:18:02.148 "digest": "sha512", 00:18:02.148 "dhgroup": "ffdhe3072" 00:18:02.148 } 00:18:02.148 } 00:18:02.148 ]' 00:18:02.148 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:02.408 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:02.408 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:02.408 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:02.408 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:02.408 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.408 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.408 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.669 11:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTc2ZWZiZTBhYmNmMzUzYmQwZTg4YWZmMDE1ZmQ2NTI2M2RkZDBjMzE1ZmMwOTIwYTM3MWYzNTAyZDUyN2UwZp6eYuc=: 00:18:02.669 11:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:OTc2ZWZiZTBhYmNmMzUzYmQwZTg4YWZmMDE1ZmQ2NTI2M2RkZDBjMzE1ZmMwOTIwYTM3MWYzNTAyZDUyN2UwZp6eYuc=: 00:18:03.238 11:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.238 11:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:03.238 11:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.238 11:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.238 11:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.238 11:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:03.238 11:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:03.238 11:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:03.238 11:55:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:03.498 11:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:18:03.498 11:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:03.498 11:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:03.498 11:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:03.498 11:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:03.498 11:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.498 11:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.498 11:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.498 11:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.498 11:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.498 11:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.498 11:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.498 11:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.759 00:18:03.759 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:03.759 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:03.759 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.759 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.759 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.759 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.759 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.759 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.759 11:55:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:03.759 { 00:18:03.759 "cntlid": 121, 00:18:03.759 "qid": 0, 00:18:03.759 "state": "enabled", 00:18:03.759 "thread": "nvmf_tgt_poll_group_000", 00:18:03.759 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:03.759 "listen_address": { 00:18:03.759 "trtype": "TCP", 00:18:03.759 "adrfam": "IPv4", 00:18:03.759 "traddr": "10.0.0.2", 00:18:03.759 "trsvcid": "4420" 00:18:03.759 }, 00:18:03.759 "peer_address": { 00:18:03.759 "trtype": "TCP", 00:18:03.759 "adrfam": "IPv4", 00:18:03.759 "traddr": "10.0.0.1", 00:18:03.759 "trsvcid": "36486" 00:18:03.759 }, 00:18:03.759 "auth": { 00:18:03.759 "state": "completed", 00:18:03.759 "digest": "sha512", 00:18:03.759 "dhgroup": "ffdhe4096" 00:18:03.759 } 00:18:03.759 } 00:18:03.759 ]' 00:18:03.759 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:04.019 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:04.019 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:04.019 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:04.019 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:04.019 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.019 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.019 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.279 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDc3MDM5NDQ3OTRkODk4NTY2OWRlYWZlYTNmYTU4MTE3NWQ3ZGI1YjJlYmIxZGYxrbSTgw==: --dhchap-ctrl-secret DHHC-1:03:YzA4YjA2M2RlYzJlYWU5ODYxYjYyMjYyOTM2NmIzMmI3NmRlYjQ2NjM1MmIzNDVhOTU4YjgyMjFhMWMxM2IxMQq64Kc=: 00:18:04.279 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NDc3MDM5NDQ3OTRkODk4NTY2OWRlYWZlYTNmYTU4MTE3NWQ3ZGI1YjJlYmIxZGYxrbSTgw==: --dhchap-ctrl-secret DHHC-1:03:YzA4YjA2M2RlYzJlYWU5ODYxYjYyMjYyOTM2NmIzMmI3NmRlYjQ2NjM1MmIzNDVhOTU4YjgyMjFhMWMxM2IxMQq64Kc=: 00:18:04.849 11:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.849 11:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:04.849 11:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.849 11:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.849 11:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
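After the SPDK-bdev round trip, each pass repeats the handshake from the kernel initiator with the same secrets in DHHC-1 wire format, then removes the host again before the next combination (auth.sh@80-83 in the trace). A sketch of that leg; the secrets are placeholders rather than the keys used above, and the target RPC socket is assumed to be the default one:
# Sketch of the kernel-initiator leg of a pass (secrets are placeholders, not the real test keys)
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
KEY='DHHC-1:00:<host secret>'         # placeholder for the per-key host secret
CKEY='DHHC-1:03:<controller secret>'  # placeholder; passed only when bidirectional auth is exercised
# Connect through the kernel NVMe/TCP initiator using DH-HMAC-CHAP, then disconnect.
nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" --hostid "$HOSTID" -l 0 \
        --dhchap-secret "$KEY" --dhchap-ctrl-secret "$CKEY"
nvme disconnect -n "$SUBNQN"
# Drop the host from the subsystem so the next digest/dhgroup/key combination starts clean.
"$RPC" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"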
00:18:04.849 11:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:04.849 11:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:04.849 11:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:05.109 11:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:18:05.109 11:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:05.109 11:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:05.109 11:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:05.109 11:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:05.109 11:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.109 11:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.109 11:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.109 11:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.109 11:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.109 11:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.109 11:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.109 11:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.369 00:18:05.369 11:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:05.369 11:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:05.369 11:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.369 11:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.369 11:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.369 11:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.369 11:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.369 11:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.369 11:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:05.369 { 00:18:05.369 "cntlid": 123, 00:18:05.369 "qid": 0, 00:18:05.369 "state": "enabled", 00:18:05.369 "thread": "nvmf_tgt_poll_group_000", 00:18:05.369 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:05.369 "listen_address": { 00:18:05.369 "trtype": "TCP", 00:18:05.369 "adrfam": "IPv4", 00:18:05.369 "traddr": "10.0.0.2", 00:18:05.369 "trsvcid": "4420" 00:18:05.369 }, 00:18:05.369 "peer_address": { 00:18:05.369 "trtype": "TCP", 00:18:05.369 "adrfam": "IPv4", 00:18:05.369 "traddr": "10.0.0.1", 00:18:05.369 "trsvcid": "36518" 00:18:05.369 }, 00:18:05.369 "auth": { 00:18:05.369 "state": "completed", 00:18:05.369 "digest": "sha512", 00:18:05.369 "dhgroup": "ffdhe4096" 00:18:05.369 } 00:18:05.369 } 00:18:05.369 ]' 00:18:05.369 11:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:05.629 11:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:05.629 11:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:05.629 11:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:05.629 11:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:05.629 11:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.629 11:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.629 11:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.889 11:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjc4OGFjOGYzYjM2NDhkZjBiZDMxY2YwNDVlYTRjODaFZEms: --dhchap-ctrl-secret DHHC-1:02:ODI2N2U3MmU4YWI3ZGQyOGE3NzA5ZTc0ZjIzYTY0NGMwMWJhMTdmYTUyZjBhOGJk+BkRmQ==: 00:18:05.889 11:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:Mjc4OGFjOGYzYjM2NDhkZjBiZDMxY2YwNDVlYTRjODaFZEms: --dhchap-ctrl-secret DHHC-1:02:ODI2N2U3MmU4YWI3ZGQyOGE3NzA5ZTc0ZjIzYTY0NGMwMWJhMTdmYTUyZjBhOGJk+BkRmQ==: 00:18:06.460 11:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.460 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.460 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:06.460 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.460 11:55:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.460 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.460 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:06.460 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:06.460 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:06.720 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:18:06.720 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:06.720 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:06.720 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:06.720 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:06.720 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.720 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.720 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.720 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.720 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.720 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.720 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.720 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.981 00:18:06.981 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:06.981 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.981 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:06.981 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.981 11:55:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.981 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.981 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.981 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.981 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:06.981 { 00:18:06.981 "cntlid": 125, 00:18:06.981 "qid": 0, 00:18:06.981 "state": "enabled", 00:18:06.981 "thread": "nvmf_tgt_poll_group_000", 00:18:06.981 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:06.981 "listen_address": { 00:18:06.981 "trtype": "TCP", 00:18:06.981 "adrfam": "IPv4", 00:18:06.981 "traddr": "10.0.0.2", 00:18:06.981 "trsvcid": "4420" 00:18:06.981 }, 00:18:06.981 "peer_address": { 00:18:06.981 "trtype": "TCP", 00:18:06.981 "adrfam": "IPv4", 00:18:06.981 "traddr": "10.0.0.1", 00:18:06.981 "trsvcid": "36550" 00:18:06.981 }, 00:18:06.981 "auth": { 00:18:06.981 "state": "completed", 00:18:06.981 "digest": "sha512", 00:18:06.981 "dhgroup": "ffdhe4096" 00:18:06.981 } 00:18:06.981 } 00:18:06.981 ]' 00:18:06.981 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:07.242 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:07.242 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:07.242 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:07.242 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:07.242 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.242 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.242 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.505 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGYzYjhhMzVlYzQ5MmNkN2M3MjA5ODY2NTE3NzEzY2NiMjUzZWJhYmZiZDRkY2Q1YfWpxQ==: --dhchap-ctrl-secret DHHC-1:01:MTMzNjc2NzRmMTQyMTZmNzNjOTc5OGYxNDZiYjBiNGFFHSyW: 00:18:07.506 11:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NGYzYjhhMzVlYzQ5MmNkN2M3MjA5ODY2NTE3NzEzY2NiMjUzZWJhYmZiZDRkY2Q1YfWpxQ==: --dhchap-ctrl-secret DHHC-1:01:MTMzNjc2NzRmMTQyMTZmNzNjOTc5OGYxNDZiYjBiNGFFHSyW: 00:18:08.078 11:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.078 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.078 11:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:08.078 11:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.078 11:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.078 11:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.078 11:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:08.078 11:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:08.078 11:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:08.338 11:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:18:08.338 11:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:08.338 11:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:08.338 11:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:08.338 11:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:08.338 11:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.338 11:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:08.338 11:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.338 11:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.338 11:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.338 11:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:08.338 11:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:08.338 11:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:08.598 00:18:08.598 11:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:08.598 11:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:08.598 11:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.859 11:55:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.859 11:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.859 11:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.859 11:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.859 11:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.859 11:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:08.859 { 00:18:08.859 "cntlid": 127, 00:18:08.859 "qid": 0, 00:18:08.859 "state": "enabled", 00:18:08.859 "thread": "nvmf_tgt_poll_group_000", 00:18:08.859 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:08.859 "listen_address": { 00:18:08.859 "trtype": "TCP", 00:18:08.859 "adrfam": "IPv4", 00:18:08.859 "traddr": "10.0.0.2", 00:18:08.859 "trsvcid": "4420" 00:18:08.859 }, 00:18:08.859 "peer_address": { 00:18:08.859 "trtype": "TCP", 00:18:08.859 "adrfam": "IPv4", 00:18:08.859 "traddr": "10.0.0.1", 00:18:08.859 "trsvcid": "52870" 00:18:08.859 }, 00:18:08.859 "auth": { 00:18:08.859 "state": "completed", 00:18:08.859 "digest": "sha512", 00:18:08.859 "dhgroup": "ffdhe4096" 00:18:08.859 } 00:18:08.859 } 00:18:08.859 ]' 00:18:08.859 11:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:08.859 11:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:08.859 11:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:08.859 11:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:08.859 11:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:08.859 11:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.859 11:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.859 11:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.119 11:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTc2ZWZiZTBhYmNmMzUzYmQwZTg4YWZmMDE1ZmQ2NTI2M2RkZDBjMzE1ZmMwOTIwYTM3MWYzNTAyZDUyN2UwZp6eYuc=: 00:18:09.119 11:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:OTc2ZWZiZTBhYmNmMzUzYmQwZTg4YWZmMDE1ZmQ2NTI2M2RkZDBjMzE1ZmMwOTIwYTM3MWYzNTAyZDUyN2UwZp6eYuc=: 00:18:09.690 11:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.690 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.690 11:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:09.690 11:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.690 11:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.690 11:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.690 11:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:09.690 11:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:09.690 11:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:09.690 11:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:09.950 11:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:18:09.950 11:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:09.950 11:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:09.950 11:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:09.950 11:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:09.950 11:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.950 11:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.950 11:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.950 11:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.950 11:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.950 11:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.950 11:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.950 11:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.210 00:18:10.210 11:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:10.210 11:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:10.210 
11:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.470 11:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.470 11:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.470 11:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.470 11:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.470 11:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.470 11:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:10.470 { 00:18:10.470 "cntlid": 129, 00:18:10.470 "qid": 0, 00:18:10.470 "state": "enabled", 00:18:10.470 "thread": "nvmf_tgt_poll_group_000", 00:18:10.470 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:10.470 "listen_address": { 00:18:10.470 "trtype": "TCP", 00:18:10.470 "adrfam": "IPv4", 00:18:10.470 "traddr": "10.0.0.2", 00:18:10.470 "trsvcid": "4420" 00:18:10.470 }, 00:18:10.470 "peer_address": { 00:18:10.470 "trtype": "TCP", 00:18:10.470 "adrfam": "IPv4", 00:18:10.470 "traddr": "10.0.0.1", 00:18:10.470 "trsvcid": "52884" 00:18:10.470 }, 00:18:10.470 "auth": { 00:18:10.470 "state": "completed", 00:18:10.470 "digest": "sha512", 00:18:10.470 "dhgroup": "ffdhe6144" 00:18:10.470 } 00:18:10.470 } 00:18:10.470 ]' 00:18:10.470 11:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:10.470 11:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:10.470 11:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:10.470 11:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:10.470 11:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:10.731 11:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.731 11:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.731 11:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.731 11:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDc3MDM5NDQ3OTRkODk4NTY2OWRlYWZlYTNmYTU4MTE3NWQ3ZGI1YjJlYmIxZGYxrbSTgw==: --dhchap-ctrl-secret DHHC-1:03:YzA4YjA2M2RlYzJlYWU5ODYxYjYyMjYyOTM2NmIzMmI3NmRlYjQ2NjM1MmIzNDVhOTU4YjgyMjFhMWMxM2IxMQq64Kc=: 00:18:10.731 11:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NDc3MDM5NDQ3OTRkODk4NTY2OWRlYWZlYTNmYTU4MTE3NWQ3ZGI1YjJlYmIxZGYxrbSTgw==: --dhchap-ctrl-secret 
DHHC-1:03:YzA4YjA2M2RlYzJlYWU5ODYxYjYyMjYyOTM2NmIzMmI3NmRlYjQ2NjM1MmIzNDVhOTU4YjgyMjFhMWMxM2IxMQq64Kc=: 00:18:11.672 11:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.672 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.672 11:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:11.672 11:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.672 11:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.672 11:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.672 11:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:11.672 11:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:11.672 11:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:11.672 11:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:18:11.672 11:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:11.672 11:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:11.672 11:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:11.672 11:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:11.672 11:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.673 11:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.673 11:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.673 11:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.673 11:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.673 11:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.673 11:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.673 11:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.933 00:18:11.933 11:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:11.933 11:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:11.933 11:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.193 11:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.193 11:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.193 11:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.193 11:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.193 11:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.193 11:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:12.193 { 00:18:12.193 "cntlid": 131, 00:18:12.193 "qid": 0, 00:18:12.193 "state": "enabled", 00:18:12.193 "thread": "nvmf_tgt_poll_group_000", 00:18:12.193 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:12.193 "listen_address": { 00:18:12.193 "trtype": "TCP", 00:18:12.193 "adrfam": "IPv4", 00:18:12.193 "traddr": "10.0.0.2", 00:18:12.193 "trsvcid": "4420" 00:18:12.193 }, 00:18:12.193 "peer_address": { 00:18:12.193 "trtype": "TCP", 00:18:12.193 "adrfam": "IPv4", 00:18:12.193 "traddr": "10.0.0.1", 00:18:12.193 "trsvcid": "52910" 00:18:12.193 }, 00:18:12.193 "auth": { 00:18:12.193 "state": "completed", 00:18:12.193 "digest": "sha512", 00:18:12.193 "dhgroup": "ffdhe6144" 00:18:12.193 } 00:18:12.193 } 00:18:12.193 ]' 00:18:12.193 11:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:12.193 11:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:12.193 11:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:12.193 11:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:12.193 11:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:12.453 11:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.453 11:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.453 11:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.453 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjc4OGFjOGYzYjM2NDhkZjBiZDMxY2YwNDVlYTRjODaFZEms: --dhchap-ctrl-secret DHHC-1:02:ODI2N2U3MmU4YWI3ZGQyOGE3NzA5ZTc0ZjIzYTY0NGMwMWJhMTdmYTUyZjBhOGJk+BkRmQ==: 00:18:12.453 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:Mjc4OGFjOGYzYjM2NDhkZjBiZDMxY2YwNDVlYTRjODaFZEms: --dhchap-ctrl-secret DHHC-1:02:ODI2N2U3MmU4YWI3ZGQyOGE3NzA5ZTc0ZjIzYTY0NGMwMWJhMTdmYTUyZjBhOGJk+BkRmQ==: 00:18:13.395 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.395 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.395 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:13.395 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.395 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.395 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.395 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:13.395 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:13.395 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:13.395 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:18:13.395 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:13.395 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:13.395 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:13.395 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:13.395 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.395 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.395 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.395 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.395 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.395 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.395 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.395 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.656 00:18:13.656 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:13.656 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:13.656 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.916 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.916 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.916 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.916 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.916 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.916 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:13.916 { 00:18:13.916 "cntlid": 133, 00:18:13.916 "qid": 0, 00:18:13.916 "state": "enabled", 00:18:13.916 "thread": "nvmf_tgt_poll_group_000", 00:18:13.916 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:13.916 "listen_address": { 00:18:13.916 "trtype": "TCP", 00:18:13.916 "adrfam": "IPv4", 00:18:13.916 "traddr": "10.0.0.2", 00:18:13.916 "trsvcid": "4420" 00:18:13.916 }, 00:18:13.916 "peer_address": { 00:18:13.916 "trtype": "TCP", 00:18:13.916 "adrfam": "IPv4", 00:18:13.916 "traddr": "10.0.0.1", 00:18:13.916 "trsvcid": "52934" 00:18:13.916 }, 00:18:13.916 "auth": { 00:18:13.916 "state": "completed", 00:18:13.916 "digest": "sha512", 00:18:13.916 "dhgroup": "ffdhe6144" 00:18:13.916 } 00:18:13.916 } 00:18:13.916 ]' 00:18:13.916 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:13.916 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:13.916 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:13.916 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:13.916 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:14.177 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.177 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.177 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.177 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGYzYjhhMzVlYzQ5MmNkN2M3MjA5ODY2NTE3NzEzY2NiMjUzZWJhYmZiZDRkY2Q1YfWpxQ==: --dhchap-ctrl-secret 
DHHC-1:01:MTMzNjc2NzRmMTQyMTZmNzNjOTc5OGYxNDZiYjBiNGFFHSyW: 00:18:14.177 11:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NGYzYjhhMzVlYzQ5MmNkN2M3MjA5ODY2NTE3NzEzY2NiMjUzZWJhYmZiZDRkY2Q1YfWpxQ==: --dhchap-ctrl-secret DHHC-1:01:MTMzNjc2NzRmMTQyMTZmNzNjOTc5OGYxNDZiYjBiNGFFHSyW: 00:18:15.118 11:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.118 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.118 11:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:15.118 11:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.118 11:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.118 11:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.118 11:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:15.118 11:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:15.118 11:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:15.118 11:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:18:15.118 11:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:15.118 11:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:15.118 11:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:15.118 11:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:15.118 11:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.118 11:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:15.118 11:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.118 11:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.118 11:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.118 11:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:15.118 11:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:18:15.118 11:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:15.379 00:18:15.379 11:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:15.379 11:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.379 11:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:15.640 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.640 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.640 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.640 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.640 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.640 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:15.640 { 00:18:15.640 "cntlid": 135, 00:18:15.640 "qid": 0, 00:18:15.640 "state": "enabled", 00:18:15.640 "thread": "nvmf_tgt_poll_group_000", 00:18:15.640 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:15.640 "listen_address": { 00:18:15.640 "trtype": "TCP", 00:18:15.640 "adrfam": "IPv4", 00:18:15.640 "traddr": "10.0.0.2", 00:18:15.640 "trsvcid": "4420" 00:18:15.640 }, 00:18:15.640 "peer_address": { 00:18:15.640 "trtype": "TCP", 00:18:15.640 "adrfam": "IPv4", 00:18:15.640 "traddr": "10.0.0.1", 00:18:15.640 "trsvcid": "52960" 00:18:15.640 }, 00:18:15.640 "auth": { 00:18:15.640 "state": "completed", 00:18:15.640 "digest": "sha512", 00:18:15.640 "dhgroup": "ffdhe6144" 00:18:15.640 } 00:18:15.640 } 00:18:15.640 ]' 00:18:15.640 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:15.640 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:15.640 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:15.640 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:15.640 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:15.640 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.640 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.640 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.901 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:OTc2ZWZiZTBhYmNmMzUzYmQwZTg4YWZmMDE1ZmQ2NTI2M2RkZDBjMzE1ZmMwOTIwYTM3MWYzNTAyZDUyN2UwZp6eYuc=: 00:18:15.901 11:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:OTc2ZWZiZTBhYmNmMzUzYmQwZTg4YWZmMDE1ZmQ2NTI2M2RkZDBjMzE1ZmMwOTIwYTM3MWYzNTAyZDUyN2UwZp6eYuc=: 00:18:16.473 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.473 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.473 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:16.473 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.473 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.473 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.473 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:16.473 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:16.473 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:16.473 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:16.734 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:18:16.734 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:16.734 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:16.734 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:16.734 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:16.734 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.734 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.734 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.734 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.734 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.734 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.734 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.734 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.305 00:18:17.305 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:17.305 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:17.305 11:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.567 11:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.567 11:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.567 11:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.567 11:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.567 11:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.567 11:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:17.567 { 00:18:17.567 "cntlid": 137, 00:18:17.567 "qid": 0, 00:18:17.567 "state": "enabled", 00:18:17.567 "thread": "nvmf_tgt_poll_group_000", 00:18:17.567 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:17.567 "listen_address": { 00:18:17.567 "trtype": "TCP", 00:18:17.567 "adrfam": "IPv4", 00:18:17.567 "traddr": "10.0.0.2", 00:18:17.567 "trsvcid": "4420" 00:18:17.567 }, 00:18:17.567 "peer_address": { 00:18:17.567 "trtype": "TCP", 00:18:17.567 "adrfam": "IPv4", 00:18:17.567 "traddr": "10.0.0.1", 00:18:17.567 "trsvcid": "52980" 00:18:17.567 }, 00:18:17.567 "auth": { 00:18:17.567 "state": "completed", 00:18:17.567 "digest": "sha512", 00:18:17.567 "dhgroup": "ffdhe8192" 00:18:17.567 } 00:18:17.567 } 00:18:17.567 ]' 00:18:17.567 11:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:17.567 11:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:17.567 11:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:17.567 11:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:17.567 11:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:17.567 11:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.567 11:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.567 11:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.828 11:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDc3MDM5NDQ3OTRkODk4NTY2OWRlYWZlYTNmYTU4MTE3NWQ3ZGI1YjJlYmIxZGYxrbSTgw==: --dhchap-ctrl-secret DHHC-1:03:YzA4YjA2M2RlYzJlYWU5ODYxYjYyMjYyOTM2NmIzMmI3NmRlYjQ2NjM1MmIzNDVhOTU4YjgyMjFhMWMxM2IxMQq64Kc=: 00:18:17.828 11:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NDc3MDM5NDQ3OTRkODk4NTY2OWRlYWZlYTNmYTU4MTE3NWQ3ZGI1YjJlYmIxZGYxrbSTgw==: --dhchap-ctrl-secret DHHC-1:03:YzA4YjA2M2RlYzJlYWU5ODYxYjYyMjYyOTM2NmIzMmI3NmRlYjQ2NjM1MmIzNDVhOTU4YjgyMjFhMWMxM2IxMQq64Kc=: 00:18:18.400 11:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.400 11:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:18.400 11:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.400 11:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.400 11:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.400 11:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:18.400 11:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:18.400 11:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:18.661 11:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:18:18.661 11:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:18.661 11:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:18.661 11:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:18.661 11:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:18.661 11:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.661 11:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.661 11:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.661 11:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.661 11:55:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.661 11:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.661 11:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.661 11:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.233 00:18:19.233 11:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:19.233 11:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:19.233 11:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.233 11:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.233 11:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.233 11:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.233 11:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.233 11:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.233 11:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:19.233 { 00:18:19.233 "cntlid": 139, 00:18:19.233 "qid": 0, 00:18:19.233 "state": "enabled", 00:18:19.233 "thread": "nvmf_tgt_poll_group_000", 00:18:19.233 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:19.233 "listen_address": { 00:18:19.233 "trtype": "TCP", 00:18:19.233 "adrfam": "IPv4", 00:18:19.233 "traddr": "10.0.0.2", 00:18:19.233 "trsvcid": "4420" 00:18:19.233 }, 00:18:19.233 "peer_address": { 00:18:19.233 "trtype": "TCP", 00:18:19.233 "adrfam": "IPv4", 00:18:19.233 "traddr": "10.0.0.1", 00:18:19.233 "trsvcid": "53878" 00:18:19.233 }, 00:18:19.233 "auth": { 00:18:19.233 "state": "completed", 00:18:19.233 "digest": "sha512", 00:18:19.233 "dhgroup": "ffdhe8192" 00:18:19.233 } 00:18:19.233 } 00:18:19.233 ]' 00:18:19.233 11:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:19.494 11:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:19.494 11:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:19.494 11:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:19.494 11:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:19.494 11:55:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.494 11:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.494 11:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.755 11:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjc4OGFjOGYzYjM2NDhkZjBiZDMxY2YwNDVlYTRjODaFZEms: --dhchap-ctrl-secret DHHC-1:02:ODI2N2U3MmU4YWI3ZGQyOGE3NzA5ZTc0ZjIzYTY0NGMwMWJhMTdmYTUyZjBhOGJk+BkRmQ==: 00:18:19.755 11:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:Mjc4OGFjOGYzYjM2NDhkZjBiZDMxY2YwNDVlYTRjODaFZEms: --dhchap-ctrl-secret DHHC-1:02:ODI2N2U3MmU4YWI3ZGQyOGE3NzA5ZTc0ZjIzYTY0NGMwMWJhMTdmYTUyZjBhOGJk+BkRmQ==: 00:18:20.326 11:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.326 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.326 11:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:20.326 11:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.326 11:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.326 11:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.326 11:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:20.326 11:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:20.326 11:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:20.587 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:18:20.587 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:20.587 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:20.587 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:20.587 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:20.587 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.587 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.587 11:55:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.587 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.587 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.587 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.587 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.587 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.848 00:18:21.109 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:21.109 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:21.109 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.109 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.109 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.109 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.109 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.109 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.109 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:21.109 { 00:18:21.109 "cntlid": 141, 00:18:21.109 "qid": 0, 00:18:21.109 "state": "enabled", 00:18:21.109 "thread": "nvmf_tgt_poll_group_000", 00:18:21.109 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:21.109 "listen_address": { 00:18:21.109 "trtype": "TCP", 00:18:21.109 "adrfam": "IPv4", 00:18:21.109 "traddr": "10.0.0.2", 00:18:21.109 "trsvcid": "4420" 00:18:21.109 }, 00:18:21.109 "peer_address": { 00:18:21.109 "trtype": "TCP", 00:18:21.109 "adrfam": "IPv4", 00:18:21.109 "traddr": "10.0.0.1", 00:18:21.109 "trsvcid": "53922" 00:18:21.109 }, 00:18:21.109 "auth": { 00:18:21.109 "state": "completed", 00:18:21.109 "digest": "sha512", 00:18:21.109 "dhgroup": "ffdhe8192" 00:18:21.109 } 00:18:21.109 } 00:18:21.109 ]' 00:18:21.109 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:21.109 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:21.109 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:21.370 11:55:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:21.370 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:21.370 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.370 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.370 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.370 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGYzYjhhMzVlYzQ5MmNkN2M3MjA5ODY2NTE3NzEzY2NiMjUzZWJhYmZiZDRkY2Q1YfWpxQ==: --dhchap-ctrl-secret DHHC-1:01:MTMzNjc2NzRmMTQyMTZmNzNjOTc5OGYxNDZiYjBiNGFFHSyW: 00:18:21.370 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NGYzYjhhMzVlYzQ5MmNkN2M3MjA5ODY2NTE3NzEzY2NiMjUzZWJhYmZiZDRkY2Q1YfWpxQ==: --dhchap-ctrl-secret DHHC-1:01:MTMzNjc2NzRmMTQyMTZmNzNjOTc5OGYxNDZiYjBiNGFFHSyW: 00:18:22.397 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.397 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.397 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:22.397 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.397 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.397 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.397 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:22.397 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:22.397 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:22.397 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:18:22.397 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:22.397 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:22.397 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:22.397 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:22.397 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.397 11:55:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:22.397 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.397 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.397 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.397 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:22.397 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:22.397 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:22.663 00:18:22.923 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:22.923 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:22.923 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.923 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.923 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.923 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.923 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.923 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.923 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:22.923 { 00:18:22.923 "cntlid": 143, 00:18:22.923 "qid": 0, 00:18:22.923 "state": "enabled", 00:18:22.923 "thread": "nvmf_tgt_poll_group_000", 00:18:22.923 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:22.923 "listen_address": { 00:18:22.923 "trtype": "TCP", 00:18:22.923 "adrfam": "IPv4", 00:18:22.923 "traddr": "10.0.0.2", 00:18:22.923 "trsvcid": "4420" 00:18:22.923 }, 00:18:22.923 "peer_address": { 00:18:22.923 "trtype": "TCP", 00:18:22.923 "adrfam": "IPv4", 00:18:22.923 "traddr": "10.0.0.1", 00:18:22.923 "trsvcid": "53954" 00:18:22.923 }, 00:18:22.923 "auth": { 00:18:22.923 "state": "completed", 00:18:22.923 "digest": "sha512", 00:18:22.923 "dhgroup": "ffdhe8192" 00:18:22.923 } 00:18:22.923 } 00:18:22.923 ]' 00:18:22.923 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:22.923 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:22.923 
11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:23.184 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:23.184 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:23.184 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.184 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.184 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.184 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTc2ZWZiZTBhYmNmMzUzYmQwZTg4YWZmMDE1ZmQ2NTI2M2RkZDBjMzE1ZmMwOTIwYTM3MWYzNTAyZDUyN2UwZp6eYuc=: 00:18:23.184 11:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:OTc2ZWZiZTBhYmNmMzUzYmQwZTg4YWZmMDE1ZmQ2NTI2M2RkZDBjMzE1ZmMwOTIwYTM3MWYzNTAyZDUyN2UwZp6eYuc=: 00:18:24.123 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.123 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.123 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:24.123 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.123 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.123 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.123 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:24.123 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:18:24.123 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:24.123 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:24.123 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:24.123 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:24.123 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:18:24.123 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:24.123 11:55:26 
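After the per-key loop, the harness re-enables every digest and DH group on the host before the next authentication pass (target/auth.sh@129-130 above); that step is a single host-side RPC. Condensed, under the same socket assumptions as the earlier sketch:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # re-enable the full negotiation matrix on the host
    $RPC -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256,sha384,sha512 \
        --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192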
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:24.123 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:24.123 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:24.123 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.123 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.123 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.123 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.123 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.123 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.123 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.123 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.694 00:18:24.694 11:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:24.694 11:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:24.694 11:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.694 11:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.694 11:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.694 11:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.694 11:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.954 11:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.954 11:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:24.954 { 00:18:24.954 "cntlid": 145, 00:18:24.954 "qid": 0, 00:18:24.954 "state": "enabled", 00:18:24.954 "thread": "nvmf_tgt_poll_group_000", 00:18:24.954 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:24.954 "listen_address": { 00:18:24.954 "trtype": "TCP", 00:18:24.954 "adrfam": "IPv4", 00:18:24.954 "traddr": "10.0.0.2", 00:18:24.954 "trsvcid": "4420" 00:18:24.954 }, 00:18:24.954 "peer_address": { 00:18:24.954 
"trtype": "TCP", 00:18:24.954 "adrfam": "IPv4", 00:18:24.954 "traddr": "10.0.0.1", 00:18:24.954 "trsvcid": "53968" 00:18:24.954 }, 00:18:24.954 "auth": { 00:18:24.954 "state": "completed", 00:18:24.954 "digest": "sha512", 00:18:24.954 "dhgroup": "ffdhe8192" 00:18:24.954 } 00:18:24.954 } 00:18:24.954 ]' 00:18:24.954 11:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:24.954 11:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:24.954 11:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:24.954 11:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:24.954 11:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:24.954 11:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.954 11:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.954 11:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.214 11:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDc3MDM5NDQ3OTRkODk4NTY2OWRlYWZlYTNmYTU4MTE3NWQ3ZGI1YjJlYmIxZGYxrbSTgw==: --dhchap-ctrl-secret DHHC-1:03:YzA4YjA2M2RlYzJlYWU5ODYxYjYyMjYyOTM2NmIzMmI3NmRlYjQ2NjM1MmIzNDVhOTU4YjgyMjFhMWMxM2IxMQq64Kc=: 00:18:25.214 11:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:NDc3MDM5NDQ3OTRkODk4NTY2OWRlYWZlYTNmYTU4MTE3NWQ3ZGI1YjJlYmIxZGYxrbSTgw==: --dhchap-ctrl-secret DHHC-1:03:YzA4YjA2M2RlYzJlYWU5ODYxYjYyMjYyOTM2NmIzMmI3NmRlYjQ2NjM1MmIzNDVhOTU4YjgyMjFhMWMxM2IxMQq64Kc=: 00:18:25.784 11:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.784 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.784 11:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:25.784 11:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.784 11:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.784 11:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.784 11:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:18:25.784 11:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.784 11:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.784 11:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.784 11:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:18:25.784 11:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:25.784 11:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:18:25.784 11:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:25.784 11:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:25.784 11:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:25.784 11:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:25.784 11:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:18:25.784 11:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:25.784 11:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:26.354 request: 00:18:26.354 { 00:18:26.354 "name": "nvme0", 00:18:26.354 "trtype": "tcp", 00:18:26.354 "traddr": "10.0.0.2", 00:18:26.354 "adrfam": "ipv4", 00:18:26.354 "trsvcid": "4420", 00:18:26.354 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:26.354 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:26.354 "prchk_reftag": false, 00:18:26.354 "prchk_guard": false, 00:18:26.354 "hdgst": false, 00:18:26.354 "ddgst": false, 00:18:26.354 "dhchap_key": "key2", 00:18:26.354 "allow_unrecognized_csi": false, 00:18:26.354 "method": "bdev_nvme_attach_controller", 00:18:26.354 "req_id": 1 00:18:26.354 } 00:18:26.354 Got JSON-RPC error response 00:18:26.354 response: 00:18:26.354 { 00:18:26.354 "code": -5, 00:18:26.354 "message": "Input/output error" 00:18:26.354 } 00:18:26.354 11:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:26.354 11:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:26.354 11:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:26.354 11:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:26.354 11:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:26.354 11:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.354 11:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.354 11:55:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.354 11:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.354 11:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.354 11:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.354 11:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.354 11:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:26.354 11:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:26.354 11:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:26.354 11:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:26.354 11:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:26.354 11:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:26.354 11:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:26.354 11:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:26.354 11:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:26.354 11:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:26.615 request: 00:18:26.615 { 00:18:26.615 "name": "nvme0", 00:18:26.615 "trtype": "tcp", 00:18:26.615 "traddr": "10.0.0.2", 00:18:26.615 "adrfam": "ipv4", 00:18:26.615 "trsvcid": "4420", 00:18:26.615 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:26.615 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:26.615 "prchk_reftag": false, 00:18:26.615 "prchk_guard": false, 00:18:26.615 "hdgst": false, 00:18:26.615 "ddgst": false, 00:18:26.615 "dhchap_key": "key1", 00:18:26.615 "dhchap_ctrlr_key": "ckey2", 00:18:26.615 "allow_unrecognized_csi": false, 00:18:26.615 "method": "bdev_nvme_attach_controller", 00:18:26.615 "req_id": 1 00:18:26.615 } 00:18:26.615 Got JSON-RPC error response 00:18:26.615 response: 00:18:26.615 { 00:18:26.615 "code": -5, 00:18:26.615 "message": "Input/output error" 00:18:26.615 } 00:18:26.615 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:26.615 11:55:29 
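The two bdev_nvme_attach_controller requests above are expected-failure paths: offering key2 when only key1 was granted (target/auth.sh@145), and offering key1 with a mismatched controller key ckey2 (@150), both return JSON-RPC error -5 (Input/output error), which the NOT wrapper counts as a pass. The same check without the harness helpers could look like the sketch below, reusing the exact flags and NQNs from the trace:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
    # this attach must fail: key2 was never granted to this host
    if $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
            -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2; then
        echo "unexpected: authentication succeeded with an ungranted key" >&2
        exit 1
    fi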
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:26.615 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:26.615 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:26.615 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:26.615 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.615 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.615 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.615 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:18:26.875 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.875 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.875 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.875 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.875 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:26.875 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.875 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:26.875 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:26.875 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:26.875 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:26.875 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.875 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.875 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.134 request: 00:18:27.134 { 00:18:27.134 "name": "nvme0", 00:18:27.134 "trtype": "tcp", 00:18:27.134 "traddr": "10.0.0.2", 00:18:27.134 "adrfam": "ipv4", 00:18:27.134 "trsvcid": "4420", 00:18:27.134 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:27.134 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:27.134 "prchk_reftag": false, 00:18:27.134 "prchk_guard": false, 00:18:27.134 "hdgst": false, 00:18:27.134 "ddgst": false, 00:18:27.134 "dhchap_key": "key1", 00:18:27.134 "dhchap_ctrlr_key": "ckey1", 00:18:27.134 "allow_unrecognized_csi": false, 00:18:27.134 "method": "bdev_nvme_attach_controller", 00:18:27.134 "req_id": 1 00:18:27.134 } 00:18:27.134 Got JSON-RPC error response 00:18:27.134 response: 00:18:27.134 { 00:18:27.134 "code": -5, 00:18:27.134 "message": "Input/output error" 00:18:27.134 } 00:18:27.134 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:27.134 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:27.134 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:27.134 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:27.134 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:27.134 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.134 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.134 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.134 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1900051 00:18:27.134 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1900051 ']' 00:18:27.134 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1900051 00:18:27.134 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:27.134 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:27.135 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1900051 00:18:27.394 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:27.394 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:27.394 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1900051' 00:18:27.394 killing process with pid 1900051 00:18:27.394 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1900051 00:18:27.394 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1900051 00:18:27.394 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:27.394 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:27.394 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:27.394 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:27.394 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=1926372 00:18:27.394 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 1926372 00:18:27.394 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:27.394 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1926372 ']' 00:18:27.394 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.394 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:27.395 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:27.395 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:27.395 11:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.336 11:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:28.336 11:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:28.336 11:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:28.336 11:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:28.336 11:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.336 11:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:28.336 11:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:28.336 11:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 1926372 00:18:28.336 11:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1926372 ']' 00:18:28.336 11:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.336 11:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:28.336 11:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:28.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
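At this point the first target process (pid 1900051) has been killed and a second instance is started with the nvmf_auth log flag so the remaining authentication paths are traced in detail. Condensed from the launch command logged at nvmf/common.sh@506 above:

    # restart the target inside the test's network namespace with auth logging enabled
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    # the harness then waits for /var/tmp/spdk.sock to accept RPCs (waitforlisten) before continuing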
00:18:28.336 11:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:28.336 11:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.336 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:28.336 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:28.336 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:18:28.336 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.336 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.597 null0 00:18:28.597 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.597 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:28.597 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.jt4 00:18:28.597 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.597 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.597 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.597 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.D4h ]] 00:18:28.597 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.D4h 00:18:28.597 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.597 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.597 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.597 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:28.597 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.1Z7 00:18:28.597 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.597 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.597 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.597 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.Fst ]] 00:18:28.597 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Fst 00:18:28.597 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.597 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.597 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.597 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:28.597 11:55:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.AW7 00:18:28.597 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.597 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.597 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.597 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.SE8 ]] 00:18:28.597 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.SE8 00:18:28.597 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.597 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.597 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.597 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:28.597 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.7sl 00:18:28.597 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.597 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.597 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.597 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:18:28.597 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:18:28.597 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:28.597 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:28.597 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:28.597 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:28.597 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.597 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:28.597 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.597 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.597 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.597 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:28.597 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
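The restarted target is seeded with the generated key files via keyring_file_add_key (target/auth.sh@174-176 above), after which grants and attaches reference the secrets by key name. Condensed for key3, using the file path logged above; the host-side RPC server is assumed to hold the same key names, registered earlier in the run:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
    # register the generated secret on the target and grant it to the host
    $RPC keyring_file_add_key key3 /tmp/spdk.key-sha512.7sl
    $RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" --dhchap-key key3
    # the host attach then authenticates against the keyring-backed key3
    $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3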
00:18:28.597 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:29.538 nvme0n1 00:18:29.538 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:29.538 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:29.538 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.538 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.538 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.538 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.538 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.538 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.538 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:29.538 { 00:18:29.538 "cntlid": 1, 00:18:29.538 "qid": 0, 00:18:29.538 "state": "enabled", 00:18:29.538 "thread": "nvmf_tgt_poll_group_000", 00:18:29.538 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:29.538 "listen_address": { 00:18:29.538 "trtype": "TCP", 00:18:29.538 "adrfam": "IPv4", 00:18:29.538 "traddr": "10.0.0.2", 00:18:29.538 "trsvcid": "4420" 00:18:29.538 }, 00:18:29.538 "peer_address": { 00:18:29.538 "trtype": "TCP", 00:18:29.538 "adrfam": "IPv4", 00:18:29.538 "traddr": "10.0.0.1", 00:18:29.538 "trsvcid": "32908" 00:18:29.538 }, 00:18:29.538 "auth": { 00:18:29.538 "state": "completed", 00:18:29.538 "digest": "sha512", 00:18:29.538 "dhgroup": "ffdhe8192" 00:18:29.538 } 00:18:29.538 } 00:18:29.538 ]' 00:18:29.538 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:29.538 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:29.538 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:29.798 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:29.798 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:29.798 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.798 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.798 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.798 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:OTc2ZWZiZTBhYmNmMzUzYmQwZTg4YWZmMDE1ZmQ2NTI2M2RkZDBjMzE1ZmMwOTIwYTM3MWYzNTAyZDUyN2UwZp6eYuc=: 00:18:29.798 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:OTc2ZWZiZTBhYmNmMzUzYmQwZTg4YWZmMDE1ZmQ2NTI2M2RkZDBjMzE1ZmMwOTIwYTM3MWYzNTAyZDUyN2UwZp6eYuc=: 00:18:30.737 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.737 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.738 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:30.738 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.738 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.738 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.738 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:30.738 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.738 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.738 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.738 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:30.738 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:30.738 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:30.738 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:30.738 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:30.738 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:30.738 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:30.738 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:30.738 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:30.738 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:30.738 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:30.738 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:30.998 request: 00:18:30.998 { 00:18:30.998 "name": "nvme0", 00:18:30.998 "trtype": "tcp", 00:18:30.998 "traddr": "10.0.0.2", 00:18:30.998 "adrfam": "ipv4", 00:18:30.998 "trsvcid": "4420", 00:18:30.998 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:30.998 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:30.998 "prchk_reftag": false, 00:18:30.998 "prchk_guard": false, 00:18:30.998 "hdgst": false, 00:18:30.998 "ddgst": false, 00:18:30.998 "dhchap_key": "key3", 00:18:30.998 "allow_unrecognized_csi": false, 00:18:30.998 "method": "bdev_nvme_attach_controller", 00:18:30.998 "req_id": 1 00:18:30.998 } 00:18:30.998 Got JSON-RPC error response 00:18:30.998 response: 00:18:30.998 { 00:18:30.998 "code": -5, 00:18:30.998 "message": "Input/output error" 00:18:30.998 } 00:18:30.998 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:30.998 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:30.998 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:30.998 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:30.998 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:18:30.998 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:18:30.998 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:30.998 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:31.259 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:31.259 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:31.259 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:31.259 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:31.259 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:31.259 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:31.259 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:31.259 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:31.259 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:31.259 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:31.520 request: 00:18:31.520 { 00:18:31.520 "name": "nvme0", 00:18:31.520 "trtype": "tcp", 00:18:31.520 "traddr": "10.0.0.2", 00:18:31.520 "adrfam": "ipv4", 00:18:31.520 "trsvcid": "4420", 00:18:31.520 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:31.520 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:31.520 "prchk_reftag": false, 00:18:31.521 "prchk_guard": false, 00:18:31.521 "hdgst": false, 00:18:31.521 "ddgst": false, 00:18:31.521 "dhchap_key": "key3", 00:18:31.521 "allow_unrecognized_csi": false, 00:18:31.521 "method": "bdev_nvme_attach_controller", 00:18:31.521 "req_id": 1 00:18:31.521 } 00:18:31.521 Got JSON-RPC error response 00:18:31.521 response: 00:18:31.521 { 00:18:31.521 "code": -5, 00:18:31.521 "message": "Input/output error" 00:18:31.521 } 00:18:31.521 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:31.521 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:31.521 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:31.521 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:31.521 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:31.521 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:18:31.521 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:31.521 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:31.521 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:31.521 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:31.521 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:31.521 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.521 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.521 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.521 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:31.521 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.521 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.521 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.521 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:31.521 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:31.521 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:31.521 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:31.521 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:31.521 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:31.521 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:31.521 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:31.521 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:31.521 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:32.092 request: 00:18:32.092 { 00:18:32.092 "name": "nvme0", 00:18:32.092 "trtype": "tcp", 00:18:32.092 "traddr": "10.0.0.2", 00:18:32.092 "adrfam": "ipv4", 00:18:32.092 "trsvcid": "4420", 00:18:32.092 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:32.092 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:32.092 "prchk_reftag": false, 00:18:32.092 "prchk_guard": false, 00:18:32.092 "hdgst": false, 00:18:32.092 "ddgst": false, 00:18:32.092 "dhchap_key": "key0", 00:18:32.092 "dhchap_ctrlr_key": "key1", 00:18:32.092 "allow_unrecognized_csi": false, 00:18:32.092 "method": "bdev_nvme_attach_controller", 00:18:32.092 "req_id": 1 00:18:32.092 } 00:18:32.092 Got JSON-RPC error response 00:18:32.092 response: 00:18:32.092 { 00:18:32.092 "code": -5, 00:18:32.092 "message": "Input/output error" 00:18:32.092 } 00:18:32.092 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:32.092 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:32.092 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:32.092 11:55:34 
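The failures traced above are likewise intentional: with the host narrowed to sha256 digests only (target/auth.sh@183), and then to ffdhe2048 as the only DH group (@187), the key3 attach that succeeded moments earlier is expected to fail with -5, and after the host is re-added to the subsystem without any DH-HMAC-CHAP key (@209) an attach offering key0 (with key1 as controller key) fails the same way. The narrowing itself is one host-side RPC per step; a sketch with the values from the trace:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # step 1: allow only sha256 on the host, then expect the key3 attach to fail with -5
    $RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256
    # step 2: allow only ffdhe2048 as the DH group, expect failure again
    $RPC -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512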
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:32.092 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:18:32.092 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:32.092 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:32.092 nvme0n1 00:18:32.092 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:18:32.092 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:18:32.092 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.352 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.352 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.352 11:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.613 11:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:18:32.613 11:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.613 11:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.613 11:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.613 11:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:32.613 11:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:32.613 11:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:33.184 nvme0n1 00:18:33.184 11:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:18:33.184 11:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:18:33.184 11:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.444 11:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.444 11:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:33.444 11:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.444 11:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.444 11:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.444 11:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:18:33.444 11:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:18:33.444 11:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.705 11:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.705 11:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NGYzYjhhMzVlYzQ5MmNkN2M3MjA5ODY2NTE3NzEzY2NiMjUzZWJhYmZiZDRkY2Q1YfWpxQ==: --dhchap-ctrl-secret DHHC-1:03:OTc2ZWZiZTBhYmNmMzUzYmQwZTg4YWZmMDE1ZmQ2NTI2M2RkZDBjMzE1ZmMwOTIwYTM3MWYzNTAyZDUyN2UwZp6eYuc=: 00:18:33.705 11:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NGYzYjhhMzVlYzQ5MmNkN2M3MjA5ODY2NTE3NzEzY2NiMjUzZWJhYmZiZDRkY2Q1YfWpxQ==: --dhchap-ctrl-secret DHHC-1:03:OTc2ZWZiZTBhYmNmMzUzYmQwZTg4YWZmMDE1ZmQ2NTI2M2RkZDBjMzE1ZmMwOTIwYTM3MWYzNTAyZDUyN2UwZp6eYuc=: 00:18:34.275 11:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:18:34.275 11:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:18:34.275 11:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:18:34.275 11:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:18:34.275 11:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:18:34.275 11:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:18:34.275 11:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:18:34.275 11:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.275 11:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.536 11:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:18:34.536 11:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:34.536 11:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:18:34.536 11:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:34.536 11:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:34.536 11:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:34.536 11:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:34.536 11:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:34.536 11:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:34.536 11:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:35.107 request: 00:18:35.107 { 00:18:35.107 "name": "nvme0", 00:18:35.107 "trtype": "tcp", 00:18:35.107 "traddr": "10.0.0.2", 00:18:35.107 "adrfam": "ipv4", 00:18:35.107 "trsvcid": "4420", 00:18:35.107 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:35.107 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:35.107 "prchk_reftag": false, 00:18:35.107 "prchk_guard": false, 00:18:35.107 "hdgst": false, 00:18:35.107 "ddgst": false, 00:18:35.107 "dhchap_key": "key1", 00:18:35.107 "allow_unrecognized_csi": false, 00:18:35.107 "method": "bdev_nvme_attach_controller", 00:18:35.107 "req_id": 1 00:18:35.107 } 00:18:35.107 Got JSON-RPC error response 00:18:35.107 response: 00:18:35.107 { 00:18:35.107 "code": -5, 00:18:35.107 "message": "Input/output error" 00:18:35.107 } 00:18:35.107 11:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:35.107 11:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:35.107 11:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:35.107 11:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:35.107 11:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:35.107 11:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:35.107 11:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:35.686 nvme0n1 00:18:35.686 11:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:18:35.686 11:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:18:35.686 11:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.950 11:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.950 11:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.950 11:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.950 11:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:35.950 11:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.950 11:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.950 11:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.950 11:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:18:35.950 11:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:35.950 11:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:36.210 nvme0n1 00:18:36.210 11:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:18:36.210 11:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:18:36.210 11:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.470 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.471 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.471 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.731 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:36.731 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.731 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.731 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.731 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:Mjc4OGFjOGYzYjM2NDhkZjBiZDMxY2YwNDVlYTRjODaFZEms: '' 2s 00:18:36.731 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:36.731 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:36.731 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:Mjc4OGFjOGYzYjM2NDhkZjBiZDMxY2YwNDVlYTRjODaFZEms: 00:18:36.731 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:18:36.731 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:36.731 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:36.731 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:Mjc4OGFjOGYzYjM2NDhkZjBiZDMxY2YwNDVlYTRjODaFZEms: ]] 00:18:36.731 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:Mjc4OGFjOGYzYjM2NDhkZjBiZDMxY2YwNDVlYTRjODaFZEms: 00:18:36.731 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:18:36.731 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:36.731 11:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:38.644 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:18:38.644 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:18:38.644 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:18:38.644 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:18:38.644 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:18:38.644 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:18:38.644 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:18:38.644 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key key2 00:18:38.644 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.644 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.644 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.644 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:NGYzYjhhMzVlYzQ5MmNkN2M3MjA5ODY2NTE3NzEzY2NiMjUzZWJhYmZiZDRkY2Q1YfWpxQ==: 2s 00:18:38.644 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:38.644 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:38.644 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:18:38.644 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NGYzYjhhMzVlYzQ5MmNkN2M3MjA5ODY2NTE3NzEzY2NiMjUzZWJhYmZiZDRkY2Q1YfWpxQ==: 00:18:38.644 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:38.644 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:38.644 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:18:38.644 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NGYzYjhhMzVlYzQ5MmNkN2M3MjA5ODY2NTE3NzEzY2NiMjUzZWJhYmZiZDRkY2Q1YfWpxQ==: ]] 00:18:38.644 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NGYzYjhhMzVlYzQ5MmNkN2M3MjA5ODY2NTE3NzEzY2NiMjUzZWJhYmZiZDRkY2Q1YfWpxQ==: 00:18:38.644 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:38.644 11:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:41.182 11:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:18:41.182 11:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:18:41.182 11:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:18:41.182 11:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:18:41.182 11:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:18:41.182 11:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:18:41.182 11:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:18:41.182 11:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.182 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.182 11:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:41.182 11:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.182 11:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.182 11:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.182 11:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:41.182 11:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:41.182 11:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:41.443 nvme0n1 00:18:41.443 11:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:41.443 11:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.443 11:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.443 11:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.443 11:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:41.443 11:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:42.012 11:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:18:42.012 11:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.012 11:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:18:42.272 11:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.272 11:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:42.272 11:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.272 11:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.272 11:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.272 11:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:18:42.272 11:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:18:42.272 11:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:18:42.272 11:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:18:42.272 11:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.531 11:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.531 11:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:42.531 11:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.531 11:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.531 11:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.531 11:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:42.531 11:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:42.531 11:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:42.531 11:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:42.531 11:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:42.531 11:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:42.531 11:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:42.531 11:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:42.531 11:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:43.101 request: 00:18:43.101 { 00:18:43.101 "name": "nvme0", 00:18:43.101 "dhchap_key": "key1", 00:18:43.101 "dhchap_ctrlr_key": "key3", 00:18:43.101 "method": "bdev_nvme_set_keys", 00:18:43.101 "req_id": 1 00:18:43.101 } 00:18:43.101 Got JSON-RPC error response 00:18:43.101 response: 00:18:43.101 { 00:18:43.101 "code": -13, 00:18:43.101 "message": "Permission denied" 00:18:43.101 } 00:18:43.101 11:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:43.101 11:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:43.101 11:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:43.101 11:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:43.101 11:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:43.101 11:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:43.101 11:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.101 11:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:18:43.101 11:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:18:44.508 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:44.508 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:44.508 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.508 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:18:44.508 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:44.508 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.508 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.508 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.508 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:44.508 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:44.508 11:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:45.079 nvme0n1 00:18:45.079 11:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:45.079 11:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.079 11:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.079 11:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.079 11:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:45.079 11:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:45.079 11:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:45.079 11:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 
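The trace up to this point exercises DH-HMAC-CHAP key rotation through the SPDK host RPC socket: a controller is attached with one key pair, the set of keys the subsystem will accept is changed with nvmf_subsystem_set_keys, a live controller is re-keyed with bdev_nvme_set_keys, and a pair the subsystem no longer allows is expected to fail. Below is a minimal standalone sketch of that sequence, not the test script itself; it assumes the same addresses, NQNs and socket paths shown in this run, and that key0..key3 were already registered with the host and target keyrings earlier in the test (that part is not visible here).

# Minimal sketch of the rotation flow traced above.  Assumptions: target at
# 10.0.0.2:4420, host RPC socket at /var/tmp/host.sock, target app on its
# default RPC socket, keys key0..key3 registered on both sides beforehand.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

# 1. Attach a controller that authenticates with key0.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0

# 2. Confirm the controller came up, then detach it.
[[ $($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

# 3. Restrict the subsystem to a new key pair on the target side
#    (the trace drives this through rpc_cmd against the target app).
$rpc nvmf_subsystem_set_keys "$subnqn" "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key key3

# 4. Re-key an already attached controller in place on the host side.
$rpc -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3

# 5. A pair the subsystem no longer allows should be rejected; the trace checks
#    for the JSON-RPC "Permission denied" response (code -13) seen above.
if $rpc -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3; then
    echo "unexpected success" >&2
fi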
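The same keys are also pushed through the kernel initiator: the trace calls nvme connect with explicit --dhchap-secret/--dhchap-ctrl-secret values, then polls lsblk until the namespace block device appears (the waitforblk helper), and finally disconnects. A rough, self-contained version of that connect-and-wait pattern follows; the DHHC-1 secrets are placeholders standing in for the generated keys abbreviated in the trace.

# Kernel-initiator path, condensed from the commands in the trace.  The two
# secrets below are placeholders, not real keys.
host_secret='DHHC-1:02:...'        # placeholder
ctrl_secret='DHHC-1:03:...'        # placeholder
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$hostnqn" --hostid "${hostnqn##*:}" -l 0 \
    --dhchap-secret "$host_secret" --dhchap-ctrl-secret "$ctrl_secret"

# waitforblk-style poll: give the namespace a few seconds to show up in lsblk.
for _ in $(seq 1 10); do
    lsblk -l -o NAME | grep -q -w nvme0n1 && break
    sleep 1
done

nvme disconnect -n nqn.2024-03.io.spdk:cnode0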
00:18:45.079 11:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:45.079 11:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:45.079 11:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:45.079 11:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:45.079 11:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:45.652 request: 00:18:45.652 { 00:18:45.652 "name": "nvme0", 00:18:45.652 "dhchap_key": "key2", 00:18:45.652 "dhchap_ctrlr_key": "key0", 00:18:45.652 "method": "bdev_nvme_set_keys", 00:18:45.652 "req_id": 1 00:18:45.652 } 00:18:45.652 Got JSON-RPC error response 00:18:45.652 response: 00:18:45.652 { 00:18:45.652 "code": -13, 00:18:45.652 "message": "Permission denied" 00:18:45.652 } 00:18:45.652 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:45.652 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:45.652 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:45.652 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:45.652 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:45.652 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:45.652 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.940 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:18:45.940 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:18:46.880 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:46.880 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:46.881 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.881 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:18:46.881 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:18:46.881 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:18:46.881 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1900090 00:18:46.881 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1900090 ']' 00:18:46.881 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1900090 00:18:46.881 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:46.881 
11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:46.881 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1900090 00:18:47.146 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:47.146 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:47.146 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1900090' 00:18:47.146 killing process with pid 1900090 00:18:47.146 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1900090 00:18:47.146 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1900090 00:18:47.146 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:47.146 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:47.146 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:18:47.146 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:47.146 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:18:47.146 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:47.146 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:47.146 rmmod nvme_tcp 00:18:47.146 rmmod nvme_fabrics 00:18:47.146 rmmod nvme_keyring 00:18:47.406 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:47.406 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:18:47.406 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:18:47.406 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@515 -- # '[' -n 1926372 ']' 00:18:47.406 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # killprocess 1926372 00:18:47.406 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1926372 ']' 00:18:47.406 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1926372 00:18:47.406 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:47.406 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:47.406 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1926372 00:18:47.406 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:47.406 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:47.406 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1926372' 00:18:47.406 killing process with pid 1926372 00:18:47.406 11:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1926372 00:18:47.406 11:55:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1926372 00:18:47.406 11:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:47.406 11:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:18:47.406 11:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:18:47.406 11:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:18:47.406 11:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-save 00:18:47.406 11:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:18:47.406 11:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-restore 00:18:47.406 11:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:47.406 11:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:47.406 11:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:47.406 11:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:47.406 11:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.962 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:49.962 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.jt4 /tmp/spdk.key-sha256.1Z7 /tmp/spdk.key-sha384.AW7 /tmp/spdk.key-sha512.7sl /tmp/spdk.key-sha512.D4h /tmp/spdk.key-sha384.Fst /tmp/spdk.key-sha256.SE8 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:49.962 00:18:49.962 real 2m37.027s 00:18:49.962 user 5m52.957s 00:18:49.962 sys 0m24.823s 00:18:49.962 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:49.962 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.962 ************************************ 00:18:49.962 END TEST nvmf_auth_target 00:18:49.962 ************************************ 00:18:49.962 11:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:49.962 11:55:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:49.962 11:55:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:49.962 11:55:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:49.962 11:55:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:49.962 ************************************ 00:18:49.962 START TEST nvmf_bdevio_no_huge 00:18:49.962 ************************************ 00:18:49.962 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:49.962 * Looking for test storage... 
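Before the nvmf_auth_target teardown above kills the host and target apps, the trace waits for the host's controller list to drain by re-checking bdev_nvme_get_controllers piped through jq length, sleeping one second between checks. Written as an explicit loop rather than the script's repeat-and-sleep form, that wait looks roughly like this (same socket and rpc.py path as in this run):

# Poll until the host reports zero attached controllers, as the teardown above
# does before cleanup.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
while (( $($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq length) != 0 )); do
    sleep 1
done

The killprocess helper seen right after applies a small guard before the kill: it reads the process name with ps --no-headers -o comm= for the recorded pid and refuses to signal it if the pid has been reused by sudo, only then issuing kill and wait.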
00:18:49.962 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:49.962 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:49.962 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:18:49.962 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:49.962 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:49.962 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:49.962 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:49.962 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:49.962 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:18:49.962 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:18:49.962 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:18:49.962 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:18:49.962 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:18:49.962 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:18:49.962 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:18:49.962 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:49.962 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:18:49.962 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:18:49.962 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:49.962 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:49.962 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:18:49.962 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:18:49.962 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:49.962 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:18:49.962 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:18:49.962 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:18:49.962 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:18:49.962 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:49.962 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:18:49.962 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:18:49.962 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:49.962 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:49.962 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:18:49.962 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:49.962 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:49.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.962 --rc genhtml_branch_coverage=1 00:18:49.962 --rc genhtml_function_coverage=1 00:18:49.962 --rc genhtml_legend=1 00:18:49.962 --rc geninfo_all_blocks=1 00:18:49.962 --rc geninfo_unexecuted_blocks=1 00:18:49.962 00:18:49.962 ' 00:18:49.962 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:49.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.963 --rc genhtml_branch_coverage=1 00:18:49.963 --rc genhtml_function_coverage=1 00:18:49.963 --rc genhtml_legend=1 00:18:49.963 --rc geninfo_all_blocks=1 00:18:49.963 --rc geninfo_unexecuted_blocks=1 00:18:49.963 00:18:49.963 ' 00:18:49.963 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:49.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.963 --rc genhtml_branch_coverage=1 00:18:49.963 --rc genhtml_function_coverage=1 00:18:49.963 --rc genhtml_legend=1 00:18:49.963 --rc geninfo_all_blocks=1 00:18:49.963 --rc geninfo_unexecuted_blocks=1 00:18:49.963 00:18:49.963 ' 00:18:49.963 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:49.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.963 --rc genhtml_branch_coverage=1 00:18:49.963 --rc genhtml_function_coverage=1 00:18:49.963 --rc genhtml_legend=1 00:18:49.963 --rc geninfo_all_blocks=1 00:18:49.963 --rc geninfo_unexecuted_blocks=1 00:18:49.963 00:18:49.963 ' 00:18:49.963 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:49.963 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:49.963 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:49.963 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:49.963 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:49.963 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:49.963 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:49.963 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:49.963 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:49.963 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:49.963 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:49.963 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:49.963 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:49.963 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:49.963 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:49.963 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:49.963 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:49.963 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:49.963 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:49.963 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:18:49.963 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:49.963 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:49.963 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:49.963 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.963 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.963 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.963 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:49.963 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.963 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:18:49.963 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:49.963 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:49.963 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:49.963 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:49.963 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:49.963 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:49.963 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:49.963 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:49.963 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:49.963 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:49.963 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:49.963 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:49.963 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:49.963 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:18:49.963 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:49.963 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:49.963 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:49.963 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:49.963 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:49.963 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:49.963 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.963 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:49.963 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:49.963 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:18:49.963 11:55:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:58.117 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:58.117 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:18:58.117 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:58.117 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:58.117 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:58.117 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:58.117 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:58.117 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:18:58.117 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:58.117 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:18:58.117 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:18:58.117 
11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:18:58.117 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:18:58.117 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:18:58.117 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:18:58.117 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:58.117 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:58.117 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:58.117 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:58.117 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:58.117 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:58.117 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:58.117 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:58.117 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:58.117 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:58.117 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:58.117 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:58.117 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:58.117 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:58.117 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:58.117 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:58.117 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:58.117 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:58.117 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:58.117 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:58.117 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:58.117 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:58.117 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:58.117 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:58.117 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:18:58.117 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:58.117 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:58.117 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:58.117 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:58.117 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:58.117 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:58.117 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:58.117 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:58.117 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:58.117 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:58.117 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:58.117 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:58.118 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:58.118 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:58.118 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:58.118 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:58.118 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:58.118 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:58.118 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:58.118 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:58.118 Found net devices under 0000:31:00.0: cvl_0_0 00:18:58.118 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:58.118 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:58.118 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:58.118 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:58.118 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:58.118 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:58.118 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:58.118 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:58.118 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:58.118 Found net devices under 0000:31:00.1: cvl_0_1 00:18:58.118 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:58.118 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:18:58.118 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # is_hw=yes 00:18:58.118 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:18:58.118 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:18:58.118 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:18:58.118 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:58.118 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:58.118 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:58.118 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:58.118 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:58.118 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:58.118 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:58.118 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:58.118 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:58.118 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:58.118 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:58.118 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:58.118 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:58.118 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:58.118 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:58.118 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:58.118 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:58.118 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:58.118 11:55:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:58.118 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:58.118 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:58.118 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:58.118 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:58.118 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:58.118 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.524 ms 00:18:58.118 00:18:58.118 --- 10.0.0.2 ping statistics --- 00:18:58.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:58.118 rtt min/avg/max/mdev = 0.524/0.524/0.524/0.000 ms 00:18:58.118 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:58.118 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:58.118 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:18:58.118 00:18:58.118 --- 10.0.0.1 ping statistics --- 00:18:58.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:58.118 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:18:58.118 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:58.118 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # return 0 00:18:58.118 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:58.118 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:58.118 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:18:58.118 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:18:58.118 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:58.118 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:18:58.118 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:18:58.118 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:58.118 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:58.118 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:58.118 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:58.118 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # nvmfpid=1934596 00:18:58.118 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # waitforlisten 1934596 00:18:58.118 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:58.118 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 1934596 ']' 00:18:58.118 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:58.118 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@836 -- # local max_retries=100 00:18:58.118 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:58.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:58.118 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:58.118 11:56:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:58.118 [2024-10-11 11:56:00.179788] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:18:58.118 [2024-10-11 11:56:00.179858] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:58.118 [2024-10-11 11:56:00.277406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:58.118 [2024-10-11 11:56:00.337089] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:58.118 [2024-10-11 11:56:00.337129] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:58.118 [2024-10-11 11:56:00.337137] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:58.118 [2024-10-11 11:56:00.337144] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:58.118 [2024-10-11 11:56:00.337151] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
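For reference, the nvmf_tcp_init sequence traced above carves the second port of the NIC into a private network namespace so the initiator and target halves of the test can exchange real TCP traffic on a single host. Condensed from the commands logged above (the cvl_0_0/cvl_0_1 interface names, the 10.0.0.x addresses, and the cvl_0_0_ns_spdk namespace name are all specific to this run), the setup amounts to roughly:

  # drop any stale addressing on both ports
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1

  # move the target-side port into its own namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # initiator keeps 10.0.0.1 in the default namespace; target gets 10.0.0.2 inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # open the NVMe/TCP port on the initiator-side interface, then verify reachability both ways
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Every target-side command from here on, including the nvmf_tgt launch above, is prefixed with ip netns exec cvl_0_0_ns_spdk so the target listens from inside that namespace.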
00:18:58.118 [2024-10-11 11:56:00.338663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:18:58.118 [2024-10-11 11:56:00.338822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:18:58.118 [2024-10-11 11:56:00.338980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:58.118 [2024-10-11 11:56:00.338980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:18:58.381 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:58.381 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:18:58.381 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:58.381 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:58.381 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:58.381 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:58.381 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:58.381 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.381 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:58.381 [2024-10-11 11:56:01.057518] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:58.381 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.381 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:58.381 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.381 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:58.381 Malloc0 00:18:58.381 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.381 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:58.381 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.381 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:58.643 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.643 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:58.643 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.643 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:58.643 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.643 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:18:58.643 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.643 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:58.643 [2024-10-11 11:56:01.111555] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:58.643 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.643 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:58.643 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:58.643 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # config=() 00:18:58.643 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # local subsystem config 00:18:58.643 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:18:58.643 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:18:58.643 { 00:18:58.643 "params": { 00:18:58.643 "name": "Nvme$subsystem", 00:18:58.643 "trtype": "$TEST_TRANSPORT", 00:18:58.643 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:58.643 "adrfam": "ipv4", 00:18:58.643 "trsvcid": "$NVMF_PORT", 00:18:58.643 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:58.643 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:58.643 "hdgst": ${hdgst:-false}, 00:18:58.643 "ddgst": ${ddgst:-false} 00:18:58.643 }, 00:18:58.643 "method": "bdev_nvme_attach_controller" 00:18:58.643 } 00:18:58.643 EOF 00:18:58.643 )") 00:18:58.643 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # cat 00:18:58.643 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # jq . 00:18:58.643 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@583 -- # IFS=, 00:18:58.643 11:56:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:18:58.643 "params": { 00:18:58.643 "name": "Nvme1", 00:18:58.643 "trtype": "tcp", 00:18:58.643 "traddr": "10.0.0.2", 00:18:58.643 "adrfam": "ipv4", 00:18:58.643 "trsvcid": "4420", 00:18:58.643 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:58.643 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:58.643 "hdgst": false, 00:18:58.643 "ddgst": false 00:18:58.643 }, 00:18:58.643 "method": "bdev_nvme_attach_controller" 00:18:58.643 }' 00:18:58.643 [2024-10-11 11:56:01.170811] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
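Setting aside the xtrace noise, the target-side bring-up for this bdevio run is five RPCs issued through rpc_cmd, the suite's wrapper around scripts/rpc.py talking to the target on /var/tmp/spdk.sock. Spelled out directly (a sketch; option letters mirror the rpc_cmd calls in the trace), the sequence is:

  # create the TCP transport with the options used by the suite
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  # 64 MiB malloc bdev with 512-byte blocks (the 131072-block Nvme1n1 that bdevio reports below)
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  # subsystem with serial SPDK00000000000001, any host allowed (-a), namespace backed by Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # listen on the namespaced address configured earlier
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420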
00:18:58.643 [2024-10-11 11:56:01.170878] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1934947 ] 00:18:58.643 [2024-10-11 11:56:01.259411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:58.643 [2024-10-11 11:56:01.319945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:58.643 [2024-10-11 11:56:01.320123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:58.643 [2024-10-11 11:56:01.320145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.216 I/O targets: 00:18:59.216 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:59.216 00:18:59.216 00:18:59.216 CUnit - A unit testing framework for C - Version 2.1-3 00:18:59.216 http://cunit.sourceforge.net/ 00:18:59.216 00:18:59.216 00:18:59.216 Suite: bdevio tests on: Nvme1n1 00:18:59.216 Test: blockdev write read block ...passed 00:18:59.216 Test: blockdev write zeroes read block ...passed 00:18:59.216 Test: blockdev write zeroes read no split ...passed 00:18:59.216 Test: blockdev write zeroes read split ...passed 00:18:59.216 Test: blockdev write zeroes read split partial ...passed 00:18:59.216 Test: blockdev reset ...[2024-10-11 11:56:01.834661] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:59.216 [2024-10-11 11:56:01.834764] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbd4f0 (9): Bad file descriptor 00:18:59.216 [2024-10-11 11:56:01.851005] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
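On the initiator side, bdevio never connects by hand: it is fed the JSON fragment printed by gen_nvmf_target_json above, which is simply a bdev_nvme_attach_controller call in config-file form. For illustration, the same attachment against a running SPDK application would look roughly like the following (flag spellings per scripts/rpc.py; hdgst/ddgst are left at their default of false, matching the generated JSON):

  # exposes the remote namespace as bdev Nvme1n1, the device this bdevio suite exercises
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1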
00:18:59.216 passed 00:18:59.216 Test: blockdev write read 8 blocks ...passed 00:18:59.216 Test: blockdev write read size > 128k ...passed 00:18:59.216 Test: blockdev write read invalid size ...passed 00:18:59.478 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:59.478 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:59.478 Test: blockdev write read max offset ...passed 00:18:59.478 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:59.478 Test: blockdev writev readv 8 blocks ...passed 00:18:59.478 Test: blockdev writev readv 30 x 1block ...passed 00:18:59.478 Test: blockdev writev readv block ...passed 00:18:59.478 Test: blockdev writev readv size > 128k ...passed 00:18:59.478 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:59.478 Test: blockdev comparev and writev ...[2024-10-11 11:56:02.066894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:59.478 [2024-10-11 11:56:02.066942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:59.478 [2024-10-11 11:56:02.066959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:59.478 [2024-10-11 11:56:02.066968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:59.478 [2024-10-11 11:56:02.067287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:59.478 [2024-10-11 11:56:02.067302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:59.478 [2024-10-11 11:56:02.067316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:59.478 [2024-10-11 11:56:02.067324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:59.478 [2024-10-11 11:56:02.067594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:59.478 [2024-10-11 11:56:02.067607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:59.478 [2024-10-11 11:56:02.067621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:59.478 [2024-10-11 11:56:02.067630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:59.478 [2024-10-11 11:56:02.067920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:59.478 [2024-10-11 11:56:02.067936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:59.478 [2024-10-11 11:56:02.067950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:59.478 [2024-10-11 11:56:02.067958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:59.478 passed 00:18:59.478 Test: blockdev nvme passthru rw ...passed 00:18:59.478 Test: blockdev nvme passthru vendor specific ...[2024-10-11 11:56:02.152450] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:59.478 [2024-10-11 11:56:02.152472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:59.478 [2024-10-11 11:56:02.152577] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:59.478 [2024-10-11 11:56:02.152588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:59.478 [2024-10-11 11:56:02.152700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:59.478 [2024-10-11 11:56:02.152712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:59.478 [2024-10-11 11:56:02.152827] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:59.478 [2024-10-11 11:56:02.152846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:59.478 passed 00:18:59.478 Test: blockdev nvme admin passthru ...passed 00:18:59.740 Test: blockdev copy ...passed 00:18:59.740 00:18:59.740 Run Summary: Type Total Ran Passed Failed Inactive 00:18:59.740 suites 1 1 n/a 0 0 00:18:59.740 tests 23 23 23 0 0 00:18:59.740 asserts 152 152 152 0 n/a 00:18:59.740 00:18:59.740 Elapsed time = 1.101 seconds 00:19:00.002 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:00.002 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.002 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:00.002 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.002 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:00.002 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:19:00.002 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # nvmfcleanup 00:19:00.002 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:19:00.002 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:00.002 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:19:00.002 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:00.002 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:00.002 rmmod nvme_tcp 00:19:00.002 rmmod nvme_fabrics 00:19:00.002 rmmod nvme_keyring 00:19:00.002 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:00.002 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:19:00.002 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:19:00.002 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@515 -- # '[' -n 1934596 ']' 00:19:00.002 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # killprocess 1934596 00:19:00.002 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 1934596 ']' 00:19:00.002 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 1934596 00:19:00.002 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:19:00.002 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:00.002 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1934596 00:19:00.002 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:19:00.002 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:19:00.002 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1934596' 00:19:00.002 killing process with pid 1934596 00:19:00.002 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 1934596 00:19:00.002 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 1934596 00:19:00.263 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:19:00.263 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:19:00.263 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:19:00.263 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:19:00.524 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-restore 00:19:00.524 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-save 00:19:00.524 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:19:00.524 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:00.524 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:00.524 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:00.524 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:00.524 11:56:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:02.490 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:02.490 00:19:02.490 real 0m12.847s 00:19:02.490 user 0m15.002s 00:19:02.490 sys 0m6.832s 00:19:02.490 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:02.490 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:19:02.490 ************************************ 00:19:02.490 END TEST nvmf_bdevio_no_huge 00:19:02.490 ************************************ 00:19:02.490 11:56:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:02.490 11:56:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:02.490 11:56:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:02.490 11:56:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:02.490 ************************************ 00:19:02.490 START TEST nvmf_tls 00:19:02.490 ************************************ 00:19:02.490 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:02.752 * Looking for test storage... 00:19:02.752 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:02.752 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:02.752 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:19:02.752 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:02.752 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:02.752 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:02.752 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:02.752 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:02.752 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:19:02.752 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:19:02.752 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:19:02.752 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:19:02.752 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:19:02.752 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:19:02.752 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:19:02.752 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:02.752 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:19:02.752 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:19:02.752 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:02.752 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:02.752 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:19:02.752 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:02.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.753 --rc genhtml_branch_coverage=1 00:19:02.753 --rc genhtml_function_coverage=1 00:19:02.753 --rc genhtml_legend=1 00:19:02.753 --rc geninfo_all_blocks=1 00:19:02.753 --rc geninfo_unexecuted_blocks=1 00:19:02.753 00:19:02.753 ' 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:02.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.753 --rc genhtml_branch_coverage=1 00:19:02.753 --rc genhtml_function_coverage=1 00:19:02.753 --rc genhtml_legend=1 00:19:02.753 --rc geninfo_all_blocks=1 00:19:02.753 --rc geninfo_unexecuted_blocks=1 00:19:02.753 00:19:02.753 ' 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:02.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.753 --rc genhtml_branch_coverage=1 00:19:02.753 --rc genhtml_function_coverage=1 00:19:02.753 --rc genhtml_legend=1 00:19:02.753 --rc geninfo_all_blocks=1 00:19:02.753 --rc geninfo_unexecuted_blocks=1 00:19:02.753 00:19:02.753 ' 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:02.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.753 --rc genhtml_branch_coverage=1 00:19:02.753 --rc genhtml_function_coverage=1 00:19:02.753 --rc genhtml_legend=1 00:19:02.753 --rc geninfo_all_blocks=1 00:19:02.753 --rc geninfo_unexecuted_blocks=1 00:19:02.753 00:19:02.753 ' 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:02.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # prepare_net_devs 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # local -g is_hw=no 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # remove_spdk_ns 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:19:02.753 11:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:10.895 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:10.895 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:19:10.895 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:10.895 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:10.895 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:10.895 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:10.895 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:10.895 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:19:10.895 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:10.895 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:19:10.895 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:19:10.895 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:19:10.895 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:19:10.895 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:19:10.895 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:19:10.895 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:10.895 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:10.895 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:10.895 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:10.895 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:10.895 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:19:10.895 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:10.895 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:10.895 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:10.895 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:10.895 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:10.895 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:10.895 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:10.895 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:10.896 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:10.896 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:10.896 Found net devices under 0000:31:00.0: cvl_0_0 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:10.896 Found net devices under 0000:31:00.1: cvl_0_1 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # is_hw=yes 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:10.896 11:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:10.896 11:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:10.896 11:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:10.896 11:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:10.896 11:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:10.896 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:10.896 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.661 ms 00:19:10.896 00:19:10.896 --- 10.0.0.2 ping statistics --- 00:19:10.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:10.896 rtt min/avg/max/mdev = 0.661/0.661/0.661/0.000 ms 00:19:10.896 11:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:10.896 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:10.896 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.333 ms 00:19:10.896 00:19:10.896 --- 10.0.0.1 ping statistics --- 00:19:10.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:10.896 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:19:10.896 11:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:10.896 11:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # return 0 00:19:10.896 11:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:19:10.896 11:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:10.896 11:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:19:10.896 11:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:19:10.896 11:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:10.896 11:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:19:10.896 11:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:19:10.896 11:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:10.896 11:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:10.896 11:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:10.896 11:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:10.896 11:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1939513 00:19:10.896 11:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1939513 00:19:10.896 11:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1939513 ']' 00:19:10.896 11:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:10.896 11:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:10.896 11:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:10.896 11:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:10.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:10.896 11:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:10.896 11:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:10.896 [2024-10-11 11:56:13.202443] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
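The block above is the stock nvmf_tcp_init plumbing: of the two cvl interfaces found under 0000:31:00.0 and 0000:31:00.1, cvl_0_0 is moved into a private network namespace to act as the target side while cvl_0_1 stays in the root namespace as the initiator side, an iptables rule opens the NVMe/TCP port, and a ping in each direction confirms reachability before the target application is started. Condensed into plain commands (interface names and addresses are the ones detected in this particular run, and the preliminary address-flush steps are omitted), the setup is roughly:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # the harness also tags this rule with an SPDK_NVMF comment
ping -c 1 10.0.0.2                                                  # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target namespace -> initiator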
00:19:10.896 [2024-10-11 11:56:13.202508] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:10.896 [2024-10-11 11:56:13.294775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.896 [2024-10-11 11:56:13.346094] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:10.896 [2024-10-11 11:56:13.346140] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:10.896 [2024-10-11 11:56:13.346148] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:10.896 [2024-10-11 11:56:13.346156] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:10.896 [2024-10-11 11:56:13.346162] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:10.896 [2024-10-11 11:56:13.346946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:11.467 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:11.467 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:11.467 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:11.467 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:11.467 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:11.467 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:11.467 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:19:11.467 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:11.728 true 00:19:11.728 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:19:11.728 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:11.988 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:19:11.988 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:19:11.988 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:11.988 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:11.988 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:19:12.249 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:19:12.249 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:19:12.249 11:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:12.509 11:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:12.509 11:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:19:12.509 11:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:19:12.509 11:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:19:12.509 11:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:12.509 11:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:19:12.770 11:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:19:12.770 11:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:19:12.770 11:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:13.030 11:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:13.030 11:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:13.290 11:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:19:13.290 11:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:19:13.290 11:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:13.290 11:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:13.290 11:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:13.549 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:19:13.549 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:19:13.549 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:13.549 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:13.549 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:19:13.549 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:19:13.549 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:19:13.549 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:19:13.549 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:19:13.549 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:13.549 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:13.549 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:13.549 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@728 -- # local prefix key digest 00:19:13.549 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:19:13.549 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=ffeeddccbbaa99887766554433221100 00:19:13.549 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:19:13.549 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:19:13.549 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:13.549 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:13.549 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.CrouAOUbAB 00:19:13.549 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:19:13.549 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.2BfsVvdMmK 00:19:13.549 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:13.549 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:13.549 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.CrouAOUbAB 00:19:13.549 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.2BfsVvdMmK 00:19:13.549 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:13.836 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:14.144 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.CrouAOUbAB 00:19:14.144 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.CrouAOUbAB 00:19:14.144 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:14.144 [2024-10-11 11:56:16.721394] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:14.144 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:14.431 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:14.431 [2024-10-11 11:56:17.042175] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:14.431 [2024-10-11 11:56:17.042368] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:14.431 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:14.691 malloc0 00:19:14.691 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:14.691 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.CrouAOUbAB 00:19:14.951 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:15.212 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.CrouAOUbAB 00:19:25.225 Initializing NVMe Controllers 00:19:25.225 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:25.225 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:25.225 Initialization complete. Launching workers. 00:19:25.225 ======================================================== 00:19:25.225 Latency(us) 00:19:25.225 Device Information : IOPS MiB/s Average min max 00:19:25.225 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18516.39 72.33 3456.64 1057.58 4493.65 00:19:25.225 ======================================================== 00:19:25.225 Total : 18516.39 72.33 3456.64 1057.58 4493.65 00:19:25.225 00:19:25.225 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.CrouAOUbAB 00:19:25.225 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:25.225 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:25.225 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:25.225 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.CrouAOUbAB 00:19:25.225 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:25.225 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1942424 00:19:25.225 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:25.225 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1942424 /var/tmp/bdevperf.sock 00:19:25.225 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:25.225 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1942424 ']' 00:19:25.225 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:25.225 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:25.225 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:25.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:25.226 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:25.226 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:25.226 [2024-10-11 11:56:27.890166] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:19:25.226 [2024-10-11 11:56:27.890223] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1942424 ] 00:19:25.486 [2024-10-11 11:56:27.968001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.486 [2024-10-11 11:56:28.003042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:26.058 11:56:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:26.058 11:56:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:26.058 11:56:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.CrouAOUbAB 00:19:26.319 11:56:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:26.319 [2024-10-11 11:56:28.990717] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:26.580 TLSTESTn1 00:19:26.580 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:26.580 Running I/O for 10 seconds... 
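Before the I/O counters start streaming below, it helps to collapse the target/tls.sh steps traced so far into one place. The following is a condensed sketch of the RPC calls visible in the trace, not a complete reproduction of the script; RPC is a shorthand variable introduced here only for readability, and /tmp/tmp.CrouAOUbAB is this run's temporary PSK file (an NVMeTLSkey-1:01:... interchange string kept at mode 0600).

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # shorthand, defined only for this sketch

# 1. Put the ssl sock implementation in charge and pin it to TLS 1.3 before init completes
$RPC sock_set_default_impl -i ssl
$RPC sock_impl_set_options -i ssl --tls-version 13
$RPC framework_start_init

# 2. TLS-enabled NVMe/TCP target: transport, subsystem, listener with -k, one malloc namespace
$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

# 3. Register the PSK with the target's keyring and tie it to the allowed host NQN
$RPC keyring_file_add_key key0 /tmp/tmp.CrouAOUbAB
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

# 4. Initiator side, over the RPC socket of a bdevperf started with
#    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
$RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.CrouAOUbAB
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -t 20 -s /var/tmp/bdevperf.sock perform_tests

The spdk_nvme_perf run whose results appear above uses the same key but passes it directly via --psk-path instead of going through the keyring.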
00:19:28.905 5984.00 IOPS, 23.38 MiB/s [2024-10-11T09:56:32.179Z] 6055.50 IOPS, 23.65 MiB/s [2024-10-11T09:56:33.560Z] 5825.00 IOPS, 22.75 MiB/s [2024-10-11T09:56:34.501Z] 5818.25 IOPS, 22.73 MiB/s [2024-10-11T09:56:35.441Z] 5955.80 IOPS, 23.26 MiB/s [2024-10-11T09:56:36.381Z] 5818.33 IOPS, 22.73 MiB/s [2024-10-11T09:56:37.321Z] 5877.57 IOPS, 22.96 MiB/s [2024-10-11T09:56:38.264Z] 5822.12 IOPS, 22.74 MiB/s [2024-10-11T09:56:39.204Z] 5883.89 IOPS, 22.98 MiB/s [2024-10-11T09:56:39.465Z] 5822.00 IOPS, 22.74 MiB/s 00:19:36.762 Latency(us) 00:19:36.762 [2024-10-11T09:56:39.465Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:36.762 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:36.762 Verification LBA range: start 0x0 length 0x2000 00:19:36.762 TLSTESTn1 : 10.05 5808.00 22.69 0.00 0.00 21975.97 5406.72 45219.84 00:19:36.762 [2024-10-11T09:56:39.465Z] =================================================================================================================== 00:19:36.762 [2024-10-11T09:56:39.465Z] Total : 5808.00 22.69 0.00 0.00 21975.97 5406.72 45219.84 00:19:36.762 { 00:19:36.762 "results": [ 00:19:36.762 { 00:19:36.762 "job": "TLSTESTn1", 00:19:36.762 "core_mask": "0x4", 00:19:36.762 "workload": "verify", 00:19:36.762 "status": "finished", 00:19:36.762 "verify_range": { 00:19:36.762 "start": 0, 00:19:36.762 "length": 8192 00:19:36.762 }, 00:19:36.762 "queue_depth": 128, 00:19:36.762 "io_size": 4096, 00:19:36.762 "runtime": 10.046136, 00:19:36.762 "iops": 5808.004191860433, 00:19:36.762 "mibps": 22.687516374454816, 00:19:36.762 "io_failed": 0, 00:19:36.762 "io_timeout": 0, 00:19:36.762 "avg_latency_us": 21975.968739288408, 00:19:36.762 "min_latency_us": 5406.72, 00:19:36.762 "max_latency_us": 45219.84 00:19:36.762 } 00:19:36.762 ], 00:19:36.762 "core_count": 1 00:19:36.762 } 00:19:36.762 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:36.762 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1942424 00:19:36.762 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1942424 ']' 00:19:36.762 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1942424 00:19:36.762 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:36.762 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:36.762 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1942424 00:19:36.762 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:36.762 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:36.762 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1942424' 00:19:36.762 killing process with pid 1942424 00:19:36.762 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1942424 00:19:36.762 Received shutdown signal, test time was about 10.000000 seconds 00:19:36.762 00:19:36.762 Latency(us) 00:19:36.762 [2024-10-11T09:56:39.465Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:36.762 [2024-10-11T09:56:39.465Z] 
=================================================================================================================== 00:19:36.762 [2024-10-11T09:56:39.465Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:36.762 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1942424 00:19:36.762 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2BfsVvdMmK 00:19:36.762 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:36.762 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2BfsVvdMmK 00:19:36.763 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:36.763 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:36.763 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:36.763 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:36.763 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2BfsVvdMmK 00:19:36.763 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:36.763 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:36.763 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:36.763 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.2BfsVvdMmK 00:19:36.763 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:36.763 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1944682 00:19:36.763 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:36.763 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1944682 /var/tmp/bdevperf.sock 00:19:36.763 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:36.763 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1944682 ']' 00:19:36.763 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:36.763 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:36.763 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:36.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:36.763 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:36.763 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:37.023 [2024-10-11 11:56:39.482076] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:19:37.023 [2024-10-11 11:56:39.482130] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1944682 ] 00:19:37.023 [2024-10-11 11:56:39.560130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.023 [2024-10-11 11:56:39.588778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:37.593 11:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:37.593 11:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:37.593 11:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2BfsVvdMmK 00:19:37.853 11:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:38.113 [2024-10-11 11:56:40.615455] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:38.113 [2024-10-11 11:56:40.625855] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:38.113 [2024-10-11 11:56:40.626637] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x244ee60 (107): Transport endpoint is not connected 00:19:38.113 [2024-10-11 11:56:40.627633] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x244ee60 (9): Bad file descriptor 00:19:38.113 [2024-10-11 11:56:40.628635] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:38.113 [2024-10-11 11:56:40.628643] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:38.113 [2024-10-11 11:56:40.628650] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:38.113 [2024-10-11 11:56:40.628658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
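The failure trace above, together with the JSON-RPC error dump that follows it, is the first of four deliberate negative checks in this stretch of tls.sh; the remaining three produce traces of the same shape further below. Stripped of the bdevperf scaffolding they amount to the following, where NOT is the autotest helper that inverts the exit status of the wrapped command and the /tmp paths are this run's temporary key files:

NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2BfsVvdMmK   # a key the target never registered
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.CrouAOUbAB   # a host NQN the target does not know
NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.CrouAOUbAB   # a subsystem NQN that does not exist
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''                    # an empty key path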
00:19:38.113 request: 00:19:38.113 { 00:19:38.113 "name": "TLSTEST", 00:19:38.113 "trtype": "tcp", 00:19:38.113 "traddr": "10.0.0.2", 00:19:38.113 "adrfam": "ipv4", 00:19:38.113 "trsvcid": "4420", 00:19:38.113 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:38.113 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:38.113 "prchk_reftag": false, 00:19:38.113 "prchk_guard": false, 00:19:38.113 "hdgst": false, 00:19:38.113 "ddgst": false, 00:19:38.113 "psk": "key0", 00:19:38.113 "allow_unrecognized_csi": false, 00:19:38.113 "method": "bdev_nvme_attach_controller", 00:19:38.113 "req_id": 1 00:19:38.113 } 00:19:38.113 Got JSON-RPC error response 00:19:38.113 response: 00:19:38.113 { 00:19:38.113 "code": -5, 00:19:38.113 "message": "Input/output error" 00:19:38.113 } 00:19:38.113 11:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1944682 00:19:38.113 11:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1944682 ']' 00:19:38.113 11:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1944682 00:19:38.113 11:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:38.113 11:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:38.113 11:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1944682 00:19:38.113 11:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:38.113 11:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:38.113 11:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1944682' 00:19:38.113 killing process with pid 1944682 00:19:38.113 11:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1944682 00:19:38.113 Received shutdown signal, test time was about 10.000000 seconds 00:19:38.113 00:19:38.113 Latency(us) 00:19:38.113 [2024-10-11T09:56:40.816Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:38.113 [2024-10-11T09:56:40.816Z] =================================================================================================================== 00:19:38.113 [2024-10-11T09:56:40.816Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:38.113 11:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1944682 00:19:38.113 11:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:38.113 11:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:38.374 11:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:38.374 11:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:38.374 11:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:38.374 11:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.CrouAOUbAB 00:19:38.374 11:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:38.374 11:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.CrouAOUbAB 00:19:38.374 11:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:38.374 11:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:38.374 11:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:38.374 11:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:38.374 11:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.CrouAOUbAB 00:19:38.374 11:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:38.374 11:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:38.374 11:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:38.374 11:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.CrouAOUbAB 00:19:38.374 11:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:38.374 11:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1944877 00:19:38.374 11:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:38.374 11:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1944877 /var/tmp/bdevperf.sock 00:19:38.374 11:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:38.374 11:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1944877 ']' 00:19:38.374 11:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:38.374 11:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:38.374 11:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:38.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:38.374 11:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:38.374 11:56:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:38.374 [2024-10-11 11:56:40.872704] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:19:38.374 [2024-10-11 11:56:40.872757] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1944877 ] 00:19:38.374 [2024-10-11 11:56:40.950685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.374 [2024-10-11 11:56:40.979550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:39.316 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:39.316 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:39.316 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.CrouAOUbAB 00:19:39.316 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:19:39.316 [2024-10-11 11:56:41.974317] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:39.316 [2024-10-11 11:56:41.979060] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:39.316 [2024-10-11 11:56:41.979087] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:39.316 [2024-10-11 11:56:41.979106] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:39.316 [2024-10-11 11:56:41.979493] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a77e60 (107): Transport endpoint is not connected 00:19:39.316 [2024-10-11 11:56:41.980488] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a77e60 (9): Bad file descriptor 00:19:39.316 [2024-10-11 11:56:41.981490] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:39.316 [2024-10-11 11:56:41.981497] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:39.316 [2024-10-11 11:56:41.981504] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:39.316 [2024-10-11 11:56:41.981512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
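The 'Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1' errors above make the failure mode explicit: the target resolves the PSK from the host NQN / subsystem NQN pair carried in the TLS PSK identity, and only host1 was ever given a key for cnode1, so the handshake aborts and the attach surfaces the same Input/output error as before (the JSON-RPC dump follows). If host2 were actually meant to connect, it would need its own registration on the target, roughly as sketched below; key1 and /tmp/host2.key are hypothetical names used purely for illustration.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# hypothetical: give host2 its own PSK on the target, mirroring what was done for host1
$RPC keyring_file_add_key key1 /tmp/host2.key        # file would hold an NVMeTLSkey-1:01:... string, kept at mode 0600
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk key1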
00:19:39.316 request: 00:19:39.316 { 00:19:39.316 "name": "TLSTEST", 00:19:39.316 "trtype": "tcp", 00:19:39.316 "traddr": "10.0.0.2", 00:19:39.316 "adrfam": "ipv4", 00:19:39.316 "trsvcid": "4420", 00:19:39.316 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:39.316 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:39.316 "prchk_reftag": false, 00:19:39.316 "prchk_guard": false, 00:19:39.316 "hdgst": false, 00:19:39.316 "ddgst": false, 00:19:39.316 "psk": "key0", 00:19:39.316 "allow_unrecognized_csi": false, 00:19:39.316 "method": "bdev_nvme_attach_controller", 00:19:39.316 "req_id": 1 00:19:39.316 } 00:19:39.316 Got JSON-RPC error response 00:19:39.316 response: 00:19:39.316 { 00:19:39.316 "code": -5, 00:19:39.316 "message": "Input/output error" 00:19:39.316 } 00:19:39.316 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1944877 00:19:39.316 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1944877 ']' 00:19:39.316 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1944877 00:19:39.316 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:39.316 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:39.316 11:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1944877 00:19:39.577 11:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:39.577 11:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:39.577 11:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1944877' 00:19:39.577 killing process with pid 1944877 00:19:39.577 11:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1944877 00:19:39.577 Received shutdown signal, test time was about 10.000000 seconds 00:19:39.577 00:19:39.577 Latency(us) 00:19:39.577 [2024-10-11T09:56:42.280Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:39.577 [2024-10-11T09:56:42.280Z] =================================================================================================================== 00:19:39.577 [2024-10-11T09:56:42.280Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:39.577 11:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1944877 00:19:39.577 11:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:39.577 11:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:39.577 11:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:39.577 11:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:39.577 11:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:39.577 11:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.CrouAOUbAB 00:19:39.577 11:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:39.577 11:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.CrouAOUbAB 00:19:39.577 11:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:39.577 11:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:39.577 11:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:39.577 11:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:39.577 11:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.CrouAOUbAB 00:19:39.577 11:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:39.577 11:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:39.577 11:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:39.577 11:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.CrouAOUbAB 00:19:39.577 11:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:39.577 11:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1945133 00:19:39.577 11:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:39.577 11:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1945133 /var/tmp/bdevperf.sock 00:19:39.577 11:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:39.577 11:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1945133 ']' 00:19:39.577 11:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:39.577 11:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:39.577 11:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:39.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:39.577 11:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:39.577 11:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:39.577 [2024-10-11 11:56:42.209455] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:19:39.577 [2024-10-11 11:56:42.209510] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1945133 ] 00:19:39.838 [2024-10-11 11:56:42.285484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.838 [2024-10-11 11:56:42.313606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:40.409 11:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:40.409 11:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:40.409 11:56:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.CrouAOUbAB 00:19:40.670 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:40.670 [2024-10-11 11:56:43.292082] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:40.670 [2024-10-11 11:56:43.302261] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:40.670 [2024-10-11 11:56:43.302278] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:40.670 [2024-10-11 11:56:43.302297] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:40.670 [2024-10-11 11:56:43.303189] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20e60 (107): Transport endpoint is not connected 00:19:40.670 [2024-10-11 11:56:43.304185] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20e60 (9): Bad file descriptor 00:19:40.670 [2024-10-11 11:56:43.305187] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:40.670 [2024-10-11 11:56:43.305196] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:40.670 [2024-10-11 11:56:43.305202] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:19:40.670 [2024-10-11 11:56:43.305210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:19:40.670 request: 00:19:40.670 { 00:19:40.670 "name": "TLSTEST", 00:19:40.670 "trtype": "tcp", 00:19:40.670 "traddr": "10.0.0.2", 00:19:40.670 "adrfam": "ipv4", 00:19:40.670 "trsvcid": "4420", 00:19:40.670 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:40.670 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:40.670 "prchk_reftag": false, 00:19:40.670 "prchk_guard": false, 00:19:40.670 "hdgst": false, 00:19:40.670 "ddgst": false, 00:19:40.670 "psk": "key0", 00:19:40.670 "allow_unrecognized_csi": false, 00:19:40.670 "method": "bdev_nvme_attach_controller", 00:19:40.670 "req_id": 1 00:19:40.670 } 00:19:40.670 Got JSON-RPC error response 00:19:40.670 response: 00:19:40.670 { 00:19:40.670 "code": -5, 00:19:40.670 "message": "Input/output error" 00:19:40.670 } 00:19:40.670 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1945133 00:19:40.670 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1945133 ']' 00:19:40.670 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1945133 00:19:40.670 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:40.670 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:40.670 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1945133 00:19:40.931 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:40.931 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:40.931 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1945133' 00:19:40.931 killing process with pid 1945133 00:19:40.931 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1945133 00:19:40.931 Received shutdown signal, test time was about 10.000000 seconds 00:19:40.931 00:19:40.931 Latency(us) 00:19:40.931 [2024-10-11T09:56:43.634Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.931 [2024-10-11T09:56:43.634Z] =================================================================================================================== 00:19:40.931 [2024-10-11T09:56:43.634Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:40.931 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1945133 00:19:40.931 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:40.931 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:40.931 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:40.931 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:40.931 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:40.931 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:40.931 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:40.931 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:40.931 
11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:40.931 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:40.931 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:40.931 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:40.931 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:40.931 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:40.931 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:40.931 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:40.931 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:40.931 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:40.931 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1945473 00:19:40.931 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:40.931 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1945473 /var/tmp/bdevperf.sock 00:19:40.931 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:40.931 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1945473 ']' 00:19:40.931 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:40.931 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:40.931 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:40.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:40.931 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:40.931 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:40.931 [2024-10-11 11:56:43.531482] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
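The check being prepared here is the last of the four: the key path handed to the initiator is an empty string, so the failure, visible just below, happens one step earlier than in the previous cases. The keyring_file backend rejects the non-absolute path when the key is added, and the subsequent attach then fails because the named key was never created. In isolation the two failing calls look like this:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 ''            # rejected: non-absolute paths are not allowed
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0   # fails: required key not available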
00:19:40.932 [2024-10-11 11:56:43.531542] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1945473 ] 00:19:40.932 [2024-10-11 11:56:43.610753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.192 [2024-10-11 11:56:43.639354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:41.763 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:41.763 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:41.763 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:19:42.024 [2024-10-11 11:56:44.485529] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:19:42.024 [2024-10-11 11:56:44.485553] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:42.024 request: 00:19:42.024 { 00:19:42.024 "name": "key0", 00:19:42.024 "path": "", 00:19:42.024 "method": "keyring_file_add_key", 00:19:42.024 "req_id": 1 00:19:42.024 } 00:19:42.024 Got JSON-RPC error response 00:19:42.024 response: 00:19:42.024 { 00:19:42.024 "code": -1, 00:19:42.024 "message": "Operation not permitted" 00:19:42.024 } 00:19:42.024 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:42.024 [2024-10-11 11:56:44.650033] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:42.024 [2024-10-11 11:56:44.650056] bdev_nvme.c:6391:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:42.024 request: 00:19:42.024 { 00:19:42.024 "name": "TLSTEST", 00:19:42.024 "trtype": "tcp", 00:19:42.024 "traddr": "10.0.0.2", 00:19:42.024 "adrfam": "ipv4", 00:19:42.024 "trsvcid": "4420", 00:19:42.024 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:42.024 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:42.024 "prchk_reftag": false, 00:19:42.024 "prchk_guard": false, 00:19:42.024 "hdgst": false, 00:19:42.024 "ddgst": false, 00:19:42.024 "psk": "key0", 00:19:42.024 "allow_unrecognized_csi": false, 00:19:42.024 "method": "bdev_nvme_attach_controller", 00:19:42.024 "req_id": 1 00:19:42.024 } 00:19:42.024 Got JSON-RPC error response 00:19:42.024 response: 00:19:42.024 { 00:19:42.024 "code": -126, 00:19:42.024 "message": "Required key not available" 00:19:42.024 } 00:19:42.024 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1945473 00:19:42.024 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1945473 ']' 00:19:42.024 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1945473 00:19:42.024 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:42.024 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:42.024 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
1945473 00:19:42.285 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:42.285 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:42.285 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1945473' 00:19:42.285 killing process with pid 1945473 00:19:42.285 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1945473 00:19:42.285 Received shutdown signal, test time was about 10.000000 seconds 00:19:42.285 00:19:42.285 Latency(us) 00:19:42.285 [2024-10-11T09:56:44.988Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:42.285 [2024-10-11T09:56:44.988Z] =================================================================================================================== 00:19:42.285 [2024-10-11T09:56:44.988Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:42.285 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1945473 00:19:42.285 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:42.285 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:42.285 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:42.285 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:42.285 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:42.285 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1939513 00:19:42.285 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1939513 ']' 00:19:42.285 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1939513 00:19:42.285 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:42.285 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:42.285 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1939513 00:19:42.285 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:42.285 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:42.285 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1939513' 00:19:42.285 killing process with pid 1939513 00:19:42.285 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1939513 00:19:42.285 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1939513 00:19:42.545 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:42.545 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:42.545 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:19:42.545 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:19:42.545 11:56:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:42.545 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=2 00:19:42.545 11:56:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:19:42.545 11:56:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:42.545 11:56:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:19:42.545 11:56:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.Wl4L5Jh4qt 00:19:42.545 11:56:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:42.545 11:56:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.Wl4L5Jh4qt 00:19:42.545 11:56:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:19:42.545 11:56:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:42.545 11:56:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:42.545 11:56:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:42.545 11:56:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1945827 00:19:42.545 11:56:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1945827 00:19:42.545 11:56:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:42.545 11:56:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1945827 ']' 00:19:42.545 11:56:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:42.545 11:56:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:42.545 11:56:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:42.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:42.545 11:56:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:42.545 11:56:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:42.545 [2024-10-11 11:56:45.109341] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:19:42.545 [2024-10-11 11:56:45.109400] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:42.545 [2024-10-11 11:56:45.197146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.545 [2024-10-11 11:56:45.229570] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:42.545 [2024-10-11 11:56:45.229605] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
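The failed bdevperf attempt traced above (empty PSK path) is deliberate: keyring_file_add_key rejects the empty string because only absolute paths are allowed, so key0 is never created, and the subsequent bdev_nvme_attach_controller --psk key0 fails with -126 "Required key not available"; the trace then moves on to formatting a real interchange-format PSK. A sketch of the two failing RPC calls against the bdevperf socket, copied from the trace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Rejected: non-absolute (here, empty) paths are not allowed, so no key is added.
    $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 ''
    # Fails with JSON-RPC code -126 (Required key not available): key0 does not exist.
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk key0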
00:19:42.545 [2024-10-11 11:56:45.229611] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:42.545 [2024-10-11 11:56:45.229616] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:42.545 [2024-10-11 11:56:45.229620] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:42.545 [2024-10-11 11:56:45.230112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:43.487 11:56:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:43.487 11:56:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:43.487 11:56:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:43.487 11:56:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:43.487 11:56:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:43.487 11:56:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:43.487 11:56:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.Wl4L5Jh4qt 00:19:43.487 11:56:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Wl4L5Jh4qt 00:19:43.487 11:56:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:43.487 [2024-10-11 11:56:46.095777] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:43.487 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:43.747 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:44.007 [2024-10-11 11:56:46.460665] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:44.007 [2024-10-11 11:56:46.460852] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:44.007 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:44.007 malloc0 00:19:44.007 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:44.267 11:56:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Wl4L5Jh4qt 00:19:44.527 11:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:44.527 11:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Wl4L5Jh4qt 00:19:44.527 11:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:19:44.527 11:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:44.527 11:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:44.527 11:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Wl4L5Jh4qt 00:19:44.528 11:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:44.528 11:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:44.528 11:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1946194 00:19:44.528 11:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:44.528 11:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1946194 /var/tmp/bdevperf.sock 00:19:44.528 11:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1946194 ']' 00:19:44.528 11:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:44.528 11:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:44.528 11:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:44.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:44.528 11:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:44.528 11:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:44.789 [2024-10-11 11:56:47.262614] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
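Before the second bdevperf instance above attaches anything, the trace shows the PSK being written to a mode-0600 temp file and the target being configured by setup_nvmf_tgt. A condensed sketch of those steps (key value, paths and RPC arguments taken from this run, with the target app already listening on its default socket):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    key_long='NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:'
    key_long_path=$(mktemp)                  # /tmp/tmp.Wl4L5Jh4qt in this run
    echo -n "$key_long" > "$key_long_path"
    chmod 0600 "$key_long_path"              # the keyring refuses group/world-accessible key files
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc keyring_file_add_key key0 "$key_long_path"
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The -k on nvmf_subsystem_add_listener is what triggers the "TLS support is considered experimental" notice seen in the trace.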
00:19:44.789 [2024-10-11 11:56:47.262668] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1946194 ] 00:19:44.789 [2024-10-11 11:56:47.340737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.789 [2024-10-11 11:56:47.369211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:44.789 11:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:44.789 11:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:44.789 11:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Wl4L5Jh4qt 00:19:45.050 11:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:45.311 [2024-10-11 11:56:47.782304] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:45.311 TLSTESTn1 00:19:45.311 11:56:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:45.311 Running I/O for 10 seconds... 00:19:47.636 4884.00 IOPS, 19.08 MiB/s [2024-10-11T09:56:51.279Z] 5432.50 IOPS, 21.22 MiB/s [2024-10-11T09:56:52.222Z] 5651.33 IOPS, 22.08 MiB/s [2024-10-11T09:56:53.166Z] 5866.00 IOPS, 22.91 MiB/s [2024-10-11T09:56:54.108Z] 5801.80 IOPS, 22.66 MiB/s [2024-10-11T09:56:55.051Z] 5855.00 IOPS, 22.87 MiB/s [2024-10-11T09:56:55.993Z] 5956.71 IOPS, 23.27 MiB/s [2024-10-11T09:56:57.376Z] 6023.12 IOPS, 23.53 MiB/s [2024-10-11T09:56:58.317Z] 5974.89 IOPS, 23.34 MiB/s [2024-10-11T09:56:58.317Z] 5955.20 IOPS, 23.26 MiB/s 00:19:55.614 Latency(us) 00:19:55.614 [2024-10-11T09:56:58.317Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:55.614 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:55.614 Verification LBA range: start 0x0 length 0x2000 00:19:55.614 TLSTESTn1 : 10.01 5959.78 23.28 0.00 0.00 21447.87 5106.35 31675.73 00:19:55.614 [2024-10-11T09:56:58.317Z] =================================================================================================================== 00:19:55.614 [2024-10-11T09:56:58.317Z] Total : 5959.78 23.28 0.00 0.00 21447.87 5106.35 31675.73 00:19:55.614 { 00:19:55.614 "results": [ 00:19:55.614 { 00:19:55.614 "job": "TLSTESTn1", 00:19:55.614 "core_mask": "0x4", 00:19:55.614 "workload": "verify", 00:19:55.614 "status": "finished", 00:19:55.614 "verify_range": { 00:19:55.614 "start": 0, 00:19:55.614 "length": 8192 00:19:55.614 }, 00:19:55.614 "queue_depth": 128, 00:19:55.614 "io_size": 4096, 00:19:55.614 "runtime": 10.013633, 00:19:55.614 "iops": 5959.775038689754, 00:19:55.614 "mibps": 23.280371244881852, 00:19:55.614 "io_failed": 0, 00:19:55.614 "io_timeout": 0, 00:19:55.614 "avg_latency_us": 21447.874662779202, 00:19:55.614 "min_latency_us": 5106.346666666666, 00:19:55.614 "max_latency_us": 31675.733333333334 00:19:55.614 } 00:19:55.614 ], 00:19:55.614 
"core_count": 1 00:19:55.614 } 00:19:55.614 11:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:55.614 11:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1946194 00:19:55.614 11:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1946194 ']' 00:19:55.614 11:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1946194 00:19:55.614 11:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:55.614 11:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:55.614 11:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1946194 00:19:55.614 11:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:55.614 11:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:55.614 11:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1946194' 00:19:55.614 killing process with pid 1946194 00:19:55.614 11:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1946194 00:19:55.614 Received shutdown signal, test time was about 10.000000 seconds 00:19:55.614 00:19:55.614 Latency(us) 00:19:55.614 [2024-10-11T09:56:58.317Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:55.614 [2024-10-11T09:56:58.317Z] =================================================================================================================== 00:19:55.614 [2024-10-11T09:56:58.317Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:55.615 11:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1946194 00:19:55.615 11:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.Wl4L5Jh4qt 00:19:55.615 11:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Wl4L5Jh4qt 00:19:55.615 11:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:55.615 11:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Wl4L5Jh4qt 00:19:55.615 11:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:55.615 11:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:55.615 11:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:55.615 11:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:55.615 11:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Wl4L5Jh4qt 00:19:55.615 11:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:55.615 11:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:55.615 11:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 
00:19:55.615 11:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Wl4L5Jh4qt 00:19:55.615 11:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:55.615 11:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1948365 00:19:55.615 11:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:55.615 11:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1948365 /var/tmp/bdevperf.sock 00:19:55.615 11:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:55.615 11:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1948365 ']' 00:19:55.615 11:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:55.615 11:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:55.615 11:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:55.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:55.615 11:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:55.615 11:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:55.615 [2024-10-11 11:56:58.246104] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:19:55.615 [2024-10-11 11:56:58.246158] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1948365 ] 00:19:55.876 [2024-10-11 11:56:58.324127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.876 [2024-10-11 11:56:58.353253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:56.447 11:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:56.447 11:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:56.447 11:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Wl4L5Jh4qt 00:19:56.709 [2024-10-11 11:56:59.199615] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Wl4L5Jh4qt': 0100666 00:19:56.709 [2024-10-11 11:56:59.199638] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:56.709 request: 00:19:56.709 { 00:19:56.709 "name": "key0", 00:19:56.709 "path": "/tmp/tmp.Wl4L5Jh4qt", 00:19:56.709 "method": "keyring_file_add_key", 00:19:56.709 "req_id": 1 00:19:56.709 } 00:19:56.709 Got JSON-RPC error response 00:19:56.709 response: 00:19:56.709 { 00:19:56.709 "code": -1, 00:19:56.709 "message": "Operation not permitted" 00:19:56.709 } 00:19:56.709 11:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:56.709 [2024-10-11 11:56:59.384155] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:56.709 [2024-10-11 11:56:59.384179] bdev_nvme.c:6391:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:56.709 request: 00:19:56.709 { 00:19:56.709 "name": "TLSTEST", 00:19:56.709 "trtype": "tcp", 00:19:56.709 "traddr": "10.0.0.2", 00:19:56.709 "adrfam": "ipv4", 00:19:56.709 "trsvcid": "4420", 00:19:56.709 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:56.709 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:56.709 "prchk_reftag": false, 00:19:56.709 "prchk_guard": false, 00:19:56.709 "hdgst": false, 00:19:56.709 "ddgst": false, 00:19:56.709 "psk": "key0", 00:19:56.709 "allow_unrecognized_csi": false, 00:19:56.709 "method": "bdev_nvme_attach_controller", 00:19:56.709 "req_id": 1 00:19:56.709 } 00:19:56.709 Got JSON-RPC error response 00:19:56.709 response: 00:19:56.709 { 00:19:56.709 "code": -126, 00:19:56.709 "message": "Required key not available" 00:19:56.709 } 00:19:56.971 11:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1948365 00:19:56.971 11:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1948365 ']' 00:19:56.971 11:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1948365 00:19:56.971 11:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:56.971 11:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:56.971 11:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1948365 00:19:56.971 11:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:56.971 11:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:56.971 11:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1948365' 00:19:56.971 killing process with pid 1948365 00:19:56.971 11:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1948365 00:19:56.971 Received shutdown signal, test time was about 10.000000 seconds 00:19:56.971 00:19:56.971 Latency(us) 00:19:56.971 [2024-10-11T09:56:59.674Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.971 [2024-10-11T09:56:59.674Z] =================================================================================================================== 00:19:56.971 [2024-10-11T09:56:59.674Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:56.971 11:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1948365 00:19:56.971 11:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:56.971 11:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:56.971 11:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:56.971 11:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:56.971 11:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:56.971 11:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1945827 00:19:56.971 11:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1945827 ']' 00:19:56.971 11:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1945827 00:19:56.971 11:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:56.971 11:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:56.971 11:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1945827 00:19:56.971 11:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:56.971 11:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:56.971 11:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1945827' 00:19:56.971 killing process with pid 1945827 00:19:56.971 11:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1945827 00:19:56.971 11:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1945827 00:19:57.233 11:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:19:57.233 11:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:57.233 11:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:57.233 11:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:57.233 11:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # 
nvmfpid=1948611 00:19:57.233 11:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1948611 00:19:57.233 11:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:57.233 11:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1948611 ']' 00:19:57.233 11:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:57.233 11:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:57.233 11:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:57.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:57.233 11:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:57.233 11:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:57.233 [2024-10-11 11:56:59.803339] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:19:57.233 [2024-10-11 11:56:59.803397] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:57.233 [2024-10-11 11:56:59.886574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.233 [2024-10-11 11:56:59.916454] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:57.233 [2024-10-11 11:56:59.916480] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:57.233 [2024-10-11 11:56:59.916486] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:57.233 [2024-10-11 11:56:59.916491] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:57.233 [2024-10-11 11:56:59.916496] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
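The chmod 0666 sequence above exercises the keyring's permission guard on the initiator side: keyring_file_add_key logs "Invalid permissions for key file ... 0100666" and returns -1, so the controller attach again fails with -126 because key0 was never created. A sketch of that negative check, using the paths from this run:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    chmod 0666 /tmp/tmp.Wl4L5Jh4qt
    # Expected to fail: the keyring refuses key files readable by group/other.
    $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Wl4L5Jh4qt \
        || echo 'key file rejected (0666), as the test expects'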
00:19:57.233 [2024-10-11 11:56:59.916975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:58.176 11:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:58.176 11:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:58.176 11:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:58.176 11:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:58.176 11:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:58.176 11:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:58.176 11:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.Wl4L5Jh4qt 00:19:58.176 11:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:58.176 11:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.Wl4L5Jh4qt 00:19:58.176 11:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:19:58.176 11:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:58.176 11:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:19:58.176 11:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:58.176 11:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.Wl4L5Jh4qt 00:19:58.176 11:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Wl4L5Jh4qt 00:19:58.176 11:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:58.176 [2024-10-11 11:57:00.784923] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:58.176 11:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:58.438 11:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:58.699 [2024-10-11 11:57:01.149818] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:58.699 [2024-10-11 11:57:01.150016] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:58.699 11:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:58.699 malloc0 00:19:58.699 11:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:58.960 11:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Wl4L5Jh4qt 00:19:59.220 [2024-10-11 
11:57:01.688779] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Wl4L5Jh4qt': 0100666 00:19:59.220 [2024-10-11 11:57:01.688800] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:59.220 request: 00:19:59.220 { 00:19:59.220 "name": "key0", 00:19:59.220 "path": "/tmp/tmp.Wl4L5Jh4qt", 00:19:59.220 "method": "keyring_file_add_key", 00:19:59.220 "req_id": 1 00:19:59.220 } 00:19:59.220 Got JSON-RPC error response 00:19:59.220 response: 00:19:59.220 { 00:19:59.220 "code": -1, 00:19:59.220 "message": "Operation not permitted" 00:19:59.220 } 00:19:59.220 11:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:59.220 [2024-10-11 11:57:01.873260] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:19:59.220 [2024-10-11 11:57:01.873289] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:59.220 request: 00:19:59.220 { 00:19:59.220 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.220 "host": "nqn.2016-06.io.spdk:host1", 00:19:59.220 "psk": "key0", 00:19:59.220 "method": "nvmf_subsystem_add_host", 00:19:59.220 "req_id": 1 00:19:59.220 } 00:19:59.220 Got JSON-RPC error response 00:19:59.220 response: 00:19:59.220 { 00:19:59.220 "code": -32603, 00:19:59.220 "message": "Internal error" 00:19:59.220 } 00:19:59.220 11:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:59.220 11:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:59.220 11:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:59.220 11:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:59.220 11:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1948611 00:19:59.220 11:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1948611 ']' 00:19:59.221 11:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1948611 00:19:59.221 11:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:59.221 11:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:59.221 11:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1948611 00:19:59.481 11:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:59.481 11:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:59.481 11:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1948611' 00:19:59.481 killing process with pid 1948611 00:19:59.481 11:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1948611 00:19:59.481 11:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1948611 00:19:59.481 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.Wl4L5Jh4qt 00:19:59.481 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:19:59.481 11:57:02 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:59.481 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:59.481 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.481 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1949354 00:19:59.481 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1949354 00:19:59.481 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:59.481 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1949354 ']' 00:19:59.481 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:59.481 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:59.481 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:59.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:59.481 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:59.481 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.481 [2024-10-11 11:57:02.140055] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:19:59.481 [2024-10-11 11:57:02.140117] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:59.742 [2024-10-11 11:57:02.222675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.742 [2024-10-11 11:57:02.252736] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:59.742 [2024-10-11 11:57:02.252768] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:59.742 [2024-10-11 11:57:02.252774] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:59.742 [2024-10-11 11:57:02.252778] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:59.742 [2024-10-11 11:57:02.252783] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
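On the target side the same 0666 key produces a different surface error: setup_nvmf_tgt gets through the transport, subsystem, listener and namespace steps, but keyring_file_add_key fails, so the following nvmf_subsystem_add_host --psk key0 is reported as -32603 "Internal error" with "Key 'key0' does not exist" in the target log. Only after chmod 0600 restores the permissions (target/tls.sh@182 above) does the full setup, repeated below, go through cleanly. A sketch of the failing pair:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Fails while the key file is still 0666, so key0 is never registered...
    $rpc keyring_file_add_key key0 /tmp/tmp.Wl4L5Jh4qt || true
    # ...and the host registration then fails with -32603 (Internal error).
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0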
00:19:59.742 [2024-10-11 11:57:02.253224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:00.312 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:00.312 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:00.312 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:00.312 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:00.312 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.312 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:00.312 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.Wl4L5Jh4qt 00:20:00.312 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Wl4L5Jh4qt 00:20:00.312 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:00.572 [2024-10-11 11:57:03.137851] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:00.572 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:00.831 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:00.831 [2024-10-11 11:57:03.494725] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:00.831 [2024-10-11 11:57:03.494919] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:00.831 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:01.091 malloc0 00:20:01.091 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:01.351 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Wl4L5Jh4qt 00:20:01.610 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:01.610 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:01.610 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1949725 00:20:01.610 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:01.610 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1949725 /var/tmp/bdevperf.sock 00:20:01.610 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 1949725 ']' 00:20:01.610 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:01.610 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:01.610 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:01.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:01.610 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:01.611 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.611 [2024-10-11 11:57:04.302966] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:20:01.611 [2024-10-11 11:57:04.303021] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1949725 ] 00:20:01.871 [2024-10-11 11:57:04.381744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.871 [2024-10-11 11:57:04.416763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:01.871 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:01.871 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:01.871 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Wl4L5Jh4qt 00:20:02.137 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:02.137 [2024-10-11 11:57:04.834850] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:02.423 TLSTESTn1 00:20:02.423 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:02.712 11:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:20:02.713 "subsystems": [ 00:20:02.713 { 00:20:02.713 "subsystem": "keyring", 00:20:02.713 "config": [ 00:20:02.713 { 00:20:02.713 "method": "keyring_file_add_key", 00:20:02.713 "params": { 00:20:02.713 "name": "key0", 00:20:02.713 "path": "/tmp/tmp.Wl4L5Jh4qt" 00:20:02.713 } 00:20:02.713 } 00:20:02.713 ] 00:20:02.713 }, 00:20:02.713 { 00:20:02.713 "subsystem": "iobuf", 00:20:02.713 "config": [ 00:20:02.713 { 00:20:02.713 "method": "iobuf_set_options", 00:20:02.713 "params": { 00:20:02.713 "small_pool_count": 8192, 00:20:02.713 "large_pool_count": 1024, 00:20:02.713 "small_bufsize": 8192, 00:20:02.713 "large_bufsize": 135168 00:20:02.713 } 00:20:02.713 } 00:20:02.713 ] 00:20:02.713 }, 00:20:02.713 { 00:20:02.713 "subsystem": "sock", 00:20:02.713 "config": [ 00:20:02.713 { 00:20:02.713 "method": "sock_set_default_impl", 00:20:02.713 "params": { 00:20:02.713 "impl_name": "posix" 00:20:02.713 } 00:20:02.713 }, 
00:20:02.713 { 00:20:02.713 "method": "sock_impl_set_options", 00:20:02.713 "params": { 00:20:02.713 "impl_name": "ssl", 00:20:02.713 "recv_buf_size": 4096, 00:20:02.713 "send_buf_size": 4096, 00:20:02.713 "enable_recv_pipe": true, 00:20:02.713 "enable_quickack": false, 00:20:02.713 "enable_placement_id": 0, 00:20:02.713 "enable_zerocopy_send_server": true, 00:20:02.713 "enable_zerocopy_send_client": false, 00:20:02.713 "zerocopy_threshold": 0, 00:20:02.713 "tls_version": 0, 00:20:02.713 "enable_ktls": false 00:20:02.713 } 00:20:02.713 }, 00:20:02.713 { 00:20:02.713 "method": "sock_impl_set_options", 00:20:02.713 "params": { 00:20:02.713 "impl_name": "posix", 00:20:02.713 "recv_buf_size": 2097152, 00:20:02.713 "send_buf_size": 2097152, 00:20:02.713 "enable_recv_pipe": true, 00:20:02.713 "enable_quickack": false, 00:20:02.713 "enable_placement_id": 0, 00:20:02.713 "enable_zerocopy_send_server": true, 00:20:02.713 "enable_zerocopy_send_client": false, 00:20:02.713 "zerocopy_threshold": 0, 00:20:02.713 "tls_version": 0, 00:20:02.713 "enable_ktls": false 00:20:02.713 } 00:20:02.713 } 00:20:02.713 ] 00:20:02.713 }, 00:20:02.713 { 00:20:02.713 "subsystem": "vmd", 00:20:02.713 "config": [] 00:20:02.713 }, 00:20:02.713 { 00:20:02.713 "subsystem": "accel", 00:20:02.713 "config": [ 00:20:02.713 { 00:20:02.713 "method": "accel_set_options", 00:20:02.713 "params": { 00:20:02.713 "small_cache_size": 128, 00:20:02.713 "large_cache_size": 16, 00:20:02.713 "task_count": 2048, 00:20:02.713 "sequence_count": 2048, 00:20:02.713 "buf_count": 2048 00:20:02.713 } 00:20:02.713 } 00:20:02.713 ] 00:20:02.713 }, 00:20:02.713 { 00:20:02.713 "subsystem": "bdev", 00:20:02.713 "config": [ 00:20:02.713 { 00:20:02.713 "method": "bdev_set_options", 00:20:02.713 "params": { 00:20:02.713 "bdev_io_pool_size": 65535, 00:20:02.713 "bdev_io_cache_size": 256, 00:20:02.713 "bdev_auto_examine": true, 00:20:02.713 "iobuf_small_cache_size": 128, 00:20:02.713 "iobuf_large_cache_size": 16 00:20:02.713 } 00:20:02.713 }, 00:20:02.713 { 00:20:02.713 "method": "bdev_raid_set_options", 00:20:02.713 "params": { 00:20:02.713 "process_window_size_kb": 1024, 00:20:02.713 "process_max_bandwidth_mb_sec": 0 00:20:02.713 } 00:20:02.713 }, 00:20:02.713 { 00:20:02.713 "method": "bdev_iscsi_set_options", 00:20:02.713 "params": { 00:20:02.713 "timeout_sec": 30 00:20:02.713 } 00:20:02.713 }, 00:20:02.713 { 00:20:02.713 "method": "bdev_nvme_set_options", 00:20:02.713 "params": { 00:20:02.713 "action_on_timeout": "none", 00:20:02.713 "timeout_us": 0, 00:20:02.713 "timeout_admin_us": 0, 00:20:02.713 "keep_alive_timeout_ms": 10000, 00:20:02.713 "arbitration_burst": 0, 00:20:02.713 "low_priority_weight": 0, 00:20:02.713 "medium_priority_weight": 0, 00:20:02.713 "high_priority_weight": 0, 00:20:02.713 "nvme_adminq_poll_period_us": 10000, 00:20:02.713 "nvme_ioq_poll_period_us": 0, 00:20:02.713 "io_queue_requests": 0, 00:20:02.713 "delay_cmd_submit": true, 00:20:02.713 "transport_retry_count": 4, 00:20:02.713 "bdev_retry_count": 3, 00:20:02.713 "transport_ack_timeout": 0, 00:20:02.713 "ctrlr_loss_timeout_sec": 0, 00:20:02.713 "reconnect_delay_sec": 0, 00:20:02.713 "fast_io_fail_timeout_sec": 0, 00:20:02.713 "disable_auto_failback": false, 00:20:02.713 "generate_uuids": false, 00:20:02.713 "transport_tos": 0, 00:20:02.713 "nvme_error_stat": false, 00:20:02.713 "rdma_srq_size": 0, 00:20:02.713 "io_path_stat": false, 00:20:02.713 "allow_accel_sequence": false, 00:20:02.713 "rdma_max_cq_size": 0, 00:20:02.713 "rdma_cm_event_timeout_ms": 0, 00:20:02.713 
"dhchap_digests": [ 00:20:02.713 "sha256", 00:20:02.713 "sha384", 00:20:02.713 "sha512" 00:20:02.713 ], 00:20:02.713 "dhchap_dhgroups": [ 00:20:02.713 "null", 00:20:02.713 "ffdhe2048", 00:20:02.713 "ffdhe3072", 00:20:02.713 "ffdhe4096", 00:20:02.713 "ffdhe6144", 00:20:02.713 "ffdhe8192" 00:20:02.713 ] 00:20:02.713 } 00:20:02.713 }, 00:20:02.713 { 00:20:02.713 "method": "bdev_nvme_set_hotplug", 00:20:02.713 "params": { 00:20:02.713 "period_us": 100000, 00:20:02.713 "enable": false 00:20:02.713 } 00:20:02.713 }, 00:20:02.713 { 00:20:02.713 "method": "bdev_malloc_create", 00:20:02.713 "params": { 00:20:02.713 "name": "malloc0", 00:20:02.713 "num_blocks": 8192, 00:20:02.713 "block_size": 4096, 00:20:02.713 "physical_block_size": 4096, 00:20:02.713 "uuid": "4076e893-dab4-4e62-b9a7-cc99096e92cd", 00:20:02.713 "optimal_io_boundary": 0, 00:20:02.713 "md_size": 0, 00:20:02.713 "dif_type": 0, 00:20:02.713 "dif_is_head_of_md": false, 00:20:02.713 "dif_pi_format": 0 00:20:02.713 } 00:20:02.713 }, 00:20:02.713 { 00:20:02.713 "method": "bdev_wait_for_examine" 00:20:02.713 } 00:20:02.713 ] 00:20:02.713 }, 00:20:02.713 { 00:20:02.713 "subsystem": "nbd", 00:20:02.713 "config": [] 00:20:02.713 }, 00:20:02.713 { 00:20:02.713 "subsystem": "scheduler", 00:20:02.713 "config": [ 00:20:02.713 { 00:20:02.713 "method": "framework_set_scheduler", 00:20:02.713 "params": { 00:20:02.713 "name": "static" 00:20:02.713 } 00:20:02.713 } 00:20:02.713 ] 00:20:02.713 }, 00:20:02.713 { 00:20:02.713 "subsystem": "nvmf", 00:20:02.713 "config": [ 00:20:02.713 { 00:20:02.713 "method": "nvmf_set_config", 00:20:02.713 "params": { 00:20:02.713 "discovery_filter": "match_any", 00:20:02.713 "admin_cmd_passthru": { 00:20:02.713 "identify_ctrlr": false 00:20:02.713 }, 00:20:02.713 "dhchap_digests": [ 00:20:02.713 "sha256", 00:20:02.713 "sha384", 00:20:02.713 "sha512" 00:20:02.713 ], 00:20:02.713 "dhchap_dhgroups": [ 00:20:02.713 "null", 00:20:02.713 "ffdhe2048", 00:20:02.713 "ffdhe3072", 00:20:02.713 "ffdhe4096", 00:20:02.713 "ffdhe6144", 00:20:02.713 "ffdhe8192" 00:20:02.713 ] 00:20:02.713 } 00:20:02.713 }, 00:20:02.713 { 00:20:02.713 "method": "nvmf_set_max_subsystems", 00:20:02.713 "params": { 00:20:02.714 "max_subsystems": 1024 00:20:02.714 } 00:20:02.714 }, 00:20:02.714 { 00:20:02.714 "method": "nvmf_set_crdt", 00:20:02.714 "params": { 00:20:02.714 "crdt1": 0, 00:20:02.714 "crdt2": 0, 00:20:02.714 "crdt3": 0 00:20:02.714 } 00:20:02.714 }, 00:20:02.714 { 00:20:02.714 "method": "nvmf_create_transport", 00:20:02.714 "params": { 00:20:02.714 "trtype": "TCP", 00:20:02.714 "max_queue_depth": 128, 00:20:02.714 "max_io_qpairs_per_ctrlr": 127, 00:20:02.714 "in_capsule_data_size": 4096, 00:20:02.714 "max_io_size": 131072, 00:20:02.714 "io_unit_size": 131072, 00:20:02.714 "max_aq_depth": 128, 00:20:02.714 "num_shared_buffers": 511, 00:20:02.714 "buf_cache_size": 4294967295, 00:20:02.714 "dif_insert_or_strip": false, 00:20:02.714 "zcopy": false, 00:20:02.714 "c2h_success": false, 00:20:02.714 "sock_priority": 0, 00:20:02.714 "abort_timeout_sec": 1, 00:20:02.714 "ack_timeout": 0, 00:20:02.714 "data_wr_pool_size": 0 00:20:02.714 } 00:20:02.714 }, 00:20:02.714 { 00:20:02.714 "method": "nvmf_create_subsystem", 00:20:02.714 "params": { 00:20:02.714 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:02.714 "allow_any_host": false, 00:20:02.714 "serial_number": "SPDK00000000000001", 00:20:02.714 "model_number": "SPDK bdev Controller", 00:20:02.714 "max_namespaces": 10, 00:20:02.714 "min_cntlid": 1, 00:20:02.714 "max_cntlid": 65519, 00:20:02.714 
"ana_reporting": false 00:20:02.714 } 00:20:02.714 }, 00:20:02.714 { 00:20:02.714 "method": "nvmf_subsystem_add_host", 00:20:02.714 "params": { 00:20:02.714 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:02.714 "host": "nqn.2016-06.io.spdk:host1", 00:20:02.714 "psk": "key0" 00:20:02.714 } 00:20:02.714 }, 00:20:02.714 { 00:20:02.714 "method": "nvmf_subsystem_add_ns", 00:20:02.714 "params": { 00:20:02.714 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:02.714 "namespace": { 00:20:02.714 "nsid": 1, 00:20:02.714 "bdev_name": "malloc0", 00:20:02.714 "nguid": "4076E893DAB44E62B9A7CC99096E92CD", 00:20:02.714 "uuid": "4076e893-dab4-4e62-b9a7-cc99096e92cd", 00:20:02.714 "no_auto_visible": false 00:20:02.714 } 00:20:02.714 } 00:20:02.714 }, 00:20:02.714 { 00:20:02.714 "method": "nvmf_subsystem_add_listener", 00:20:02.714 "params": { 00:20:02.714 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:02.714 "listen_address": { 00:20:02.714 "trtype": "TCP", 00:20:02.714 "adrfam": "IPv4", 00:20:02.714 "traddr": "10.0.0.2", 00:20:02.714 "trsvcid": "4420" 00:20:02.714 }, 00:20:02.714 "secure_channel": true 00:20:02.714 } 00:20:02.714 } 00:20:02.714 ] 00:20:02.714 } 00:20:02.714 ] 00:20:02.714 }' 00:20:02.714 11:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:03.019 11:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:20:03.019 "subsystems": [ 00:20:03.019 { 00:20:03.019 "subsystem": "keyring", 00:20:03.019 "config": [ 00:20:03.019 { 00:20:03.019 "method": "keyring_file_add_key", 00:20:03.019 "params": { 00:20:03.019 "name": "key0", 00:20:03.019 "path": "/tmp/tmp.Wl4L5Jh4qt" 00:20:03.019 } 00:20:03.019 } 00:20:03.019 ] 00:20:03.019 }, 00:20:03.019 { 00:20:03.019 "subsystem": "iobuf", 00:20:03.019 "config": [ 00:20:03.019 { 00:20:03.019 "method": "iobuf_set_options", 00:20:03.019 "params": { 00:20:03.019 "small_pool_count": 8192, 00:20:03.019 "large_pool_count": 1024, 00:20:03.019 "small_bufsize": 8192, 00:20:03.019 "large_bufsize": 135168 00:20:03.019 } 00:20:03.019 } 00:20:03.019 ] 00:20:03.019 }, 00:20:03.019 { 00:20:03.019 "subsystem": "sock", 00:20:03.019 "config": [ 00:20:03.019 { 00:20:03.019 "method": "sock_set_default_impl", 00:20:03.019 "params": { 00:20:03.019 "impl_name": "posix" 00:20:03.019 } 00:20:03.019 }, 00:20:03.019 { 00:20:03.019 "method": "sock_impl_set_options", 00:20:03.019 "params": { 00:20:03.019 "impl_name": "ssl", 00:20:03.019 "recv_buf_size": 4096, 00:20:03.019 "send_buf_size": 4096, 00:20:03.019 "enable_recv_pipe": true, 00:20:03.019 "enable_quickack": false, 00:20:03.019 "enable_placement_id": 0, 00:20:03.019 "enable_zerocopy_send_server": true, 00:20:03.019 "enable_zerocopy_send_client": false, 00:20:03.019 "zerocopy_threshold": 0, 00:20:03.019 "tls_version": 0, 00:20:03.019 "enable_ktls": false 00:20:03.019 } 00:20:03.019 }, 00:20:03.019 { 00:20:03.019 "method": "sock_impl_set_options", 00:20:03.019 "params": { 00:20:03.019 "impl_name": "posix", 00:20:03.019 "recv_buf_size": 2097152, 00:20:03.019 "send_buf_size": 2097152, 00:20:03.019 "enable_recv_pipe": true, 00:20:03.019 "enable_quickack": false, 00:20:03.019 "enable_placement_id": 0, 00:20:03.019 "enable_zerocopy_send_server": true, 00:20:03.019 "enable_zerocopy_send_client": false, 00:20:03.019 "zerocopy_threshold": 0, 00:20:03.019 "tls_version": 0, 00:20:03.019 "enable_ktls": false 00:20:03.019 } 00:20:03.019 } 00:20:03.019 ] 00:20:03.019 }, 00:20:03.019 { 00:20:03.019 
"subsystem": "vmd", 00:20:03.019 "config": [] 00:20:03.019 }, 00:20:03.019 { 00:20:03.019 "subsystem": "accel", 00:20:03.019 "config": [ 00:20:03.019 { 00:20:03.020 "method": "accel_set_options", 00:20:03.020 "params": { 00:20:03.020 "small_cache_size": 128, 00:20:03.020 "large_cache_size": 16, 00:20:03.020 "task_count": 2048, 00:20:03.020 "sequence_count": 2048, 00:20:03.020 "buf_count": 2048 00:20:03.020 } 00:20:03.020 } 00:20:03.020 ] 00:20:03.020 }, 00:20:03.020 { 00:20:03.020 "subsystem": "bdev", 00:20:03.020 "config": [ 00:20:03.020 { 00:20:03.020 "method": "bdev_set_options", 00:20:03.020 "params": { 00:20:03.020 "bdev_io_pool_size": 65535, 00:20:03.020 "bdev_io_cache_size": 256, 00:20:03.020 "bdev_auto_examine": true, 00:20:03.020 "iobuf_small_cache_size": 128, 00:20:03.020 "iobuf_large_cache_size": 16 00:20:03.020 } 00:20:03.020 }, 00:20:03.020 { 00:20:03.020 "method": "bdev_raid_set_options", 00:20:03.020 "params": { 00:20:03.020 "process_window_size_kb": 1024, 00:20:03.020 "process_max_bandwidth_mb_sec": 0 00:20:03.020 } 00:20:03.020 }, 00:20:03.020 { 00:20:03.020 "method": "bdev_iscsi_set_options", 00:20:03.020 "params": { 00:20:03.020 "timeout_sec": 30 00:20:03.020 } 00:20:03.020 }, 00:20:03.020 { 00:20:03.020 "method": "bdev_nvme_set_options", 00:20:03.020 "params": { 00:20:03.020 "action_on_timeout": "none", 00:20:03.020 "timeout_us": 0, 00:20:03.020 "timeout_admin_us": 0, 00:20:03.020 "keep_alive_timeout_ms": 10000, 00:20:03.020 "arbitration_burst": 0, 00:20:03.020 "low_priority_weight": 0, 00:20:03.020 "medium_priority_weight": 0, 00:20:03.020 "high_priority_weight": 0, 00:20:03.020 "nvme_adminq_poll_period_us": 10000, 00:20:03.020 "nvme_ioq_poll_period_us": 0, 00:20:03.020 "io_queue_requests": 512, 00:20:03.020 "delay_cmd_submit": true, 00:20:03.020 "transport_retry_count": 4, 00:20:03.020 "bdev_retry_count": 3, 00:20:03.020 "transport_ack_timeout": 0, 00:20:03.020 "ctrlr_loss_timeout_sec": 0, 00:20:03.020 "reconnect_delay_sec": 0, 00:20:03.020 "fast_io_fail_timeout_sec": 0, 00:20:03.020 "disable_auto_failback": false, 00:20:03.020 "generate_uuids": false, 00:20:03.020 "transport_tos": 0, 00:20:03.020 "nvme_error_stat": false, 00:20:03.020 "rdma_srq_size": 0, 00:20:03.020 "io_path_stat": false, 00:20:03.020 "allow_accel_sequence": false, 00:20:03.020 "rdma_max_cq_size": 0, 00:20:03.020 "rdma_cm_event_timeout_ms": 0, 00:20:03.020 "dhchap_digests": [ 00:20:03.020 "sha256", 00:20:03.020 "sha384", 00:20:03.020 "sha512" 00:20:03.020 ], 00:20:03.020 "dhchap_dhgroups": [ 00:20:03.020 "null", 00:20:03.020 "ffdhe2048", 00:20:03.020 "ffdhe3072", 00:20:03.020 "ffdhe4096", 00:20:03.020 "ffdhe6144", 00:20:03.020 "ffdhe8192" 00:20:03.020 ] 00:20:03.020 } 00:20:03.020 }, 00:20:03.020 { 00:20:03.020 "method": "bdev_nvme_attach_controller", 00:20:03.020 "params": { 00:20:03.020 "name": "TLSTEST", 00:20:03.020 "trtype": "TCP", 00:20:03.020 "adrfam": "IPv4", 00:20:03.020 "traddr": "10.0.0.2", 00:20:03.020 "trsvcid": "4420", 00:20:03.020 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.020 "prchk_reftag": false, 00:20:03.020 "prchk_guard": false, 00:20:03.020 "ctrlr_loss_timeout_sec": 0, 00:20:03.020 "reconnect_delay_sec": 0, 00:20:03.020 "fast_io_fail_timeout_sec": 0, 00:20:03.020 "psk": "key0", 00:20:03.020 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:03.020 "hdgst": false, 00:20:03.020 "ddgst": false, 00:20:03.020 "multipath": "multipath" 00:20:03.020 } 00:20:03.020 }, 00:20:03.020 { 00:20:03.020 "method": "bdev_nvme_set_hotplug", 00:20:03.020 "params": { 00:20:03.020 "period_us": 
100000, 00:20:03.020 "enable": false 00:20:03.020 } 00:20:03.020 }, 00:20:03.020 { 00:20:03.020 "method": "bdev_wait_for_examine" 00:20:03.020 } 00:20:03.020 ] 00:20:03.020 }, 00:20:03.020 { 00:20:03.020 "subsystem": "nbd", 00:20:03.020 "config": [] 00:20:03.020 } 00:20:03.020 ] 00:20:03.020 }' 00:20:03.020 11:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1949725 00:20:03.020 11:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1949725 ']' 00:20:03.020 11:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1949725 00:20:03.020 11:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:03.020 11:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:03.020 11:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1949725 00:20:03.020 11:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:03.020 11:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:03.020 11:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1949725' 00:20:03.020 killing process with pid 1949725 00:20:03.020 11:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1949725 00:20:03.020 Received shutdown signal, test time was about 10.000000 seconds 00:20:03.020 00:20:03.020 Latency(us) 00:20:03.020 [2024-10-11T09:57:05.723Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.020 [2024-10-11T09:57:05.723Z] =================================================================================================================== 00:20:03.020 [2024-10-11T09:57:05.723Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:03.020 11:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1949725 00:20:03.020 11:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1949354 00:20:03.020 11:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1949354 ']' 00:20:03.020 11:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1949354 00:20:03.020 11:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:03.020 11:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:03.020 11:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1949354 00:20:03.020 11:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:03.020 11:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:03.020 11:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1949354' 00:20:03.020 killing process with pid 1949354 00:20:03.020 11:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1949354 00:20:03.020 11:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1949354 00:20:03.282 11:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:03.282 
11:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:03.282 11:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:03.282 11:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:03.282 11:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:20:03.282 "subsystems": [ 00:20:03.282 { 00:20:03.282 "subsystem": "keyring", 00:20:03.282 "config": [ 00:20:03.282 { 00:20:03.282 "method": "keyring_file_add_key", 00:20:03.282 "params": { 00:20:03.282 "name": "key0", 00:20:03.282 "path": "/tmp/tmp.Wl4L5Jh4qt" 00:20:03.282 } 00:20:03.282 } 00:20:03.282 ] 00:20:03.282 }, 00:20:03.282 { 00:20:03.282 "subsystem": "iobuf", 00:20:03.282 "config": [ 00:20:03.282 { 00:20:03.282 "method": "iobuf_set_options", 00:20:03.282 "params": { 00:20:03.282 "small_pool_count": 8192, 00:20:03.282 "large_pool_count": 1024, 00:20:03.282 "small_bufsize": 8192, 00:20:03.282 "large_bufsize": 135168 00:20:03.282 } 00:20:03.282 } 00:20:03.282 ] 00:20:03.282 }, 00:20:03.282 { 00:20:03.282 "subsystem": "sock", 00:20:03.282 "config": [ 00:20:03.282 { 00:20:03.282 "method": "sock_set_default_impl", 00:20:03.282 "params": { 00:20:03.282 "impl_name": "posix" 00:20:03.282 } 00:20:03.282 }, 00:20:03.282 { 00:20:03.282 "method": "sock_impl_set_options", 00:20:03.282 "params": { 00:20:03.282 "impl_name": "ssl", 00:20:03.282 "recv_buf_size": 4096, 00:20:03.282 "send_buf_size": 4096, 00:20:03.282 "enable_recv_pipe": true, 00:20:03.282 "enable_quickack": false, 00:20:03.282 "enable_placement_id": 0, 00:20:03.282 "enable_zerocopy_send_server": true, 00:20:03.282 "enable_zerocopy_send_client": false, 00:20:03.282 "zerocopy_threshold": 0, 00:20:03.282 "tls_version": 0, 00:20:03.282 "enable_ktls": false 00:20:03.282 } 00:20:03.282 }, 00:20:03.282 { 00:20:03.282 "method": "sock_impl_set_options", 00:20:03.282 "params": { 00:20:03.282 "impl_name": "posix", 00:20:03.282 "recv_buf_size": 2097152, 00:20:03.282 "send_buf_size": 2097152, 00:20:03.282 "enable_recv_pipe": true, 00:20:03.282 "enable_quickack": false, 00:20:03.282 "enable_placement_id": 0, 00:20:03.282 "enable_zerocopy_send_server": true, 00:20:03.282 "enable_zerocopy_send_client": false, 00:20:03.282 "zerocopy_threshold": 0, 00:20:03.282 "tls_version": 0, 00:20:03.282 "enable_ktls": false 00:20:03.282 } 00:20:03.282 } 00:20:03.282 ] 00:20:03.282 }, 00:20:03.282 { 00:20:03.282 "subsystem": "vmd", 00:20:03.282 "config": [] 00:20:03.282 }, 00:20:03.282 { 00:20:03.282 "subsystem": "accel", 00:20:03.282 "config": [ 00:20:03.282 { 00:20:03.282 "method": "accel_set_options", 00:20:03.282 "params": { 00:20:03.282 "small_cache_size": 128, 00:20:03.282 "large_cache_size": 16, 00:20:03.282 "task_count": 2048, 00:20:03.282 "sequence_count": 2048, 00:20:03.282 "buf_count": 2048 00:20:03.282 } 00:20:03.282 } 00:20:03.282 ] 00:20:03.282 }, 00:20:03.282 { 00:20:03.282 "subsystem": "bdev", 00:20:03.282 "config": [ 00:20:03.282 { 00:20:03.282 "method": "bdev_set_options", 00:20:03.282 "params": { 00:20:03.282 "bdev_io_pool_size": 65535, 00:20:03.282 "bdev_io_cache_size": 256, 00:20:03.282 "bdev_auto_examine": true, 00:20:03.282 "iobuf_small_cache_size": 128, 00:20:03.282 "iobuf_large_cache_size": 16 00:20:03.282 } 00:20:03.282 }, 00:20:03.282 { 00:20:03.282 "method": "bdev_raid_set_options", 00:20:03.282 "params": { 00:20:03.282 "process_window_size_kb": 1024, 00:20:03.282 "process_max_bandwidth_mb_sec": 0 00:20:03.282 } 00:20:03.282 }, 
00:20:03.282 { 00:20:03.282 "method": "bdev_iscsi_set_options", 00:20:03.282 "params": { 00:20:03.282 "timeout_sec": 30 00:20:03.282 } 00:20:03.282 }, 00:20:03.282 { 00:20:03.282 "method": "bdev_nvme_set_options", 00:20:03.282 "params": { 00:20:03.282 "action_on_timeout": "none", 00:20:03.282 "timeout_us": 0, 00:20:03.282 "timeout_admin_us": 0, 00:20:03.282 "keep_alive_timeout_ms": 10000, 00:20:03.282 "arbitration_burst": 0, 00:20:03.282 "low_priority_weight": 0, 00:20:03.282 "medium_priority_weight": 0, 00:20:03.282 "high_priority_weight": 0, 00:20:03.282 "nvme_adminq_poll_period_us": 10000, 00:20:03.282 "nvme_ioq_poll_period_us": 0, 00:20:03.282 "io_queue_requests": 0, 00:20:03.282 "delay_cmd_submit": true, 00:20:03.282 "transport_retry_count": 4, 00:20:03.282 "bdev_retry_count": 3, 00:20:03.282 "transport_ack_timeout": 0, 00:20:03.282 "ctrlr_loss_timeout_sec": 0, 00:20:03.282 "reconnect_delay_sec": 0, 00:20:03.282 "fast_io_fail_timeout_sec": 0, 00:20:03.282 "disable_auto_failback": false, 00:20:03.282 "generate_uuids": false, 00:20:03.282 "transport_tos": 0, 00:20:03.282 "nvme_error_stat": false, 00:20:03.282 "rdma_srq_size": 0, 00:20:03.283 "io_path_stat": false, 00:20:03.283 "allow_accel_sequence": false, 00:20:03.283 "rdma_max_cq_size": 0, 00:20:03.283 "rdma_cm_event_timeout_ms": 0, 00:20:03.283 "dhchap_digests": [ 00:20:03.283 "sha256", 00:20:03.283 "sha384", 00:20:03.283 "sha512" 00:20:03.283 ], 00:20:03.283 "dhchap_dhgroups": [ 00:20:03.283 "null", 00:20:03.283 "ffdhe2048", 00:20:03.283 "ffdhe3072", 00:20:03.283 "ffdhe4096", 00:20:03.283 "ffdhe6144", 00:20:03.283 "ffdhe8192" 00:20:03.283 ] 00:20:03.283 } 00:20:03.283 }, 00:20:03.283 { 00:20:03.283 "method": "bdev_nvme_set_hotplug", 00:20:03.283 "params": { 00:20:03.283 "period_us": 100000, 00:20:03.283 "enable": false 00:20:03.283 } 00:20:03.283 }, 00:20:03.283 { 00:20:03.283 "method": "bdev_malloc_create", 00:20:03.283 "params": { 00:20:03.283 "name": "malloc0", 00:20:03.283 "num_blocks": 8192, 00:20:03.283 "block_size": 4096, 00:20:03.283 "physical_block_size": 4096, 00:20:03.283 "uuid": "4076e893-dab4-4e62-b9a7-cc99096e92cd", 00:20:03.283 "optimal_io_boundary": 0, 00:20:03.283 "md_size": 0, 00:20:03.283 "dif_type": 0, 00:20:03.283 "dif_is_head_of_md": false, 00:20:03.283 "dif_pi_format": 0 00:20:03.283 } 00:20:03.283 }, 00:20:03.283 { 00:20:03.283 "method": "bdev_wait_for_examine" 00:20:03.283 } 00:20:03.283 ] 00:20:03.283 }, 00:20:03.283 { 00:20:03.283 "subsystem": "nbd", 00:20:03.283 "config": [] 00:20:03.283 }, 00:20:03.283 { 00:20:03.283 "subsystem": "scheduler", 00:20:03.283 "config": [ 00:20:03.283 { 00:20:03.283 "method": "framework_set_scheduler", 00:20:03.283 "params": { 00:20:03.283 "name": "static" 00:20:03.283 } 00:20:03.283 } 00:20:03.283 ] 00:20:03.283 }, 00:20:03.283 { 00:20:03.283 "subsystem": "nvmf", 00:20:03.283 "config": [ 00:20:03.283 { 00:20:03.283 "method": "nvmf_set_config", 00:20:03.283 "params": { 00:20:03.283 "discovery_filter": "match_any", 00:20:03.283 "admin_cmd_passthru": { 00:20:03.283 "identify_ctrlr": false 00:20:03.283 }, 00:20:03.283 "dhchap_digests": [ 00:20:03.283 "sha256", 00:20:03.283 "sha384", 00:20:03.283 "sha512" 00:20:03.283 ], 00:20:03.283 "dhchap_dhgroups": [ 00:20:03.283 "null", 00:20:03.283 "ffdhe2048", 00:20:03.283 "ffdhe3072", 00:20:03.283 "ffdhe4096", 00:20:03.283 "ffdhe6144", 00:20:03.283 "ffdhe8192" 00:20:03.283 ] 00:20:03.283 } 00:20:03.283 }, 00:20:03.283 { 00:20:03.283 "method": "nvmf_set_max_subsystems", 00:20:03.283 "params": { 00:20:03.283 "max_subsystems": 1024 
00:20:03.283 } 00:20:03.283 }, 00:20:03.283 { 00:20:03.283 "method": "nvmf_set_crdt", 00:20:03.283 "params": { 00:20:03.283 "crdt1": 0, 00:20:03.283 "crdt2": 0, 00:20:03.283 "crdt3": 0 00:20:03.283 } 00:20:03.283 }, 00:20:03.283 { 00:20:03.283 "method": "nvmf_create_transport", 00:20:03.283 "params": { 00:20:03.283 "trtype": "TCP", 00:20:03.283 "max_queue_depth": 128, 00:20:03.283 "max_io_qpairs_per_ctrlr": 127, 00:20:03.283 "in_capsule_data_size": 4096, 00:20:03.283 "max_io_size": 131072, 00:20:03.283 "io_unit_size": 131072, 00:20:03.283 "max_aq_depth": 128, 00:20:03.283 "num_shared_buffers": 511, 00:20:03.283 "buf_cache_size": 4294967295, 00:20:03.283 "dif_insert_or_strip": false, 00:20:03.283 "zcopy": false, 00:20:03.283 "c2h_success": false, 00:20:03.283 "sock_priority": 0, 00:20:03.283 "abort_timeout_sec": 1, 00:20:03.283 "ack_timeout": 0, 00:20:03.283 "data_wr_pool_size": 0 00:20:03.283 } 00:20:03.283 }, 00:20:03.283 { 00:20:03.283 "method": "nvmf_create_subsystem", 00:20:03.283 "params": { 00:20:03.283 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.283 "allow_any_host": false, 00:20:03.283 "serial_number": "SPDK00000000000001", 00:20:03.283 "model_number": "SPDK bdev Controller", 00:20:03.283 "max_namespaces": 10, 00:20:03.283 "min_cntlid": 1, 00:20:03.283 "max_cntlid": 65519, 00:20:03.283 "ana_reporting": false 00:20:03.283 } 00:20:03.283 }, 00:20:03.283 { 00:20:03.283 "method": "nvmf_subsystem_add_host", 00:20:03.283 "params": { 00:20:03.283 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.283 "host": "nqn.2016-06.io.spdk:host1", 00:20:03.283 "psk": "key0" 00:20:03.283 } 00:20:03.283 }, 00:20:03.283 { 00:20:03.283 "method": "nvmf_subsystem_add_ns", 00:20:03.283 "params": { 00:20:03.283 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.283 "namespace": { 00:20:03.283 "nsid": 1, 00:20:03.283 "bdev_name": "malloc0", 00:20:03.283 "nguid": "4076E893DAB44E62B9A7CC99096E92CD", 00:20:03.283 "uuid": "4076e893-dab4-4e62-b9a7-cc99096e92cd", 00:20:03.283 "no_auto_visible": false 00:20:03.283 } 00:20:03.283 } 00:20:03.283 }, 00:20:03.283 { 00:20:03.283 "method": "nvmf_subsystem_add_listener", 00:20:03.283 "params": { 00:20:03.283 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.283 "listen_address": { 00:20:03.283 "trtype": "TCP", 00:20:03.283 "adrfam": "IPv4", 00:20:03.283 "traddr": "10.0.0.2", 00:20:03.283 "trsvcid": "4420" 00:20:03.283 }, 00:20:03.283 "secure_channel": true 00:20:03.283 } 00:20:03.283 } 00:20:03.283 ] 00:20:03.283 } 00:20:03.283 ] 00:20:03.283 }' 00:20:03.283 11:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1950083 00:20:03.283 11:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1950083 00:20:03.283 11:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:03.283 11:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1950083 ']' 00:20:03.283 11:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:03.283 11:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:03.283 11:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:03.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:03.283 11:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:03.283 11:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:03.283 [2024-10-11 11:57:05.868428] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:20:03.283 [2024-10-11 11:57:05.868488] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:03.283 [2024-10-11 11:57:05.951309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.283 [2024-10-11 11:57:05.981708] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:03.283 [2024-10-11 11:57:05.981734] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:03.283 [2024-10-11 11:57:05.981740] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:03.283 [2024-10-11 11:57:05.981745] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:03.283 [2024-10-11 11:57:05.981749] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:03.283 [2024-10-11 11:57:05.982250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:03.544 [2024-10-11 11:57:06.174883] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:03.544 [2024-10-11 11:57:06.206907] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:03.544 [2024-10-11 11:57:06.207108] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:04.115 11:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:04.115 11:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:04.115 11:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:04.115 11:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:04.115 11:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:04.115 11:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:04.115 11:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1950185 00:20:04.115 11:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1950185 /var/tmp/bdevperf.sock 00:20:04.115 11:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1950185 ']' 00:20:04.115 11:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:04.115 11:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:04.115 11:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
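The JSON blobs echoed above never touch disk: target/tls.sh captures or assembles a configuration in a shell variable (tls.sh@199 uses rpc.py save_config against /var/tmp/bdevperf.sock) and hands it to the next daemon through bash process substitution, which is why nvmf_tgt is started with "-c /dev/fd/62" and bdevperf with "-c /dev/fd/63". A minimal sketch of that pattern, not taken verbatim from the script, with the Jenkins workspace prefix dropped and the waitforlisten helper assumed from the autotest_common.sh trace above:

  # capture a running app's configuration over its RPC socket
  conf=$(scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)

  # hand a JSON config to a new process without a temp file; the <(...) process
  # substitution is what shows up as "-c /dev/fd/62" / "-c /dev/fd/63" in the trace
  build/bin/nvmf_tgt -m 0x2 -c <(echo "$conf") &
  waitforlisten $! /var/tmp/spdk.sock    # helper from autotest_common.sh, as seen above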
00:20:04.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:04.115 11:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:04.115 11:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:04.115 11:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:04.115 11:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:20:04.115 "subsystems": [ 00:20:04.115 { 00:20:04.115 "subsystem": "keyring", 00:20:04.115 "config": [ 00:20:04.115 { 00:20:04.115 "method": "keyring_file_add_key", 00:20:04.115 "params": { 00:20:04.115 "name": "key0", 00:20:04.115 "path": "/tmp/tmp.Wl4L5Jh4qt" 00:20:04.115 } 00:20:04.115 } 00:20:04.115 ] 00:20:04.115 }, 00:20:04.115 { 00:20:04.115 "subsystem": "iobuf", 00:20:04.115 "config": [ 00:20:04.115 { 00:20:04.115 "method": "iobuf_set_options", 00:20:04.115 "params": { 00:20:04.115 "small_pool_count": 8192, 00:20:04.115 "large_pool_count": 1024, 00:20:04.115 "small_bufsize": 8192, 00:20:04.115 "large_bufsize": 135168 00:20:04.115 } 00:20:04.115 } 00:20:04.115 ] 00:20:04.115 }, 00:20:04.115 { 00:20:04.115 "subsystem": "sock", 00:20:04.115 "config": [ 00:20:04.115 { 00:20:04.115 "method": "sock_set_default_impl", 00:20:04.115 "params": { 00:20:04.115 "impl_name": "posix" 00:20:04.115 } 00:20:04.115 }, 00:20:04.115 { 00:20:04.115 "method": "sock_impl_set_options", 00:20:04.115 "params": { 00:20:04.115 "impl_name": "ssl", 00:20:04.115 "recv_buf_size": 4096, 00:20:04.115 "send_buf_size": 4096, 00:20:04.115 "enable_recv_pipe": true, 00:20:04.115 "enable_quickack": false, 00:20:04.115 "enable_placement_id": 0, 00:20:04.115 "enable_zerocopy_send_server": true, 00:20:04.115 "enable_zerocopy_send_client": false, 00:20:04.115 "zerocopy_threshold": 0, 00:20:04.115 "tls_version": 0, 00:20:04.115 "enable_ktls": false 00:20:04.115 } 00:20:04.115 }, 00:20:04.115 { 00:20:04.115 "method": "sock_impl_set_options", 00:20:04.115 "params": { 00:20:04.115 "impl_name": "posix", 00:20:04.115 "recv_buf_size": 2097152, 00:20:04.115 "send_buf_size": 2097152, 00:20:04.115 "enable_recv_pipe": true, 00:20:04.115 "enable_quickack": false, 00:20:04.115 "enable_placement_id": 0, 00:20:04.115 "enable_zerocopy_send_server": true, 00:20:04.115 "enable_zerocopy_send_client": false, 00:20:04.115 "zerocopy_threshold": 0, 00:20:04.115 "tls_version": 0, 00:20:04.115 "enable_ktls": false 00:20:04.115 } 00:20:04.116 } 00:20:04.116 ] 00:20:04.116 }, 00:20:04.116 { 00:20:04.116 "subsystem": "vmd", 00:20:04.116 "config": [] 00:20:04.116 }, 00:20:04.116 { 00:20:04.116 "subsystem": "accel", 00:20:04.116 "config": [ 00:20:04.116 { 00:20:04.116 "method": "accel_set_options", 00:20:04.116 "params": { 00:20:04.116 "small_cache_size": 128, 00:20:04.116 "large_cache_size": 16, 00:20:04.116 "task_count": 2048, 00:20:04.116 "sequence_count": 2048, 00:20:04.116 "buf_count": 2048 00:20:04.116 } 00:20:04.116 } 00:20:04.116 ] 00:20:04.116 }, 00:20:04.116 { 00:20:04.116 "subsystem": "bdev", 00:20:04.116 "config": [ 00:20:04.116 { 00:20:04.116 "method": "bdev_set_options", 00:20:04.116 "params": { 00:20:04.116 "bdev_io_pool_size": 65535, 00:20:04.116 "bdev_io_cache_size": 256, 00:20:04.116 "bdev_auto_examine": true, 00:20:04.116 "iobuf_small_cache_size": 128, 00:20:04.116 "iobuf_large_cache_size": 16 
00:20:04.116 } 00:20:04.116 }, 00:20:04.116 { 00:20:04.116 "method": "bdev_raid_set_options", 00:20:04.116 "params": { 00:20:04.116 "process_window_size_kb": 1024, 00:20:04.116 "process_max_bandwidth_mb_sec": 0 00:20:04.116 } 00:20:04.116 }, 00:20:04.116 { 00:20:04.116 "method": "bdev_iscsi_set_options", 00:20:04.116 "params": { 00:20:04.116 "timeout_sec": 30 00:20:04.116 } 00:20:04.116 }, 00:20:04.116 { 00:20:04.116 "method": "bdev_nvme_set_options", 00:20:04.116 "params": { 00:20:04.116 "action_on_timeout": "none", 00:20:04.116 "timeout_us": 0, 00:20:04.116 "timeout_admin_us": 0, 00:20:04.116 "keep_alive_timeout_ms": 10000, 00:20:04.116 "arbitration_burst": 0, 00:20:04.116 "low_priority_weight": 0, 00:20:04.116 "medium_priority_weight": 0, 00:20:04.116 "high_priority_weight": 0, 00:20:04.116 "nvme_adminq_poll_period_us": 10000, 00:20:04.116 "nvme_ioq_poll_period_us": 0, 00:20:04.116 "io_queue_requests": 512, 00:20:04.116 "delay_cmd_submit": true, 00:20:04.116 "transport_retry_count": 4, 00:20:04.116 "bdev_retry_count": 3, 00:20:04.116 "transport_ack_timeout": 0, 00:20:04.116 "ctrlr_loss_timeout_sec": 0, 00:20:04.116 "reconnect_delay_sec": 0, 00:20:04.116 "fast_io_fail_timeout_sec": 0, 00:20:04.116 "disable_auto_failback": false, 00:20:04.116 "generate_uuids": false, 00:20:04.116 "transport_tos": 0, 00:20:04.116 "nvme_error_stat": false, 00:20:04.116 "rdma_srq_size": 0, 00:20:04.116 "io_path_stat": false, 00:20:04.116 "allow_accel_sequence": false, 00:20:04.116 "rdma_max_cq_size": 0, 00:20:04.116 "rdma_cm_event_timeout_ms": 0, 00:20:04.116 "dhchap_digests": [ 00:20:04.116 "sha256", 00:20:04.116 "sha384", 00:20:04.116 "sha512" 00:20:04.116 ], 00:20:04.116 "dhchap_dhgroups": [ 00:20:04.116 "null", 00:20:04.116 "ffdhe2048", 00:20:04.116 "ffdhe3072", 00:20:04.116 "ffdhe4096", 00:20:04.116 "ffdhe6144", 00:20:04.116 "ffdhe8192" 00:20:04.116 ] 00:20:04.116 } 00:20:04.116 }, 00:20:04.116 { 00:20:04.116 "method": "bdev_nvme_attach_controller", 00:20:04.116 "params": { 00:20:04.116 "name": "TLSTEST", 00:20:04.116 "trtype": "TCP", 00:20:04.116 "adrfam": "IPv4", 00:20:04.116 "traddr": "10.0.0.2", 00:20:04.116 "trsvcid": "4420", 00:20:04.116 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:04.116 "prchk_reftag": false, 00:20:04.116 "prchk_guard": false, 00:20:04.116 "ctrlr_loss_timeout_sec": 0, 00:20:04.116 "reconnect_delay_sec": 0, 00:20:04.116 "fast_io_fail_timeout_sec": 0, 00:20:04.116 "psk": "key0", 00:20:04.116 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:04.116 "hdgst": false, 00:20:04.116 "ddgst": false, 00:20:04.116 "multipath": "multipath" 00:20:04.116 } 00:20:04.116 }, 00:20:04.116 { 00:20:04.116 "method": "bdev_nvme_set_hotplug", 00:20:04.116 "params": { 00:20:04.116 "period_us": 100000, 00:20:04.116 "enable": false 00:20:04.116 } 00:20:04.116 }, 00:20:04.116 { 00:20:04.116 "method": "bdev_wait_for_examine" 00:20:04.116 } 00:20:04.116 ] 00:20:04.116 }, 00:20:04.116 { 00:20:04.116 "subsystem": "nbd", 00:20:04.116 "config": [] 00:20:04.116 } 00:20:04.116 ] 00:20:04.116 }' 00:20:04.116 [2024-10-11 11:57:06.731128] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:20:04.116 [2024-10-11 11:57:06.731179] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1950185 ] 00:20:04.116 [2024-10-11 11:57:06.808123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.378 [2024-10-11 11:57:06.843514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:04.378 [2024-10-11 11:57:06.983084] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:04.947 11:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:04.947 11:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:04.947 11:57:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:04.947 Running I/O for 10 seconds... 00:20:07.272 4091.00 IOPS, 15.98 MiB/s [2024-10-11T09:57:10.917Z] 4772.00 IOPS, 18.64 MiB/s [2024-10-11T09:57:11.857Z] 5174.00 IOPS, 20.21 MiB/s [2024-10-11T09:57:12.799Z] 5471.25 IOPS, 21.37 MiB/s [2024-10-11T09:57:13.739Z] 5672.20 IOPS, 22.16 MiB/s [2024-10-11T09:57:14.681Z] 5785.83 IOPS, 22.60 MiB/s [2024-10-11T09:57:15.621Z] 5807.14 IOPS, 22.68 MiB/s [2024-10-11T09:57:17.003Z] 5776.12 IOPS, 22.56 MiB/s [2024-10-11T09:57:17.944Z] 5790.11 IOPS, 22.62 MiB/s [2024-10-11T09:57:17.944Z] 5835.60 IOPS, 22.80 MiB/s 00:20:15.241 Latency(us) 00:20:15.241 [2024-10-11T09:57:17.944Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.241 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:15.241 Verification LBA range: start 0x0 length 0x2000 00:20:15.241 TLSTESTn1 : 10.01 5840.64 22.82 0.00 0.00 21880.10 5652.48 37355.52 00:20:15.241 [2024-10-11T09:57:17.944Z] =================================================================================================================== 00:20:15.241 [2024-10-11T09:57:17.944Z] Total : 5840.64 22.82 0.00 0.00 21880.10 5652.48 37355.52 00:20:15.241 { 00:20:15.241 "results": [ 00:20:15.241 { 00:20:15.241 "job": "TLSTESTn1", 00:20:15.241 "core_mask": "0x4", 00:20:15.241 "workload": "verify", 00:20:15.241 "status": "finished", 00:20:15.241 "verify_range": { 00:20:15.241 "start": 0, 00:20:15.241 "length": 8192 00:20:15.241 }, 00:20:15.241 "queue_depth": 128, 00:20:15.241 "io_size": 4096, 00:20:15.241 "runtime": 10.013114, 00:20:15.241 "iops": 5840.640583938223, 00:20:15.241 "mibps": 22.815002281008685, 00:20:15.241 "io_failed": 0, 00:20:15.241 "io_timeout": 0, 00:20:15.241 "avg_latency_us": 21880.10389070328, 00:20:15.241 "min_latency_us": 5652.48, 00:20:15.241 "max_latency_us": 37355.52 00:20:15.241 } 00:20:15.241 ], 00:20:15.241 "core_count": 1 00:20:15.241 } 00:20:15.241 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:15.241 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1950185 00:20:15.241 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1950185 ']' 00:20:15.241 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1950185 00:20:15.241 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 
-- # uname 00:20:15.241 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:15.241 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1950185 00:20:15.241 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:15.241 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:15.241 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1950185' 00:20:15.241 killing process with pid 1950185 00:20:15.241 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1950185 00:20:15.241 Received shutdown signal, test time was about 10.000000 seconds 00:20:15.241 00:20:15.241 Latency(us) 00:20:15.241 [2024-10-11T09:57:17.944Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.241 [2024-10-11T09:57:17.944Z] =================================================================================================================== 00:20:15.241 [2024-10-11T09:57:17.944Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:15.241 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1950185 00:20:15.241 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1950083 00:20:15.241 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1950083 ']' 00:20:15.241 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1950083 00:20:15.241 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:15.241 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:15.241 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1950083 00:20:15.241 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:15.241 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:15.241 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1950083' 00:20:15.241 killing process with pid 1950083 00:20:15.241 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1950083 00:20:15.241 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1950083 00:20:15.502 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:20:15.502 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:15.502 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:15.502 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:15.502 11:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1952901 00:20:15.502 11:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1952901 00:20:15.502 11:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:15.502 11:57:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1952901 ']' 00:20:15.502 11:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:15.502 11:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:15.502 11:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:15.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:15.502 11:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:15.502 11:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:15.502 [2024-10-11 11:57:18.064389] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:20:15.502 [2024-10-11 11:57:18.064448] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:15.503 [2024-10-11 11:57:18.149304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.503 [2024-10-11 11:57:18.198696] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:15.503 [2024-10-11 11:57:18.198750] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:15.503 [2024-10-11 11:57:18.198758] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:15.503 [2024-10-11 11:57:18.198766] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:15.503 [2024-10-11 11:57:18.198772] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
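The final test case starts a bare nvmf_tgt (no canned JSON config this time) and then configures it entirely over RPC: setup_nvmf_tgt, whose trace follows, creates the TCP transport, a subsystem with a malloc0 namespace, a listener opened with -k (TLS), and binds host nqn.2016-06.io.spdk:host1 to the PSK file through the keyring. A condensed sketch of the rpc.py calls recorded below, with the workspace prefix shortened:

  KEY=/tmp/tmp.Wl4L5Jh4qt                      # PSK interchange file used throughout this test
  NQN=nqn.2016-06.io.spdk:cnode1

  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem "$NQN" -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns "$NQN" malloc0 -n 1
  scripts/rpc.py keyring_file_add_key key0 "$KEY"
  scripts/rpc.py nvmf_subsystem_add_host "$NQN" nqn.2016-06.io.spdk:host1 --psk key0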
00:20:15.503 [2024-10-11 11:57:18.199573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.445 11:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:16.445 11:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:16.445 11:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:16.445 11:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:16.445 11:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.445 11:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:16.445 11:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.Wl4L5Jh4qt 00:20:16.445 11:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Wl4L5Jh4qt 00:20:16.445 11:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:16.445 [2024-10-11 11:57:19.066456] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:16.445 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:16.706 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:16.966 [2024-10-11 11:57:19.419335] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:16.966 [2024-10-11 11:57:19.419651] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:16.966 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:16.966 malloc0 00:20:16.966 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:17.227 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Wl4L5Jh4qt 00:20:17.488 11:57:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:17.488 11:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1953268 00:20:17.488 11:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:17.488 11:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:17.488 11:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1953268 /var/tmp/bdevperf.sock 00:20:17.488 11:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 1953268 ']' 00:20:17.488 11:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:17.488 11:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:17.488 11:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:17.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:17.488 11:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:17.488 11:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:17.749 [2024-10-11 11:57:20.219009] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:20:17.749 [2024-10-11 11:57:20.219086] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1953268 ] 00:20:17.749 [2024-10-11 11:57:20.298647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.749 [2024-10-11 11:57:20.333517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:18.321 11:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:18.321 11:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:18.321 11:57:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Wl4L5Jh4qt 00:20:18.581 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:18.842 [2024-10-11 11:57:21.299827] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:18.842 nvme0n1 00:20:18.842 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:18.842 Running I/O for 1 seconds... 
00:20:20.044 5174.00 IOPS, 20.21 MiB/s 00:20:20.044 Latency(us) 00:20:20.044 [2024-10-11T09:57:22.747Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:20.044 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:20.044 Verification LBA range: start 0x0 length 0x2000 00:20:20.044 nvme0n1 : 1.02 5215.88 20.37 0.00 0.00 24337.48 5597.87 31238.83 00:20:20.044 [2024-10-11T09:57:22.747Z] =================================================================================================================== 00:20:20.044 [2024-10-11T09:57:22.747Z] Total : 5215.88 20.37 0.00 0.00 24337.48 5597.87 31238.83 00:20:20.044 { 00:20:20.044 "results": [ 00:20:20.044 { 00:20:20.044 "job": "nvme0n1", 00:20:20.044 "core_mask": "0x2", 00:20:20.044 "workload": "verify", 00:20:20.044 "status": "finished", 00:20:20.044 "verify_range": { 00:20:20.044 "start": 0, 00:20:20.044 "length": 8192 00:20:20.044 }, 00:20:20.044 "queue_depth": 128, 00:20:20.044 "io_size": 4096, 00:20:20.044 "runtime": 1.016702, 00:20:20.044 "iops": 5215.884300414477, 00:20:20.044 "mibps": 20.374548048494052, 00:20:20.044 "io_failed": 0, 00:20:20.044 "io_timeout": 0, 00:20:20.044 "avg_latency_us": 24337.479896913694, 00:20:20.044 "min_latency_us": 5597.866666666667, 00:20:20.044 "max_latency_us": 31238.826666666668 00:20:20.044 } 00:20:20.044 ], 00:20:20.044 "core_count": 1 00:20:20.044 } 00:20:20.044 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1953268 00:20:20.044 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1953268 ']' 00:20:20.044 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1953268 00:20:20.044 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:20.044 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:20.044 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1953268 00:20:20.044 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:20.044 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:20.044 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1953268' 00:20:20.044 killing process with pid 1953268 00:20:20.044 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1953268 00:20:20.045 Received shutdown signal, test time was about 1.000000 seconds 00:20:20.045 00:20:20.045 Latency(us) 00:20:20.045 [2024-10-11T09:57:22.748Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:20.045 [2024-10-11T09:57:22.748Z] =================================================================================================================== 00:20:20.045 [2024-10-11T09:57:22.748Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:20.045 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1953268 00:20:20.045 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1952901 00:20:20.045 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1952901 ']' 00:20:20.045 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1952901 00:20:20.045 11:57:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:20.045 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:20.045 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1952901 00:20:20.306 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:20.306 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:20.306 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1952901' 00:20:20.306 killing process with pid 1952901 00:20:20.306 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1952901 00:20:20.306 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1952901 00:20:20.306 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:20:20.306 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:20.306 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:20.306 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.306 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1953943 00:20:20.306 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1953943 00:20:20.306 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:20.306 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1953943 ']' 00:20:20.306 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:20.306 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:20.306 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:20.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:20.306 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:20.306 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.306 [2024-10-11 11:57:22.952303] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:20:20.306 [2024-10-11 11:57:22.952361] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:20.566 [2024-10-11 11:57:23.036758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.566 [2024-10-11 11:57:23.087258] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:20.566 [2024-10-11 11:57:23.087309] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:20.566 [2024-10-11 11:57:23.087317] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:20.566 [2024-10-11 11:57:23.087324] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:20.566 [2024-10-11 11:57:23.087331] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:20.566 [2024-10-11 11:57:23.088108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.137 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:21.137 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:21.137 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:21.137 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:21.137 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:21.137 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:21.137 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:20:21.137 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.137 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:21.137 [2024-10-11 11:57:23.805950] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:21.137 malloc0 00:20:21.137 [2024-10-11 11:57:23.836002] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:21.137 [2024-10-11 11:57:23.836337] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:21.397 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.397 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1953983 00:20:21.397 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 1953983 /var/tmp/bdevperf.sock 00:20:21.398 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:21.398 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1953983 ']' 00:20:21.398 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:21.398 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:21.398 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:21.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:21.398 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:21.398 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:21.398 [2024-10-11 11:57:23.916787] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:20:21.398 [2024-10-11 11:57:23.916850] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1953983 ] 00:20:21.398 [2024-10-11 11:57:23.996626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.398 [2024-10-11 11:57:24.031591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:22.339 11:57:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:22.339 11:57:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:22.339 11:57:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Wl4L5Jh4qt 00:20:22.339 11:57:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:22.339 [2024-10-11 11:57:25.025893] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:22.600 nvme0n1 00:20:22.600 11:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:22.600 Running I/O for 1 seconds... 00:20:23.802 6092.00 IOPS, 23.80 MiB/s 00:20:23.802 Latency(us) 00:20:23.802 [2024-10-11T09:57:26.505Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:23.802 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:23.802 Verification LBA range: start 0x0 length 0x2000 00:20:23.802 nvme0n1 : 1.05 5929.32 23.16 0.00 0.00 21149.51 5734.40 47185.92 00:20:23.802 [2024-10-11T09:57:26.505Z] =================================================================================================================== 00:20:23.802 [2024-10-11T09:57:26.505Z] Total : 5929.32 23.16 0.00 0.00 21149.51 5734.40 47185.92 00:20:23.802 { 00:20:23.802 "results": [ 00:20:23.802 { 00:20:23.802 "job": "nvme0n1", 00:20:23.802 "core_mask": "0x2", 00:20:23.802 "workload": "verify", 00:20:23.802 "status": "finished", 00:20:23.802 "verify_range": { 00:20:23.802 "start": 0, 00:20:23.802 "length": 8192 00:20:23.802 }, 00:20:23.802 "queue_depth": 128, 00:20:23.802 "io_size": 4096, 00:20:23.802 "runtime": 1.049192, 00:20:23.802 "iops": 5929.324661263144, 00:20:23.802 "mibps": 23.161424458059155, 00:20:23.802 "io_failed": 0, 00:20:23.802 "io_timeout": 0, 00:20:23.802 "avg_latency_us": 21149.51372876815, 00:20:23.802 "min_latency_us": 5734.4, 00:20:23.802 "max_latency_us": 47185.92 00:20:23.802 } 00:20:23.802 ], 00:20:23.802 "core_count": 1 00:20:23.802 } 00:20:23.802 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:20:23.802 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.802 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:23.802 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.802 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@267 -- # tgtcfg='{ 00:20:23.802 "subsystems": [ 00:20:23.802 { 00:20:23.802 "subsystem": "keyring", 00:20:23.802 "config": [ 00:20:23.802 { 00:20:23.802 "method": "keyring_file_add_key", 00:20:23.802 "params": { 00:20:23.802 "name": "key0", 00:20:23.802 "path": "/tmp/tmp.Wl4L5Jh4qt" 00:20:23.802 } 00:20:23.802 } 00:20:23.802 ] 00:20:23.802 }, 00:20:23.802 { 00:20:23.802 "subsystem": "iobuf", 00:20:23.802 "config": [ 00:20:23.802 { 00:20:23.802 "method": "iobuf_set_options", 00:20:23.802 "params": { 00:20:23.802 "small_pool_count": 8192, 00:20:23.802 "large_pool_count": 1024, 00:20:23.802 "small_bufsize": 8192, 00:20:23.802 "large_bufsize": 135168 00:20:23.802 } 00:20:23.803 } 00:20:23.803 ] 00:20:23.803 }, 00:20:23.803 { 00:20:23.803 "subsystem": "sock", 00:20:23.803 "config": [ 00:20:23.803 { 00:20:23.803 "method": "sock_set_default_impl", 00:20:23.803 "params": { 00:20:23.803 "impl_name": "posix" 00:20:23.803 } 00:20:23.803 }, 00:20:23.803 { 00:20:23.803 "method": "sock_impl_set_options", 00:20:23.803 "params": { 00:20:23.803 "impl_name": "ssl", 00:20:23.803 "recv_buf_size": 4096, 00:20:23.803 "send_buf_size": 4096, 00:20:23.803 "enable_recv_pipe": true, 00:20:23.803 "enable_quickack": false, 00:20:23.803 "enable_placement_id": 0, 00:20:23.803 "enable_zerocopy_send_server": true, 00:20:23.803 "enable_zerocopy_send_client": false, 00:20:23.803 "zerocopy_threshold": 0, 00:20:23.803 "tls_version": 0, 00:20:23.803 "enable_ktls": false 00:20:23.803 } 00:20:23.803 }, 00:20:23.803 { 00:20:23.803 "method": "sock_impl_set_options", 00:20:23.803 "params": { 00:20:23.803 "impl_name": "posix", 00:20:23.803 "recv_buf_size": 2097152, 00:20:23.803 "send_buf_size": 2097152, 00:20:23.803 "enable_recv_pipe": true, 00:20:23.803 "enable_quickack": false, 00:20:23.803 "enable_placement_id": 0, 00:20:23.803 "enable_zerocopy_send_server": true, 00:20:23.803 "enable_zerocopy_send_client": false, 00:20:23.803 "zerocopy_threshold": 0, 00:20:23.803 "tls_version": 0, 00:20:23.803 "enable_ktls": false 00:20:23.803 } 00:20:23.803 } 00:20:23.803 ] 00:20:23.803 }, 00:20:23.803 { 00:20:23.803 "subsystem": "vmd", 00:20:23.803 "config": [] 00:20:23.803 }, 00:20:23.803 { 00:20:23.803 "subsystem": "accel", 00:20:23.803 "config": [ 00:20:23.803 { 00:20:23.803 "method": "accel_set_options", 00:20:23.803 "params": { 00:20:23.803 "small_cache_size": 128, 00:20:23.803 "large_cache_size": 16, 00:20:23.803 "task_count": 2048, 00:20:23.803 "sequence_count": 2048, 00:20:23.803 "buf_count": 2048 00:20:23.803 } 00:20:23.803 } 00:20:23.803 ] 00:20:23.803 }, 00:20:23.803 { 00:20:23.803 "subsystem": "bdev", 00:20:23.803 "config": [ 00:20:23.803 { 00:20:23.803 "method": "bdev_set_options", 00:20:23.803 "params": { 00:20:23.803 "bdev_io_pool_size": 65535, 00:20:23.803 "bdev_io_cache_size": 256, 00:20:23.803 "bdev_auto_examine": true, 00:20:23.803 "iobuf_small_cache_size": 128, 00:20:23.803 "iobuf_large_cache_size": 16 00:20:23.803 } 00:20:23.803 }, 00:20:23.803 { 00:20:23.803 "method": "bdev_raid_set_options", 00:20:23.803 "params": { 00:20:23.803 "process_window_size_kb": 1024, 00:20:23.803 "process_max_bandwidth_mb_sec": 0 00:20:23.803 } 00:20:23.803 }, 00:20:23.803 { 00:20:23.803 "method": "bdev_iscsi_set_options", 00:20:23.803 "params": { 00:20:23.803 "timeout_sec": 30 00:20:23.803 } 00:20:23.803 }, 00:20:23.803 { 00:20:23.803 "method": "bdev_nvme_set_options", 00:20:23.803 "params": { 00:20:23.803 "action_on_timeout": "none", 00:20:23.803 "timeout_us": 0, 00:20:23.803 "timeout_admin_us": 0, 00:20:23.803 
"keep_alive_timeout_ms": 10000, 00:20:23.803 "arbitration_burst": 0, 00:20:23.803 "low_priority_weight": 0, 00:20:23.803 "medium_priority_weight": 0, 00:20:23.803 "high_priority_weight": 0, 00:20:23.803 "nvme_adminq_poll_period_us": 10000, 00:20:23.803 "nvme_ioq_poll_period_us": 0, 00:20:23.803 "io_queue_requests": 0, 00:20:23.803 "delay_cmd_submit": true, 00:20:23.803 "transport_retry_count": 4, 00:20:23.803 "bdev_retry_count": 3, 00:20:23.803 "transport_ack_timeout": 0, 00:20:23.803 "ctrlr_loss_timeout_sec": 0, 00:20:23.803 "reconnect_delay_sec": 0, 00:20:23.803 "fast_io_fail_timeout_sec": 0, 00:20:23.803 "disable_auto_failback": false, 00:20:23.803 "generate_uuids": false, 00:20:23.803 "transport_tos": 0, 00:20:23.803 "nvme_error_stat": false, 00:20:23.803 "rdma_srq_size": 0, 00:20:23.803 "io_path_stat": false, 00:20:23.803 "allow_accel_sequence": false, 00:20:23.803 "rdma_max_cq_size": 0, 00:20:23.803 "rdma_cm_event_timeout_ms": 0, 00:20:23.803 "dhchap_digests": [ 00:20:23.803 "sha256", 00:20:23.803 "sha384", 00:20:23.803 "sha512" 00:20:23.803 ], 00:20:23.803 "dhchap_dhgroups": [ 00:20:23.803 "null", 00:20:23.803 "ffdhe2048", 00:20:23.803 "ffdhe3072", 00:20:23.803 "ffdhe4096", 00:20:23.803 "ffdhe6144", 00:20:23.803 "ffdhe8192" 00:20:23.803 ] 00:20:23.803 } 00:20:23.803 }, 00:20:23.803 { 00:20:23.803 "method": "bdev_nvme_set_hotplug", 00:20:23.803 "params": { 00:20:23.803 "period_us": 100000, 00:20:23.803 "enable": false 00:20:23.803 } 00:20:23.803 }, 00:20:23.803 { 00:20:23.803 "method": "bdev_malloc_create", 00:20:23.803 "params": { 00:20:23.803 "name": "malloc0", 00:20:23.803 "num_blocks": 8192, 00:20:23.803 "block_size": 4096, 00:20:23.803 "physical_block_size": 4096, 00:20:23.803 "uuid": "7989c84a-2895-4af3-b7ae-66a35d48d437", 00:20:23.803 "optimal_io_boundary": 0, 00:20:23.803 "md_size": 0, 00:20:23.803 "dif_type": 0, 00:20:23.803 "dif_is_head_of_md": false, 00:20:23.803 "dif_pi_format": 0 00:20:23.803 } 00:20:23.803 }, 00:20:23.803 { 00:20:23.803 "method": "bdev_wait_for_examine" 00:20:23.803 } 00:20:23.803 ] 00:20:23.803 }, 00:20:23.803 { 00:20:23.803 "subsystem": "nbd", 00:20:23.803 "config": [] 00:20:23.803 }, 00:20:23.803 { 00:20:23.803 "subsystem": "scheduler", 00:20:23.803 "config": [ 00:20:23.803 { 00:20:23.803 "method": "framework_set_scheduler", 00:20:23.803 "params": { 00:20:23.803 "name": "static" 00:20:23.803 } 00:20:23.803 } 00:20:23.803 ] 00:20:23.803 }, 00:20:23.803 { 00:20:23.803 "subsystem": "nvmf", 00:20:23.803 "config": [ 00:20:23.803 { 00:20:23.803 "method": "nvmf_set_config", 00:20:23.803 "params": { 00:20:23.803 "discovery_filter": "match_any", 00:20:23.803 "admin_cmd_passthru": { 00:20:23.803 "identify_ctrlr": false 00:20:23.803 }, 00:20:23.803 "dhchap_digests": [ 00:20:23.803 "sha256", 00:20:23.803 "sha384", 00:20:23.803 "sha512" 00:20:23.803 ], 00:20:23.803 "dhchap_dhgroups": [ 00:20:23.803 "null", 00:20:23.803 "ffdhe2048", 00:20:23.803 "ffdhe3072", 00:20:23.803 "ffdhe4096", 00:20:23.803 "ffdhe6144", 00:20:23.803 "ffdhe8192" 00:20:23.803 ] 00:20:23.803 } 00:20:23.803 }, 00:20:23.803 { 00:20:23.803 "method": "nvmf_set_max_subsystems", 00:20:23.803 "params": { 00:20:23.803 "max_subsystems": 1024 00:20:23.803 } 00:20:23.803 }, 00:20:23.803 { 00:20:23.803 "method": "nvmf_set_crdt", 00:20:23.803 "params": { 00:20:23.803 "crdt1": 0, 00:20:23.803 "crdt2": 0, 00:20:23.803 "crdt3": 0 00:20:23.803 } 00:20:23.803 }, 00:20:23.803 { 00:20:23.803 "method": "nvmf_create_transport", 00:20:23.803 "params": { 00:20:23.803 "trtype": "TCP", 00:20:23.803 "max_queue_depth": 
128, 00:20:23.803 "max_io_qpairs_per_ctrlr": 127, 00:20:23.803 "in_capsule_data_size": 4096, 00:20:23.803 "max_io_size": 131072, 00:20:23.803 "io_unit_size": 131072, 00:20:23.803 "max_aq_depth": 128, 00:20:23.803 "num_shared_buffers": 511, 00:20:23.803 "buf_cache_size": 4294967295, 00:20:23.803 "dif_insert_or_strip": false, 00:20:23.803 "zcopy": false, 00:20:23.803 "c2h_success": false, 00:20:23.803 "sock_priority": 0, 00:20:23.803 "abort_timeout_sec": 1, 00:20:23.804 "ack_timeout": 0, 00:20:23.804 "data_wr_pool_size": 0 00:20:23.804 } 00:20:23.804 }, 00:20:23.804 { 00:20:23.804 "method": "nvmf_create_subsystem", 00:20:23.804 "params": { 00:20:23.804 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.804 "allow_any_host": false, 00:20:23.804 "serial_number": "00000000000000000000", 00:20:23.804 "model_number": "SPDK bdev Controller", 00:20:23.804 "max_namespaces": 32, 00:20:23.804 "min_cntlid": 1, 00:20:23.804 "max_cntlid": 65519, 00:20:23.804 "ana_reporting": false 00:20:23.804 } 00:20:23.804 }, 00:20:23.804 { 00:20:23.804 "method": "nvmf_subsystem_add_host", 00:20:23.804 "params": { 00:20:23.804 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.804 "host": "nqn.2016-06.io.spdk:host1", 00:20:23.804 "psk": "key0" 00:20:23.804 } 00:20:23.804 }, 00:20:23.804 { 00:20:23.804 "method": "nvmf_subsystem_add_ns", 00:20:23.804 "params": { 00:20:23.804 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.804 "namespace": { 00:20:23.804 "nsid": 1, 00:20:23.804 "bdev_name": "malloc0", 00:20:23.804 "nguid": "7989C84A28954AF3B7AE66A35D48D437", 00:20:23.804 "uuid": "7989c84a-2895-4af3-b7ae-66a35d48d437", 00:20:23.804 "no_auto_visible": false 00:20:23.804 } 00:20:23.804 } 00:20:23.804 }, 00:20:23.804 { 00:20:23.804 "method": "nvmf_subsystem_add_listener", 00:20:23.804 "params": { 00:20:23.804 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.804 "listen_address": { 00:20:23.804 "trtype": "TCP", 00:20:23.804 "adrfam": "IPv4", 00:20:23.804 "traddr": "10.0.0.2", 00:20:23.804 "trsvcid": "4420" 00:20:23.804 }, 00:20:23.804 "secure_channel": false, 00:20:23.804 "sock_impl": "ssl" 00:20:23.804 } 00:20:23.804 } 00:20:23.804 ] 00:20:23.804 } 00:20:23.804 ] 00:20:23.804 }' 00:20:23.804 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:24.064 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:20:24.064 "subsystems": [ 00:20:24.064 { 00:20:24.064 "subsystem": "keyring", 00:20:24.064 "config": [ 00:20:24.064 { 00:20:24.064 "method": "keyring_file_add_key", 00:20:24.064 "params": { 00:20:24.064 "name": "key0", 00:20:24.065 "path": "/tmp/tmp.Wl4L5Jh4qt" 00:20:24.065 } 00:20:24.065 } 00:20:24.065 ] 00:20:24.065 }, 00:20:24.065 { 00:20:24.065 "subsystem": "iobuf", 00:20:24.065 "config": [ 00:20:24.065 { 00:20:24.065 "method": "iobuf_set_options", 00:20:24.065 "params": { 00:20:24.065 "small_pool_count": 8192, 00:20:24.065 "large_pool_count": 1024, 00:20:24.065 "small_bufsize": 8192, 00:20:24.065 "large_bufsize": 135168 00:20:24.065 } 00:20:24.065 } 00:20:24.065 ] 00:20:24.065 }, 00:20:24.065 { 00:20:24.065 "subsystem": "sock", 00:20:24.065 "config": [ 00:20:24.065 { 00:20:24.065 "method": "sock_set_default_impl", 00:20:24.065 "params": { 00:20:24.065 "impl_name": "posix" 00:20:24.065 } 00:20:24.065 }, 00:20:24.065 { 00:20:24.065 "method": "sock_impl_set_options", 00:20:24.065 "params": { 00:20:24.065 "impl_name": "ssl", 00:20:24.065 "recv_buf_size": 4096, 00:20:24.065 
"send_buf_size": 4096, 00:20:24.065 "enable_recv_pipe": true, 00:20:24.065 "enable_quickack": false, 00:20:24.065 "enable_placement_id": 0, 00:20:24.065 "enable_zerocopy_send_server": true, 00:20:24.065 "enable_zerocopy_send_client": false, 00:20:24.065 "zerocopy_threshold": 0, 00:20:24.065 "tls_version": 0, 00:20:24.065 "enable_ktls": false 00:20:24.065 } 00:20:24.065 }, 00:20:24.065 { 00:20:24.065 "method": "sock_impl_set_options", 00:20:24.065 "params": { 00:20:24.065 "impl_name": "posix", 00:20:24.065 "recv_buf_size": 2097152, 00:20:24.065 "send_buf_size": 2097152, 00:20:24.065 "enable_recv_pipe": true, 00:20:24.065 "enable_quickack": false, 00:20:24.065 "enable_placement_id": 0, 00:20:24.065 "enable_zerocopy_send_server": true, 00:20:24.065 "enable_zerocopy_send_client": false, 00:20:24.065 "zerocopy_threshold": 0, 00:20:24.065 "tls_version": 0, 00:20:24.065 "enable_ktls": false 00:20:24.065 } 00:20:24.065 } 00:20:24.065 ] 00:20:24.065 }, 00:20:24.065 { 00:20:24.065 "subsystem": "vmd", 00:20:24.065 "config": [] 00:20:24.065 }, 00:20:24.065 { 00:20:24.065 "subsystem": "accel", 00:20:24.065 "config": [ 00:20:24.065 { 00:20:24.065 "method": "accel_set_options", 00:20:24.065 "params": { 00:20:24.065 "small_cache_size": 128, 00:20:24.065 "large_cache_size": 16, 00:20:24.065 "task_count": 2048, 00:20:24.065 "sequence_count": 2048, 00:20:24.065 "buf_count": 2048 00:20:24.065 } 00:20:24.065 } 00:20:24.065 ] 00:20:24.065 }, 00:20:24.065 { 00:20:24.065 "subsystem": "bdev", 00:20:24.065 "config": [ 00:20:24.065 { 00:20:24.065 "method": "bdev_set_options", 00:20:24.065 "params": { 00:20:24.065 "bdev_io_pool_size": 65535, 00:20:24.065 "bdev_io_cache_size": 256, 00:20:24.065 "bdev_auto_examine": true, 00:20:24.065 "iobuf_small_cache_size": 128, 00:20:24.065 "iobuf_large_cache_size": 16 00:20:24.065 } 00:20:24.065 }, 00:20:24.065 { 00:20:24.065 "method": "bdev_raid_set_options", 00:20:24.065 "params": { 00:20:24.065 "process_window_size_kb": 1024, 00:20:24.065 "process_max_bandwidth_mb_sec": 0 00:20:24.065 } 00:20:24.065 }, 00:20:24.065 { 00:20:24.065 "method": "bdev_iscsi_set_options", 00:20:24.065 "params": { 00:20:24.065 "timeout_sec": 30 00:20:24.065 } 00:20:24.065 }, 00:20:24.065 { 00:20:24.065 "method": "bdev_nvme_set_options", 00:20:24.065 "params": { 00:20:24.065 "action_on_timeout": "none", 00:20:24.065 "timeout_us": 0, 00:20:24.065 "timeout_admin_us": 0, 00:20:24.065 "keep_alive_timeout_ms": 10000, 00:20:24.065 "arbitration_burst": 0, 00:20:24.065 "low_priority_weight": 0, 00:20:24.065 "medium_priority_weight": 0, 00:20:24.065 "high_priority_weight": 0, 00:20:24.065 "nvme_adminq_poll_period_us": 10000, 00:20:24.065 "nvme_ioq_poll_period_us": 0, 00:20:24.065 "io_queue_requests": 512, 00:20:24.065 "delay_cmd_submit": true, 00:20:24.065 "transport_retry_count": 4, 00:20:24.065 "bdev_retry_count": 3, 00:20:24.065 "transport_ack_timeout": 0, 00:20:24.065 "ctrlr_loss_timeout_sec": 0, 00:20:24.065 "reconnect_delay_sec": 0, 00:20:24.065 "fast_io_fail_timeout_sec": 0, 00:20:24.065 "disable_auto_failback": false, 00:20:24.065 "generate_uuids": false, 00:20:24.065 "transport_tos": 0, 00:20:24.065 "nvme_error_stat": false, 00:20:24.065 "rdma_srq_size": 0, 00:20:24.065 "io_path_stat": false, 00:20:24.065 "allow_accel_sequence": false, 00:20:24.065 "rdma_max_cq_size": 0, 00:20:24.065 "rdma_cm_event_timeout_ms": 0, 00:20:24.065 "dhchap_digests": [ 00:20:24.065 "sha256", 00:20:24.065 "sha384", 00:20:24.065 "sha512" 00:20:24.065 ], 00:20:24.065 "dhchap_dhgroups": [ 00:20:24.065 "null", 00:20:24.065 
"ffdhe2048", 00:20:24.065 "ffdhe3072", 00:20:24.065 "ffdhe4096", 00:20:24.065 "ffdhe6144", 00:20:24.065 "ffdhe8192" 00:20:24.065 ] 00:20:24.065 } 00:20:24.065 }, 00:20:24.065 { 00:20:24.065 "method": "bdev_nvme_attach_controller", 00:20:24.065 "params": { 00:20:24.065 "name": "nvme0", 00:20:24.065 "trtype": "TCP", 00:20:24.065 "adrfam": "IPv4", 00:20:24.065 "traddr": "10.0.0.2", 00:20:24.065 "trsvcid": "4420", 00:20:24.065 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.065 "prchk_reftag": false, 00:20:24.065 "prchk_guard": false, 00:20:24.065 "ctrlr_loss_timeout_sec": 0, 00:20:24.065 "reconnect_delay_sec": 0, 00:20:24.065 "fast_io_fail_timeout_sec": 0, 00:20:24.065 "psk": "key0", 00:20:24.065 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:24.065 "hdgst": false, 00:20:24.065 "ddgst": false, 00:20:24.065 "multipath": "multipath" 00:20:24.065 } 00:20:24.065 }, 00:20:24.065 { 00:20:24.065 "method": "bdev_nvme_set_hotplug", 00:20:24.065 "params": { 00:20:24.065 "period_us": 100000, 00:20:24.065 "enable": false 00:20:24.065 } 00:20:24.065 }, 00:20:24.065 { 00:20:24.065 "method": "bdev_enable_histogram", 00:20:24.065 "params": { 00:20:24.065 "name": "nvme0n1", 00:20:24.065 "enable": true 00:20:24.065 } 00:20:24.065 }, 00:20:24.065 { 00:20:24.065 "method": "bdev_wait_for_examine" 00:20:24.065 } 00:20:24.065 ] 00:20:24.065 }, 00:20:24.065 { 00:20:24.065 "subsystem": "nbd", 00:20:24.065 "config": [] 00:20:24.065 } 00:20:24.065 ] 00:20:24.065 }' 00:20:24.066 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1953983 00:20:24.066 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1953983 ']' 00:20:24.066 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1953983 00:20:24.066 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:24.066 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:24.066 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1953983 00:20:24.066 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:24.066 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:24.066 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1953983' 00:20:24.066 killing process with pid 1953983 00:20:24.066 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1953983 00:20:24.066 Received shutdown signal, test time was about 1.000000 seconds 00:20:24.066 00:20:24.066 Latency(us) 00:20:24.066 [2024-10-11T09:57:26.769Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:24.066 [2024-10-11T09:57:26.769Z] =================================================================================================================== 00:20:24.066 [2024-10-11T09:57:26.769Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:24.066 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1953983 00:20:24.326 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1953943 00:20:24.326 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1953943 ']' 00:20:24.326 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 
-- # kill -0 1953943 00:20:24.326 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:24.326 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:24.326 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1953943 00:20:24.326 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:24.326 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:24.326 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1953943' 00:20:24.326 killing process with pid 1953943 00:20:24.326 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1953943 00:20:24.326 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1953943 00:20:24.326 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:20:24.326 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:24.326 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:24.326 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:20:24.326 "subsystems": [ 00:20:24.326 { 00:20:24.326 "subsystem": "keyring", 00:20:24.326 "config": [ 00:20:24.326 { 00:20:24.326 "method": "keyring_file_add_key", 00:20:24.326 "params": { 00:20:24.326 "name": "key0", 00:20:24.326 "path": "/tmp/tmp.Wl4L5Jh4qt" 00:20:24.326 } 00:20:24.326 } 00:20:24.326 ] 00:20:24.326 }, 00:20:24.326 { 00:20:24.326 "subsystem": "iobuf", 00:20:24.326 "config": [ 00:20:24.326 { 00:20:24.326 "method": "iobuf_set_options", 00:20:24.326 "params": { 00:20:24.326 "small_pool_count": 8192, 00:20:24.326 "large_pool_count": 1024, 00:20:24.326 "small_bufsize": 8192, 00:20:24.326 "large_bufsize": 135168 00:20:24.326 } 00:20:24.326 } 00:20:24.326 ] 00:20:24.326 }, 00:20:24.326 { 00:20:24.326 "subsystem": "sock", 00:20:24.326 "config": [ 00:20:24.326 { 00:20:24.326 "method": "sock_set_default_impl", 00:20:24.326 "params": { 00:20:24.326 "impl_name": "posix" 00:20:24.326 } 00:20:24.326 }, 00:20:24.326 { 00:20:24.326 "method": "sock_impl_set_options", 00:20:24.326 "params": { 00:20:24.326 "impl_name": "ssl", 00:20:24.326 "recv_buf_size": 4096, 00:20:24.326 "send_buf_size": 4096, 00:20:24.326 "enable_recv_pipe": true, 00:20:24.326 "enable_quickack": false, 00:20:24.326 "enable_placement_id": 0, 00:20:24.326 "enable_zerocopy_send_server": true, 00:20:24.326 "enable_zerocopy_send_client": false, 00:20:24.326 "zerocopy_threshold": 0, 00:20:24.326 "tls_version": 0, 00:20:24.326 "enable_ktls": false 00:20:24.326 } 00:20:24.326 }, 00:20:24.326 { 00:20:24.326 "method": "sock_impl_set_options", 00:20:24.326 "params": { 00:20:24.326 "impl_name": "posix", 00:20:24.326 "recv_buf_size": 2097152, 00:20:24.326 "send_buf_size": 2097152, 00:20:24.326 "enable_recv_pipe": true, 00:20:24.326 "enable_quickack": false, 00:20:24.326 "enable_placement_id": 0, 00:20:24.326 "enable_zerocopy_send_server": true, 00:20:24.326 "enable_zerocopy_send_client": false, 00:20:24.326 "zerocopy_threshold": 0, 00:20:24.326 "tls_version": 0, 00:20:24.326 "enable_ktls": false 00:20:24.326 } 00:20:24.326 } 00:20:24.326 ] 00:20:24.326 }, 00:20:24.326 { 00:20:24.326 "subsystem": "vmd", 00:20:24.326 
"config": [] 00:20:24.326 }, 00:20:24.326 { 00:20:24.326 "subsystem": "accel", 00:20:24.326 "config": [ 00:20:24.326 { 00:20:24.326 "method": "accel_set_options", 00:20:24.326 "params": { 00:20:24.326 "small_cache_size": 128, 00:20:24.326 "large_cache_size": 16, 00:20:24.326 "task_count": 2048, 00:20:24.326 "sequence_count": 2048, 00:20:24.326 "buf_count": 2048 00:20:24.326 } 00:20:24.326 } 00:20:24.326 ] 00:20:24.326 }, 00:20:24.326 { 00:20:24.326 "subsystem": "bdev", 00:20:24.326 "config": [ 00:20:24.326 { 00:20:24.326 "method": "bdev_set_options", 00:20:24.326 "params": { 00:20:24.326 "bdev_io_pool_size": 65535, 00:20:24.326 "bdev_io_cache_size": 256, 00:20:24.326 "bdev_auto_examine": true, 00:20:24.326 "iobuf_small_cache_size": 128, 00:20:24.326 "iobuf_large_cache_size": 16 00:20:24.326 } 00:20:24.326 }, 00:20:24.326 { 00:20:24.326 "method": "bdev_raid_set_options", 00:20:24.326 "params": { 00:20:24.326 "process_window_size_kb": 1024, 00:20:24.326 "process_max_bandwidth_mb_sec": 0 00:20:24.326 } 00:20:24.327 }, 00:20:24.327 { 00:20:24.327 "method": "bdev_iscsi_set_options", 00:20:24.327 "params": { 00:20:24.327 "timeout_sec": 30 00:20:24.327 } 00:20:24.327 }, 00:20:24.327 { 00:20:24.327 "method": "bdev_nvme_set_options", 00:20:24.327 "params": { 00:20:24.327 "action_on_timeout": "none", 00:20:24.327 "timeout_us": 0, 00:20:24.327 "timeout_admin_us": 0, 00:20:24.327 "keep_alive_timeout_ms": 10000, 00:20:24.327 "arbitration_burst": 0, 00:20:24.327 "low_priority_weight": 0, 00:20:24.327 "medium_priority_weight": 0, 00:20:24.327 "high_priority_weight": 0, 00:20:24.327 "nvme_adminq_poll_period_us": 10000, 00:20:24.327 "nvme_ioq_poll_period_us": 0, 00:20:24.327 "io_queue_requests": 0, 00:20:24.327 "delay_cmd_submit": true, 00:20:24.327 "transport_retry_count": 4, 00:20:24.327 "bdev_retry_count": 3, 00:20:24.327 "transport_ack_timeout": 0, 00:20:24.327 "ctrlr_loss_timeout_sec": 0, 00:20:24.327 "reconnect_delay_sec": 0, 00:20:24.327 "fast_io_fail_timeout_sec": 0, 00:20:24.327 "disable_auto_failback": false, 00:20:24.327 "generate_uuids": false, 00:20:24.327 "transport_tos": 0, 00:20:24.327 "nvme_error_stat": false, 00:20:24.327 "rdma_srq_size": 0, 00:20:24.327 "io_path_stat": false, 00:20:24.327 "allow_accel_sequence": false, 00:20:24.327 "rdma_max_cq_size": 0, 00:20:24.327 "rdma_cm_event_timeout_ms": 0, 00:20:24.327 "dhchap_digests": [ 00:20:24.327 "sha256", 00:20:24.327 "sha384", 00:20:24.327 "sha512" 00:20:24.327 ], 00:20:24.327 "dhchap_dhgroups": [ 00:20:24.327 "null", 00:20:24.327 "ffdhe2048", 00:20:24.327 "ffdhe3072", 00:20:24.327 "ffdhe4096", 00:20:24.327 "ffdhe6144", 00:20:24.327 "ffdhe8192" 00:20:24.327 ] 00:20:24.327 } 00:20:24.327 }, 00:20:24.327 { 00:20:24.327 "method": "bdev_nvme_set_hotplug", 00:20:24.327 "params": { 00:20:24.327 "period_us": 100000, 00:20:24.327 "enable": false 00:20:24.327 } 00:20:24.327 }, 00:20:24.327 { 00:20:24.327 "method": "bdev_malloc_create", 00:20:24.327 "params": { 00:20:24.327 "name": "malloc0", 00:20:24.327 "num_blocks": 8192, 00:20:24.327 "block_size": 4096, 00:20:24.327 "physical_block_size": 4096, 00:20:24.327 "uuid": "7989c84a-2895-4af3-b7ae-66a35d48d437", 00:20:24.327 "optimal_io_boundary": 0, 00:20:24.327 "md_size": 0, 00:20:24.327 "dif_type": 0, 00:20:24.327 "dif_is_head_of_md": false, 00:20:24.327 "dif_pi_format": 0 00:20:24.327 } 00:20:24.327 }, 00:20:24.327 { 00:20:24.327 "method": "bdev_wait_for_examine" 00:20:24.327 } 00:20:24.327 ] 00:20:24.327 }, 00:20:24.327 { 00:20:24.327 "subsystem": "nbd", 00:20:24.327 "config": [] 00:20:24.327 }, 
00:20:24.327 { 00:20:24.327 "subsystem": "scheduler", 00:20:24.327 "config": [ 00:20:24.327 { 00:20:24.327 "method": "framework_set_scheduler", 00:20:24.327 "params": { 00:20:24.327 "name": "static" 00:20:24.327 } 00:20:24.327 } 00:20:24.327 ] 00:20:24.327 }, 00:20:24.327 { 00:20:24.327 "subsystem": "nvmf", 00:20:24.327 "config": [ 00:20:24.327 { 00:20:24.327 "method": "nvmf_set_config", 00:20:24.327 "params": { 00:20:24.327 "discovery_filter": "match_any", 00:20:24.327 "admin_cmd_passthru": { 00:20:24.327 "identify_ctrlr": false 00:20:24.327 }, 00:20:24.327 "dhchap_digests": [ 00:20:24.327 "sha256", 00:20:24.327 "sha384", 00:20:24.327 "sha512" 00:20:24.327 ], 00:20:24.327 "dhchap_dhgroups": [ 00:20:24.327 "null", 00:20:24.327 "ffdhe2048", 00:20:24.327 "ffdhe3072", 00:20:24.327 "ffdhe4096", 00:20:24.327 "ffdhe6144", 00:20:24.327 "ffdhe8192" 00:20:24.327 ] 00:20:24.327 } 00:20:24.327 }, 00:20:24.327 { 00:20:24.327 "method": "nvmf_set_max_subsystems", 00:20:24.327 "params": { 00:20:24.327 "max_subsystems": 1024 00:20:24.327 } 00:20:24.327 }, 00:20:24.327 { 00:20:24.327 "method": "nvmf_set_crdt", 00:20:24.327 "params": { 00:20:24.327 "crdt1": 0, 00:20:24.327 "crdt2": 0, 00:20:24.327 "crdt3": 0 00:20:24.327 } 00:20:24.327 }, 00:20:24.327 { 00:20:24.327 "method": "nvmf_create_transport", 00:20:24.327 "params": { 00:20:24.327 "trtype": "TCP", 00:20:24.327 "max_queue_depth": 128, 00:20:24.327 "max_io_qpairs_per_ctrlr": 127, 00:20:24.327 "in_capsule_data_size": 4096, 00:20:24.327 "max_io_size": 131072, 00:20:24.327 "io_unit_size": 131072, 00:20:24.327 "max_aq_depth": 128, 00:20:24.327 "num_shared_buffers": 511, 00:20:24.327 "buf_cache_size": 4294967295, 00:20:24.327 "dif_insert_or_strip": false, 00:20:24.327 "zcopy": false, 00:20:24.327 "c2h_success": false, 00:20:24.327 "sock_priority": 0, 00:20:24.327 "abort_timeout_sec": 1, 00:20:24.327 "ack_timeout": 0, 00:20:24.327 "data_wr_pool_size": 0 00:20:24.327 } 00:20:24.327 }, 00:20:24.327 { 00:20:24.327 "method": "nvmf_create_subsystem", 00:20:24.327 "params": { 00:20:24.327 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.327 "allow_any_host": false, 00:20:24.327 "serial_number": "00000000000000000000", 00:20:24.327 "model_number": "SPDK bdev Controller", 00:20:24.327 "max_namespaces": 32, 00:20:24.327 "min_cntlid": 1, 00:20:24.327 "max_cntlid": 65519, 00:20:24.327 "ana_reporting": false 00:20:24.327 } 00:20:24.327 }, 00:20:24.327 { 00:20:24.327 "method": "nvmf_subsystem_add_host", 00:20:24.327 "params": { 00:20:24.327 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.327 "host": "nqn.2016-06.io.spdk:host1", 00:20:24.327 "psk": "key0" 00:20:24.327 } 00:20:24.327 }, 00:20:24.327 { 00:20:24.327 "method": "nvmf_subsystem_add_ns", 00:20:24.327 "params": { 00:20:24.327 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.327 "namespace": { 00:20:24.327 "nsid": 1, 00:20:24.327 "bdev_name": "malloc0", 00:20:24.327 "nguid": "7989C84A28954AF3B7AE66A35D48D437", 00:20:24.327 "uuid": "7989c84a-2895-4af3-b7ae-66a35d48d437", 00:20:24.327 "no_auto_visible": false 00:20:24.327 } 00:20:24.327 } 00:20:24.327 }, 00:20:24.327 { 00:20:24.327 "method": "nvmf_subsystem_add_listener", 00:20:24.327 "params": { 00:20:24.327 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.327 "listen_address": { 00:20:24.327 "trtype": "TCP", 00:20:24.327 "adrfam": "IPv4", 00:20:24.327 "traddr": "10.0.0.2", 00:20:24.327 "trsvcid": "4420" 00:20:24.327 }, 00:20:24.327 "secure_channel": false, 00:20:24.327 "sock_impl": "ssl" 00:20:24.327 } 00:20:24.327 } 00:20:24.327 ] 00:20:24.327 } 00:20:24.327 ] 00:20:24.327 
}' 00:20:24.327 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:24.327 11:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1954655 00:20:24.327 11:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1954655 00:20:24.327 11:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:24.327 11:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1954655 ']' 00:20:24.327 11:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:24.327 11:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:24.327 11:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:24.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:24.327 11:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:24.328 11:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:24.588 [2024-10-11 11:57:27.048699] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:20:24.588 [2024-10-11 11:57:27.048754] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:24.588 [2024-10-11 11:57:27.129928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.588 [2024-10-11 11:57:27.161500] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:24.588 [2024-10-11 11:57:27.161527] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:24.588 [2024-10-11 11:57:27.161532] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:24.588 [2024-10-11 11:57:27.161537] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:24.588 [2024-10-11 11:57:27.161541] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
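The target for this phase is relaunched from the configuration captured earlier with save_config and handed back to nvmf_tgt on /dev/fd/62. A minimal illustrative equivalent in plain bash, using process substitution instead of the script's numbered descriptor and omitting the ip netns wrapper shown in the trace:

    # capture the running target's JSON config, then restart nvmf_tgt from it
    tgtcfg=$(scripts/rpc.py save_config)
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg")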
00:20:24.588 [2024-10-11 11:57:27.162039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:24.848 [2024-10-11 11:57:27.355653] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:24.848 [2024-10-11 11:57:27.387684] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:24.848 [2024-10-11 11:57:27.387883] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:25.420 11:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:25.420 11:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:25.420 11:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:25.420 11:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:25.420 11:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:25.420 11:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:25.420 11:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1954959 00:20:25.420 11:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1954959 /var/tmp/bdevperf.sock 00:20:25.420 11:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1954959 ']' 00:20:25.420 11:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:25.420 11:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:25.420 11:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:25.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
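In this final run the client-side TLS wiring is supplied to bdevperf as JSON on /dev/fd/63 rather than issued interactively; the equivalent rpc.py calls, as the earlier run issued them against the same socket, are:

    # register the PSK from the temp file as key0, then attach over TCP with TLS
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Wl4L5Jh4qt
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1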
00:20:25.420 11:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:25.420 11:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:25.420 11:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:25.420 11:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:20:25.420 "subsystems": [ 00:20:25.420 { 00:20:25.420 "subsystem": "keyring", 00:20:25.420 "config": [ 00:20:25.420 { 00:20:25.420 "method": "keyring_file_add_key", 00:20:25.420 "params": { 00:20:25.420 "name": "key0", 00:20:25.420 "path": "/tmp/tmp.Wl4L5Jh4qt" 00:20:25.420 } 00:20:25.420 } 00:20:25.420 ] 00:20:25.420 }, 00:20:25.420 { 00:20:25.420 "subsystem": "iobuf", 00:20:25.420 "config": [ 00:20:25.420 { 00:20:25.420 "method": "iobuf_set_options", 00:20:25.420 "params": { 00:20:25.420 "small_pool_count": 8192, 00:20:25.420 "large_pool_count": 1024, 00:20:25.420 "small_bufsize": 8192, 00:20:25.420 "large_bufsize": 135168 00:20:25.420 } 00:20:25.420 } 00:20:25.420 ] 00:20:25.420 }, 00:20:25.420 { 00:20:25.420 "subsystem": "sock", 00:20:25.420 "config": [ 00:20:25.420 { 00:20:25.420 "method": "sock_set_default_impl", 00:20:25.420 "params": { 00:20:25.420 "impl_name": "posix" 00:20:25.420 } 00:20:25.420 }, 00:20:25.420 { 00:20:25.420 "method": "sock_impl_set_options", 00:20:25.420 "params": { 00:20:25.420 "impl_name": "ssl", 00:20:25.420 "recv_buf_size": 4096, 00:20:25.420 "send_buf_size": 4096, 00:20:25.420 "enable_recv_pipe": true, 00:20:25.420 "enable_quickack": false, 00:20:25.420 "enable_placement_id": 0, 00:20:25.420 "enable_zerocopy_send_server": true, 00:20:25.420 "enable_zerocopy_send_client": false, 00:20:25.420 "zerocopy_threshold": 0, 00:20:25.420 "tls_version": 0, 00:20:25.420 "enable_ktls": false 00:20:25.420 } 00:20:25.420 }, 00:20:25.420 { 00:20:25.420 "method": "sock_impl_set_options", 00:20:25.420 "params": { 00:20:25.420 "impl_name": "posix", 00:20:25.420 "recv_buf_size": 2097152, 00:20:25.420 "send_buf_size": 2097152, 00:20:25.420 "enable_recv_pipe": true, 00:20:25.420 "enable_quickack": false, 00:20:25.420 "enable_placement_id": 0, 00:20:25.420 "enable_zerocopy_send_server": true, 00:20:25.420 "enable_zerocopy_send_client": false, 00:20:25.420 "zerocopy_threshold": 0, 00:20:25.420 "tls_version": 0, 00:20:25.420 "enable_ktls": false 00:20:25.420 } 00:20:25.420 } 00:20:25.420 ] 00:20:25.420 }, 00:20:25.420 { 00:20:25.420 "subsystem": "vmd", 00:20:25.420 "config": [] 00:20:25.420 }, 00:20:25.420 { 00:20:25.420 "subsystem": "accel", 00:20:25.420 "config": [ 00:20:25.420 { 00:20:25.420 "method": "accel_set_options", 00:20:25.420 "params": { 00:20:25.420 "small_cache_size": 128, 00:20:25.420 "large_cache_size": 16, 00:20:25.420 "task_count": 2048, 00:20:25.420 "sequence_count": 2048, 00:20:25.420 "buf_count": 2048 00:20:25.421 } 00:20:25.421 } 00:20:25.421 ] 00:20:25.421 }, 00:20:25.421 { 00:20:25.421 "subsystem": "bdev", 00:20:25.421 "config": [ 00:20:25.421 { 00:20:25.421 "method": "bdev_set_options", 00:20:25.421 "params": { 00:20:25.421 "bdev_io_pool_size": 65535, 00:20:25.421 "bdev_io_cache_size": 256, 00:20:25.421 "bdev_auto_examine": true, 00:20:25.421 "iobuf_small_cache_size": 128, 00:20:25.421 "iobuf_large_cache_size": 16 00:20:25.421 } 00:20:25.421 }, 00:20:25.421 { 00:20:25.421 "method": "bdev_raid_set_options", 00:20:25.421 
"params": { 00:20:25.421 "process_window_size_kb": 1024, 00:20:25.421 "process_max_bandwidth_mb_sec": 0 00:20:25.421 } 00:20:25.421 }, 00:20:25.421 { 00:20:25.421 "method": "bdev_iscsi_set_options", 00:20:25.421 "params": { 00:20:25.421 "timeout_sec": 30 00:20:25.421 } 00:20:25.421 }, 00:20:25.421 { 00:20:25.421 "method": "bdev_nvme_set_options", 00:20:25.421 "params": { 00:20:25.421 "action_on_timeout": "none", 00:20:25.421 "timeout_us": 0, 00:20:25.421 "timeout_admin_us": 0, 00:20:25.421 "keep_alive_timeout_ms": 10000, 00:20:25.421 "arbitration_burst": 0, 00:20:25.421 "low_priority_weight": 0, 00:20:25.421 "medium_priority_weight": 0, 00:20:25.421 "high_priority_weight": 0, 00:20:25.421 "nvme_adminq_poll_period_us": 10000, 00:20:25.421 "nvme_ioq_poll_period_us": 0, 00:20:25.421 "io_queue_requests": 512, 00:20:25.421 "delay_cmd_submit": true, 00:20:25.421 "transport_retry_count": 4, 00:20:25.421 "bdev_retry_count": 3, 00:20:25.421 "transport_ack_timeout": 0, 00:20:25.421 "ctrlr_loss_timeout_sec": 0, 00:20:25.421 "reconnect_delay_sec": 0, 00:20:25.421 "fast_io_fail_timeout_sec": 0, 00:20:25.421 "disable_auto_failback": false, 00:20:25.421 "generate_uuids": false, 00:20:25.421 "transport_tos": 0, 00:20:25.421 "nvme_error_stat": false, 00:20:25.421 "rdma_srq_size": 0, 00:20:25.421 "io_path_stat": false, 00:20:25.421 "allow_accel_sequence": false, 00:20:25.421 "rdma_max_cq_size": 0, 00:20:25.421 "rdma_cm_event_timeout_ms": 0, 00:20:25.421 "dhchap_digests": [ 00:20:25.421 "sha256", 00:20:25.421 "sha384", 00:20:25.421 "sha512" 00:20:25.421 ], 00:20:25.421 "dhchap_dhgroups": [ 00:20:25.421 "null", 00:20:25.421 "ffdhe2048", 00:20:25.421 "ffdhe3072", 00:20:25.421 "ffdhe4096", 00:20:25.421 "ffdhe6144", 00:20:25.421 "ffdhe8192" 00:20:25.421 ] 00:20:25.421 } 00:20:25.421 }, 00:20:25.421 { 00:20:25.421 "method": "bdev_nvme_attach_controller", 00:20:25.421 "params": { 00:20:25.421 "name": "nvme0", 00:20:25.421 "trtype": "TCP", 00:20:25.421 "adrfam": "IPv4", 00:20:25.421 "traddr": "10.0.0.2", 00:20:25.421 "trsvcid": "4420", 00:20:25.421 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:25.421 "prchk_reftag": false, 00:20:25.421 "prchk_guard": false, 00:20:25.421 "ctrlr_loss_timeout_sec": 0, 00:20:25.421 "reconnect_delay_sec": 0, 00:20:25.421 "fast_io_fail_timeout_sec": 0, 00:20:25.421 "psk": "key0", 00:20:25.421 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:25.421 "hdgst": false, 00:20:25.421 "ddgst": false, 00:20:25.421 "multipath": "multipath" 00:20:25.421 } 00:20:25.421 }, 00:20:25.421 { 00:20:25.421 "method": "bdev_nvme_set_hotplug", 00:20:25.421 "params": { 00:20:25.421 "period_us": 100000, 00:20:25.421 "enable": false 00:20:25.421 } 00:20:25.421 }, 00:20:25.421 { 00:20:25.421 "method": "bdev_enable_histogram", 00:20:25.421 "params": { 00:20:25.421 "name": "nvme0n1", 00:20:25.421 "enable": true 00:20:25.421 } 00:20:25.421 }, 00:20:25.421 { 00:20:25.421 "method": "bdev_wait_for_examine" 00:20:25.421 } 00:20:25.421 ] 00:20:25.421 }, 00:20:25.421 { 00:20:25.421 "subsystem": "nbd", 00:20:25.421 "config": [] 00:20:25.421 } 00:20:25.421 ] 00:20:25.421 }' 00:20:25.421 [2024-10-11 11:57:27.922213] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:20:25.421 [2024-10-11 11:57:27.922265] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1954959 ] 00:20:25.421 [2024-10-11 11:57:27.996998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.421 [2024-10-11 11:57:28.026844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:25.682 [2024-10-11 11:57:28.161725] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:26.253 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:26.253 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:26.253 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:26.253 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:20:26.253 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.253 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:26.513 Running I/O for 1 seconds... 00:20:27.455 5768.00 IOPS, 22.53 MiB/s 00:20:27.455 Latency(us) 00:20:27.455 [2024-10-11T09:57:30.158Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:27.455 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:27.455 Verification LBA range: start 0x0 length 0x2000 00:20:27.455 nvme0n1 : 1.01 5820.95 22.74 0.00 0.00 21852.76 4887.89 27634.35 00:20:27.455 [2024-10-11T09:57:30.158Z] =================================================================================================================== 00:20:27.455 [2024-10-11T09:57:30.158Z] Total : 5820.95 22.74 0.00 0.00 21852.76 4887.89 27634.35 00:20:27.455 { 00:20:27.456 "results": [ 00:20:27.456 { 00:20:27.456 "job": "nvme0n1", 00:20:27.456 "core_mask": "0x2", 00:20:27.456 "workload": "verify", 00:20:27.456 "status": "finished", 00:20:27.456 "verify_range": { 00:20:27.456 "start": 0, 00:20:27.456 "length": 8192 00:20:27.456 }, 00:20:27.456 "queue_depth": 128, 00:20:27.456 "io_size": 4096, 00:20:27.456 "runtime": 1.012893, 00:20:27.456 "iops": 5820.950485391843, 00:20:27.456 "mibps": 22.738087833561888, 00:20:27.456 "io_failed": 0, 00:20:27.456 "io_timeout": 0, 00:20:27.456 "avg_latency_us": 21852.76208050656, 00:20:27.456 "min_latency_us": 4887.893333333333, 00:20:27.456 "max_latency_us": 27634.346666666668 00:20:27.456 } 00:20:27.456 ], 00:20:27.456 "core_count": 1 00:20:27.456 } 00:20:27.456 11:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:20:27.456 11:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:20:27.456 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:27.456 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:20:27.456 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:20:27.456 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id 
= --pid ']' 00:20:27.456 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:27.456 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:20:27.456 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:20:27.456 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:20:27.456 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:27.456 nvmf_trace.0 00:20:27.456 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:20:27.456 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1954959 00:20:27.456 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1954959 ']' 00:20:27.456 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1954959 00:20:27.456 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:27.456 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:27.456 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1954959 00:20:27.717 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:27.717 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:27.717 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1954959' 00:20:27.717 killing process with pid 1954959 00:20:27.717 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1954959 00:20:27.717 Received shutdown signal, test time was about 1.000000 seconds 00:20:27.717 00:20:27.717 Latency(us) 00:20:27.717 [2024-10-11T09:57:30.420Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:27.717 [2024-10-11T09:57:30.420Z] =================================================================================================================== 00:20:27.717 [2024-10-11T09:57:30.420Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:27.717 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1954959 00:20:27.717 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:27.717 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:27.717 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:20:27.717 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:27.717 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:20:27.717 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:27.717 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:27.717 rmmod nvme_tcp 00:20:27.717 rmmod nvme_fabrics 00:20:27.717 rmmod nvme_keyring 00:20:27.717 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:27.717 11:57:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:20:27.717 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:20:27.717 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@515 -- # '[' -n 1954655 ']' 00:20:27.717 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # killprocess 1954655 00:20:27.717 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1954655 ']' 00:20:27.717 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1954655 00:20:27.717 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:27.717 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:27.717 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1954655 00:20:27.717 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:27.717 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:27.717 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1954655' 00:20:27.717 killing process with pid 1954655 00:20:27.717 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1954655 00:20:27.717 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1954655 00:20:27.978 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:27.978 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:27.978 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:27.978 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:20:27.978 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-save 00:20:27.978 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:27.978 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-restore 00:20:27.978 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:27.978 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:27.978 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:27.978 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:27.978 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:30.012 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:30.012 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.CrouAOUbAB /tmp/tmp.2BfsVvdMmK /tmp/tmp.Wl4L5Jh4qt 00:20:30.012 00:20:30.012 real 1m27.444s 00:20:30.012 user 2m18.165s 00:20:30.012 sys 0m26.521s 00:20:30.012 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:30.012 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:30.012 ************************************ 00:20:30.012 END TEST nvmf_tls 
00:20:30.012 ************************************ 00:20:30.012 11:57:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:30.012 11:57:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:30.012 11:57:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:30.012 11:57:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:30.012 ************************************ 00:20:30.012 START TEST nvmf_fips 00:20:30.012 ************************************ 00:20:30.012 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:30.275 * Looking for test storage... 00:20:30.275 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:30.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.275 --rc genhtml_branch_coverage=1 00:20:30.275 --rc genhtml_function_coverage=1 00:20:30.275 --rc genhtml_legend=1 00:20:30.275 --rc geninfo_all_blocks=1 00:20:30.275 --rc geninfo_unexecuted_blocks=1 00:20:30.275 00:20:30.275 ' 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:30.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.275 --rc genhtml_branch_coverage=1 00:20:30.275 --rc genhtml_function_coverage=1 00:20:30.275 --rc genhtml_legend=1 00:20:30.275 --rc geninfo_all_blocks=1 00:20:30.275 --rc geninfo_unexecuted_blocks=1 00:20:30.275 00:20:30.275 ' 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:30.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.275 --rc genhtml_branch_coverage=1 00:20:30.275 --rc genhtml_function_coverage=1 00:20:30.275 --rc genhtml_legend=1 00:20:30.275 --rc geninfo_all_blocks=1 00:20:30.275 --rc geninfo_unexecuted_blocks=1 00:20:30.275 00:20:30.275 ' 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:30.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.275 --rc genhtml_branch_coverage=1 00:20:30.275 --rc genhtml_function_coverage=1 00:20:30.275 --rc genhtml_legend=1 00:20:30.275 --rc geninfo_all_blocks=1 00:20:30.275 --rc geninfo_unexecuted_blocks=1 00:20:30.275 00:20:30.275 ' 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:30.275 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:30.276 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:20:30.276 11:57:32 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:20:30.276 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:20:30.538 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:30.538 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:30.538 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:20:30.538 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:20:30.538 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:20:30.538 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:20:30.538 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:20:30.538 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:20:30.538 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:30.538 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:30.538 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:20:30.538 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:20:30.538 11:57:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:20:30.538 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:30.538 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:30.538 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:30.538 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:30.538 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:20:30.538 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:30.538 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:20:30.538 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:20:30.538 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:30.538 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:20:30.538 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:30.538 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:20:30.539 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:30.539 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:20:30.539 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:20:30.539 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:20:30.539 Error setting digest 00:20:30.539 40B2CC6C117F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:20:30.539 40B2CC6C117F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:20:30.539 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:20:30.539 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:30.539 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:30.539 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:30.539 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:20:30.539 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:20:30.539 
11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:30.539 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:30.539 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:30.539 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:30.539 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:30.539 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:30.539 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:30.539 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:30.539 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:20:30.539 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:20:30.539 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:38.683 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:38.683 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:20:38.683 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:38.683 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:38.683 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:38.683 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:38.683 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:38.683 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:20:38.683 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:38.683 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:20:38.683 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:20:38.683 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:20:38.683 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:20:38.683 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:20:38.683 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:20:38.683 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:38.683 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:38.683 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:38.683 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:38.683 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:38.683 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:38.683 11:57:40 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:38.683 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:38.684 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:38.684 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:38.684 11:57:40 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:38.684 Found net devices under 0000:31:00.0: cvl_0_0 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:38.684 Found net devices under 0000:31:00.1: cvl_0_1 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # is_hw=yes 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:38.684 11:57:40 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:38.684 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:38.684 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.693 ms 00:20:38.684 00:20:38.684 --- 10.0.0.2 ping statistics --- 00:20:38.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.684 rtt min/avg/max/mdev = 0.693/0.693/0.693/0.000 ms 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:38.684 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:38.684 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:20:38.684 00:20:38.684 --- 10.0.0.1 ping statistics --- 00:20:38.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.684 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # return 0 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # nvmfpid=1959785 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # waitforlisten 1959785 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1959785 ']' 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:38.684 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:38.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:38.685 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:38.685 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:38.685 [2024-10-11 11:57:40.920906] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
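For readability, the network setup that nvmf_tcp_init traces above reduces to a short sequence: one port of the NIC pair (cvl_0_0) is moved into a private namespace and addressed as the target, while cvl_0_1 stays in the root namespace as the initiator, and the target application is then launched under ip netns exec. The lines below are only a condensed sketch of commands already visible in the trace; interface names and addresses are the ones from this particular run and will differ on other hosts.

# Condensed sketch of the namespace topology built by nvmf_tcp_init (run as root).
ip netns add cvl_0_0_ns_spdk                                        # target namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side (root ns)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
ping -c 1 10.0.0.2                                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator
# nvmf_tgt is then started inside the namespace, as traced above:
# ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2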
00:20:38.685 [2024-10-11 11:57:40.920983] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:38.685 [2024-10-11 11:57:41.012937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.685 [2024-10-11 11:57:41.062665] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:38.685 [2024-10-11 11:57:41.062719] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:38.685 [2024-10-11 11:57:41.062727] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:38.685 [2024-10-11 11:57:41.062735] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:38.685 [2024-10-11 11:57:41.062741] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:38.685 [2024-10-11 11:57:41.063548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:39.257 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:39.257 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:20:39.257 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:39.257 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:39.257 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:39.257 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:39.257 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:20:39.257 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:39.257 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:20:39.257 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.FqK 00:20:39.257 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:39.257 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.FqK 00:20:39.257 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.FqK 00:20:39.257 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.FqK 00:20:39.257 11:57:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:39.257 [2024-10-11 11:57:41.948795] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:39.519 [2024-10-11 11:57:41.964797] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:39.519 [2024-10-11 11:57:41.965147] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:39.519 malloc0 00:20:39.519 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:39.519 11:57:42 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1960054 00:20:39.519 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1960054 /var/tmp/bdevperf.sock 00:20:39.519 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:39.519 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1960054 ']' 00:20:39.519 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:39.519 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:39.519 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:39.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:39.519 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:39.519 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:39.519 [2024-10-11 11:57:42.110999] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:20:39.519 [2024-10-11 11:57:42.111083] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1960054 ] 00:20:39.519 [2024-10-11 11:57:42.196585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.780 [2024-10-11 11:57:42.247611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:40.351 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:40.351 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:20:40.351 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.FqK 00:20:40.612 11:57:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:40.612 [2024-10-11 11:57:43.264582] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:40.874 TLSTESTn1 00:20:40.874 11:57:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:40.874 Running I/O for 10 seconds... 
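The TLS wiring traced above (fips.sh@137 through fips.sh@156) is easier to follow in condensed form. The sketch below strings together the same commands that appear in the trace, with the long Jenkins workspace paths shortened to an assumed $SPDK_DIR variable and the PSK value elided; backgrounding bdevperf and relying on its RPC socket being ready are likewise simplifications of what the script does.

# Condensed sketch of the TLS test wiring shown in the trace (assumes a built
# SPDK tree at $SPDK_DIR and an NVMe/TCP target already listening on
# 10.0.0.2:4420 with TLS enabled for the subsystem).
KEY_PATH=$(mktemp -t spdk-psk.XXX)                     # e.g. /tmp/spdk-psk.FqK
echo -n 'NVMeTLSkey-1:01:...:' > "$KEY_PATH"           # interchange-format PSK, value elided here
chmod 0600 "$KEY_PATH"
"$SPDK_DIR"/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 &                   # -z: wait for RPC configuration
"$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$KEY_PATH"
"$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
"$SPDK_DIR"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests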
00:20:43.196 4104.00 IOPS, 16.03 MiB/s [2024-10-11T09:57:46.838Z] 4671.00 IOPS, 18.25 MiB/s [2024-10-11T09:57:47.778Z] 5169.33 IOPS, 20.19 MiB/s [2024-10-11T09:57:48.718Z] 5147.00 IOPS, 20.11 MiB/s [2024-10-11T09:57:49.658Z] 5268.20 IOPS, 20.58 MiB/s [2024-10-11T09:57:50.598Z] 5397.00 IOPS, 21.08 MiB/s [2024-10-11T09:57:51.600Z] 5520.71 IOPS, 21.57 MiB/s [2024-10-11T09:57:52.541Z] 5594.38 IOPS, 21.85 MiB/s [2024-10-11T09:57:53.927Z] 5524.22 IOPS, 21.58 MiB/s [2024-10-11T09:57:53.927Z] 5597.60 IOPS, 21.87 MiB/s 00:20:51.224 Latency(us) 00:20:51.224 [2024-10-11T09:57:53.927Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:51.224 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:51.224 Verification LBA range: start 0x0 length 0x2000 00:20:51.224 TLSTESTn1 : 10.04 5587.37 21.83 0.00 0.00 22848.95 5870.93 94371.84 00:20:51.224 [2024-10-11T09:57:53.927Z] =================================================================================================================== 00:20:51.224 [2024-10-11T09:57:53.927Z] Total : 5587.37 21.83 0.00 0.00 22848.95 5870.93 94371.84 00:20:51.224 { 00:20:51.224 "results": [ 00:20:51.224 { 00:20:51.224 "job": "TLSTESTn1", 00:20:51.224 "core_mask": "0x4", 00:20:51.224 "workload": "verify", 00:20:51.224 "status": "finished", 00:20:51.224 "verify_range": { 00:20:51.224 "start": 0, 00:20:51.224 "length": 8192 00:20:51.224 }, 00:20:51.224 "queue_depth": 128, 00:20:51.224 "io_size": 4096, 00:20:51.224 "runtime": 10.041035, 00:20:51.224 "iops": 5587.372218103014, 00:20:51.224 "mibps": 21.8256727269649, 00:20:51.224 "io_failed": 0, 00:20:51.224 "io_timeout": 0, 00:20:51.224 "avg_latency_us": 22848.950313055157, 00:20:51.224 "min_latency_us": 5870.933333333333, 00:20:51.224 "max_latency_us": 94371.84 00:20:51.224 } 00:20:51.224 ], 00:20:51.224 "core_count": 1 00:20:51.224 } 00:20:51.224 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:51.224 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:51.224 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:20:51.224 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:20:51.224 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:20:51.224 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:51.224 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:20:51.224 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:20:51.224 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:20:51.224 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:51.224 nvmf_trace.0 00:20:51.224 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:20:51.224 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1960054 00:20:51.224 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1960054 ']' 00:20:51.224 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 
-- # kill -0 1960054 00:20:51.224 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:20:51.224 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:51.224 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1960054 00:20:51.224 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:51.224 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:51.224 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1960054' 00:20:51.224 killing process with pid 1960054 00:20:51.224 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1960054 00:20:51.224 Received shutdown signal, test time was about 10.000000 seconds 00:20:51.224 00:20:51.224 Latency(us) 00:20:51.224 [2024-10-11T09:57:53.927Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:51.224 [2024-10-11T09:57:53.927Z] =================================================================================================================== 00:20:51.224 [2024-10-11T09:57:53.927Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:51.224 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1960054 00:20:51.224 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:51.224 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:51.224 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:20:51.224 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:51.224 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:20:51.224 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:51.224 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:51.224 rmmod nvme_tcp 00:20:51.224 rmmod nvme_fabrics 00:20:51.224 rmmod nvme_keyring 00:20:51.224 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:51.224 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:20:51.224 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:20:51.225 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@515 -- # '[' -n 1959785 ']' 00:20:51.225 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # killprocess 1959785 00:20:51.225 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1959785 ']' 00:20:51.225 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 1959785 00:20:51.225 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:20:51.225 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:51.225 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1959785 00:20:51.485 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:51.485 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:51.485 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1959785' 00:20:51.485 killing process with pid 1959785 00:20:51.485 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1959785 00:20:51.486 11:57:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1959785 00:20:51.486 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:51.486 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:51.486 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:51.486 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:20:51.486 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-save 00:20:51.486 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:51.486 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-restore 00:20:51.486 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:51.486 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:51.486 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:51.486 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:51.486 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:54.032 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:54.032 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.FqK 00:20:54.032 00:20:54.032 real 0m23.487s 00:20:54.032 user 0m24.934s 00:20:54.032 sys 0m9.940s 00:20:54.032 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:54.032 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:54.032 ************************************ 00:20:54.032 END TEST nvmf_fips 00:20:54.032 ************************************ 00:20:54.032 11:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:54.032 11:57:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:54.032 11:57:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:54.032 11:57:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:54.032 ************************************ 00:20:54.032 START TEST nvmf_control_msg_list 00:20:54.032 ************************************ 00:20:54.032 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:54.032 * Looking for test storage... 
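One detail worth calling out from the teardown above: the iptr helper removes only the firewall rules the test added, because every rule was inserted with an SPDK_NVMF: comment. A minimal sketch of that add/cleanup idiom, using the interface and port from this run, is shown below; the pipeline form of the cleanup is a reconstruction of the three xtrace entries (iptables-save, grep -v SPDK_NVMF, iptables-restore).

# Rule added during nvmftestinit; the comment tags it as SPDK-owned.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# Cleanup during nvmftestfini: re-load the ruleset minus anything tagged SPDK_NVMF.
iptables-save | grep -v SPDK_NVMF | iptables-restore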
00:20:54.033 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:54.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.033 --rc genhtml_branch_coverage=1 00:20:54.033 --rc genhtml_function_coverage=1 00:20:54.033 --rc genhtml_legend=1 00:20:54.033 --rc geninfo_all_blocks=1 00:20:54.033 --rc geninfo_unexecuted_blocks=1 00:20:54.033 00:20:54.033 ' 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:54.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.033 --rc genhtml_branch_coverage=1 00:20:54.033 --rc genhtml_function_coverage=1 00:20:54.033 --rc genhtml_legend=1 00:20:54.033 --rc geninfo_all_blocks=1 00:20:54.033 --rc geninfo_unexecuted_blocks=1 00:20:54.033 00:20:54.033 ' 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:54.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.033 --rc genhtml_branch_coverage=1 00:20:54.033 --rc genhtml_function_coverage=1 00:20:54.033 --rc genhtml_legend=1 00:20:54.033 --rc geninfo_all_blocks=1 00:20:54.033 --rc geninfo_unexecuted_blocks=1 00:20:54.033 00:20:54.033 ' 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:54.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.033 --rc genhtml_branch_coverage=1 00:20:54.033 --rc genhtml_function_coverage=1 00:20:54.033 --rc genhtml_legend=1 00:20:54.033 --rc geninfo_all_blocks=1 00:20:54.033 --rc geninfo_unexecuted_blocks=1 00:20:54.033 00:20:54.033 ' 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:54.033 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:54.033 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:54.034 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:54.034 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:54.034 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:54.034 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:54.034 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:54.034 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:54.034 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:20:54.034 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:20:54.034 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:02.177 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:02.177 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:21:02.177 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:02.177 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:02.177 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:02.177 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:02.177 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:02.177 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:21:02.177 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:02.177 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:21:02.177 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:21:02.177 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:21:02.177 11:58:03 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:21:02.177 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:21:02.177 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:21:02.177 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:02.177 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:02.177 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:02.177 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:02.177 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:02.177 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:02.177 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:02.177 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:02.177 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:02.177 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:02.177 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:02.178 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:02.178 11:58:03 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:02.178 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:02.178 Found net devices under 0000:31:00.0: cvl_0_0 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:02.178 Found net devices under 0000:31:00.1: cvl_0_1 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # is_hw=yes 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:02.178 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:02.178 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:02.178 11:58:04 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:02.178 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:02.178 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:02.178 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:02.178 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.664 ms 00:21:02.178 00:21:02.178 --- 10.0.0.2 ping statistics --- 00:21:02.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.178 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms 00:21:02.178 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:02.178 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:02.178 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:21:02.178 00:21:02.178 --- 10.0.0.1 ping statistics --- 00:21:02.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.178 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:21:02.178 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:02.178 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@448 -- # return 0 00:21:02.178 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:02.178 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:02.178 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:02.178 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:02.178 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:02.178 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:02.178 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:02.178 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:21:02.178 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:02.178 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:02.178 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:02.178 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # nvmfpid=1966560 00:21:02.178 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # waitforlisten 1966560 00:21:02.178 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:02.178 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 1966560 ']' 00:21:02.178 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:02.178 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:02.178 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:02.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:02.178 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:02.178 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:02.178 [2024-10-11 11:58:04.229629] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:21:02.178 [2024-10-11 11:58:04.229691] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:02.178 [2024-10-11 11:58:04.318491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.178 [2024-10-11 11:58:04.369162] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:02.178 [2024-10-11 11:58:04.369209] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:02.179 [2024-10-11 11:58:04.369218] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:02.179 [2024-10-11 11:58:04.369225] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:02.179 [2024-10-11 11:58:04.369238] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:02.179 [2024-10-11 11:58:04.370035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:02.440 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:02.440 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:21:02.440 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:02.440 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:02.440 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:02.440 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:02.440 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:02.440 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:02.440 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:21:02.440 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.440 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:02.440 [2024-10-11 11:58:05.085226] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:02.440 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.440 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:21:02.440 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.440 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:02.440 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.440 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:02.440 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.440 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:02.440 Malloc0 00:21:02.440 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.440 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:02.440 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.440 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:02.440 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.440 11:58:05 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:02.440 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.440 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:02.440 [2024-10-11 11:58:05.139697] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:02.701 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.701 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1966872 00:21:02.701 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:02.701 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1966874 00:21:02.701 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:02.701 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1966876 00:21:02.701 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1966872 00:21:02.701 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:02.701 [2024-10-11 11:58:05.230621] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:02.701 [2024-10-11 11:58:05.230910] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:02.701 [2024-10-11 11:58:05.231245] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:03.646 Initializing NVMe Controllers 00:21:03.646 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:03.646 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:21:03.646 Initialization complete. Launching workers. 
00:21:03.646 ======================================================== 00:21:03.646 Latency(us) 00:21:03.646 Device Information : IOPS MiB/s Average min max 00:21:03.646 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 845.00 3.30 1183.88 148.02 42054.36 00:21:03.646 ======================================================== 00:21:03.646 Total : 845.00 3.30 1183.88 148.02 42054.36 00:21:03.646 00:21:03.907 Initializing NVMe Controllers 00:21:03.907 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:03.907 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:21:03.907 Initialization complete. Launching workers. 00:21:03.907 ======================================================== 00:21:03.907 Latency(us) 00:21:03.907 Device Information : IOPS MiB/s Average min max 00:21:03.907 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 1460.00 5.70 684.79 164.51 923.93 00:21:03.907 ======================================================== 00:21:03.907 Total : 1460.00 5.70 684.79 164.51 923.93 00:21:03.907 00:21:03.907 Initializing NVMe Controllers 00:21:03.907 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:03.907 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:21:03.907 Initialization complete. Launching workers. 00:21:03.907 ======================================================== 00:21:03.907 Latency(us) 00:21:03.907 Device Information : IOPS MiB/s Average min max 00:21:03.907 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 659.00 2.57 1568.13 188.76 41453.21 00:21:03.907 ======================================================== 00:21:03.907 Total : 659.00 2.57 1568.13 188.76 41453.21 00:21:03.907 00:21:03.907 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1966874 00:21:03.907 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1966876 00:21:03.907 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:03.907 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:21:03.907 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:03.907 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:21:03.907 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:03.907 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:21:03.907 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:03.907 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:03.907 rmmod nvme_tcp 00:21:03.907 rmmod nvme_fabrics 00:21:03.907 rmmod nvme_keyring 00:21:03.907 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:03.907 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:21:03.907 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:21:03.907 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@515 -- # '[' 
-n 1966560 ']' 00:21:03.907 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # killprocess 1966560 00:21:03.907 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 1966560 ']' 00:21:03.907 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 1966560 00:21:03.907 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:21:03.907 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:03.907 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1966560 00:21:03.907 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:03.907 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:03.907 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1966560' 00:21:03.907 killing process with pid 1966560 00:21:03.907 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 1966560 00:21:03.907 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@974 -- # wait 1966560 00:21:04.169 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:04.169 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:04.169 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:04.169 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:21:04.169 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-save 00:21:04.169 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:04.169 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-restore 00:21:04.169 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:04.169 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:04.169 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:04.169 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:04.169 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:06.718 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:06.718 00:21:06.718 real 0m12.597s 00:21:06.718 user 0m8.037s 00:21:06.718 sys 0m6.623s 00:21:06.718 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:06.718 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:06.718 ************************************ 00:21:06.718 END TEST nvmf_control_msg_list 00:21:06.718 ************************************ 
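Note on the nvmf_control_msg_list run that ends above: stripped of the xtrace noise, the test moves one E810 port (cvl_0_0) into a private network namespace, addresses both ends on 10.0.0.0/24, starts nvmf_tgt inside that namespace, creates a TCP transport limited to a single control message (--control-msg-num 1) with a 768-byte in-capsule data size, exports a 32 MB / 512-byte-block malloc bdev through nqn.2024-07.io.spdk:cnode0 on 10.0.0.2:4420, and then launches three spdk_nvme_perf jobs on different cores so they contend for that one control message. The shell sketch below is a minimal reconstruction of those commands as they appear in the trace, assuming the harness's rpc_cmd wrapper is equivalent to calling scripts/rpc.py against the default /var/tmp/spdk.sock; the interface names (cvl_0_0/cvl_0_1), paths and addresses are specific to this CI host and would differ elsewhere.

    # Minimal sketch of the flow traced above (assumption: run from the SPDK repo root;
    # cvl_0_0/cvl_0_1 names and 10.0.0.x addressing are taken from this test bed).
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target-side port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP traffic in

    # Target runs inside the namespace; RPCs travel over the shared /var/tmp/spdk.sock socket
    # (the harness waits for the socket with waitforlisten before issuing them).
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &

    ./scripts/rpc.py nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
    ./scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # Three 1-second randread perf jobs on cores 1-3 contend for the single control message;
    # the core-1 and core-3 latency tables above suggest the losers stall for ~41-42 ms (max).
    for mask in 0x2 0x4 0x8; do
        ./build/bin/spdk_nvme_perf -c "$mask" -q 1 -o 4096 -w randread -t 1 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    done
    wait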
00:21:06.718 11:58:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:06.718 11:58:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:06.718 11:58:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:06.718 11:58:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:06.718 ************************************ 00:21:06.718 START TEST nvmf_wait_for_buf 00:21:06.718 ************************************ 00:21:06.718 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:06.718 * Looking for test storage... 00:21:06.718 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:06.718 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:06.718 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:21:06.718 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:06.718 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:06.718 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:06.718 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:06.718 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:06.718 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:21:06.718 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:21:06.718 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:21:06.718 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:21:06.718 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:21:06.718 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:21:06.718 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:21:06.718 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:06.718 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:21:06.718 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:21:06.718 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:06.718 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:06.718 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:21:06.718 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:21:06.718 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:06.718 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:21:06.718 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:06.718 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:21:06.718 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:21:06.718 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:06.718 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:21:06.718 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:06.718 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:06.718 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:06.718 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:21:06.718 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:06.718 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:06.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.718 --rc genhtml_branch_coverage=1 00:21:06.718 --rc genhtml_function_coverage=1 00:21:06.718 --rc genhtml_legend=1 00:21:06.718 --rc geninfo_all_blocks=1 00:21:06.718 --rc geninfo_unexecuted_blocks=1 00:21:06.718 00:21:06.718 ' 00:21:06.718 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:06.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.718 --rc genhtml_branch_coverage=1 00:21:06.718 --rc genhtml_function_coverage=1 00:21:06.718 --rc genhtml_legend=1 00:21:06.718 --rc geninfo_all_blocks=1 00:21:06.718 --rc geninfo_unexecuted_blocks=1 00:21:06.718 00:21:06.718 ' 00:21:06.718 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:06.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.718 --rc genhtml_branch_coverage=1 00:21:06.718 --rc genhtml_function_coverage=1 00:21:06.718 --rc genhtml_legend=1 00:21:06.718 --rc geninfo_all_blocks=1 00:21:06.718 --rc geninfo_unexecuted_blocks=1 00:21:06.718 00:21:06.718 ' 00:21:06.718 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:06.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.718 --rc genhtml_branch_coverage=1 00:21:06.718 --rc genhtml_function_coverage=1 00:21:06.718 --rc genhtml_legend=1 00:21:06.718 --rc geninfo_all_blocks=1 00:21:06.718 --rc geninfo_unexecuted_blocks=1 00:21:06.718 00:21:06.718 ' 00:21:06.718 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:06.718 11:58:09 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:21:06.718 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:06.718 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:06.718 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:06.718 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:06.718 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:06.718 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:06.718 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:06.718 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:06.718 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:06.718 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:06.718 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:06.718 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:06.718 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:06.718 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:06.718 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:06.718 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:06.719 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:06.719 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:06.719 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:06.719 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:06.719 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:06.719 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.719 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.719 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.719 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:21:06.719 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.719 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:21:06.719 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:06.719 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:06.719 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:06.719 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:06.719 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:06.719 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:06.719 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:06.719 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:06.719 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:06.719 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:06.719 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:21:06.719 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@467 -- # 
'[' -z tcp ']' 00:21:06.719 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:06.719 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:06.719 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:06.719 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:06.719 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:06.719 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:06.719 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:06.719 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:06.719 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:06.719 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:21:06.719 11:58:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:14.870 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:14.870 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:14.870 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:14.870 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:14.870 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:14.870 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:14.870 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:14.870 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:21:14.870 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:14.870 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:21:14.870 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:21:14.870 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:21:14.870 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:21:14.870 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:21:14.870 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:14.870 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:14.870 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:14.870 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:14.870 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:14.870 
11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:14.870 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:14.870 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:14.870 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:14.870 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:14.870 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:14.870 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:14.870 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:14.870 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:14.870 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:14.870 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:14.870 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:14.870 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:14.870 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:14.870 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:14.870 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:14.870 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:14.870 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:14.870 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:14.870 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:14.870 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:14.870 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:14.870 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:14.870 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:14.870 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:14.870 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:14.870 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:14.870 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:14.870 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:14.870 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:21:14.870 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:14.870 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:14.870 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:14.870 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:14.871 Found net devices under 0000:31:00.0: cvl_0_0 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:14.871 Found net devices under 0000:31:00.1: cvl_0_1 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # is_hw=yes 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:14.871 11:58:16 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:14.871 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:14.871 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:21:14.871 00:21:14.871 --- 10.0.0.2 ping statistics --- 00:21:14.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:14.871 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:14.871 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:14.871 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:21:14.871 00:21:14.871 --- 10.0.0.1 ping statistics --- 00:21:14.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:14.871 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@448 -- # return 0 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # nvmfpid=1971312 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # waitforlisten 1971312 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 1971312 ']' 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:14.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:14.871 11:58:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:14.871 [2024-10-11 11:58:16.958010] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
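[editor's note] For reference, the nvmf_tcp_init / nvmfappstart sequence traced above condenses to the shell steps below. This is a minimal sketch using the interface names and addresses from this run (cvl_0_0, cvl_0_1, 10.0.0.1/10.0.0.2); the real logic lives in nvmf/common.sh and covers more device and transport combinations than shown here.

# put the target-side port into its own network namespace and address both ends
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator side, host namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# open the NVMe/TCP port and verify reachability in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# start the target inside the namespace, paused until RPC configuration arrives
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc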
00:21:14.871 [2024-10-11 11:58:16.958084] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:14.871 [2024-10-11 11:58:17.050661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.871 [2024-10-11 11:58:17.102129] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:14.871 [2024-10-11 11:58:17.102178] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:14.871 [2024-10-11 11:58:17.102187] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:14.871 [2024-10-11 11:58:17.102195] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:14.871 [2024-10-11 11:58:17.102201] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:14.871 [2024-10-11 11:58:17.103050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:15.136 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:15.136 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:21:15.136 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:15.136 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:15.136 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:15.136 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:15.136 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:15.136 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:15.136 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:21:15.136 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.136 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:15.397 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.397 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:21:15.397 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.397 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:15.397 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.397 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:21:15.397 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.397 11:58:17 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:15.397 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.397 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:15.397 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.397 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:15.397 Malloc0 00:21:15.397 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.397 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:21:15.397 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.397 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:15.397 [2024-10-11 11:58:17.951491] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:15.397 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.397 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:21:15.397 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.397 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:15.397 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.397 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:15.397 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.397 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:15.397 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.397 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:15.397 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.397 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:15.397 [2024-10-11 11:58:17.987815] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:15.397 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.397 11:58:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:15.397 [2024-10-11 11:58:18.072190] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:16.785 Initializing NVMe Controllers 00:21:16.785 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:16.785 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:21:16.785 Initialization complete. Launching workers. 00:21:16.785 ======================================================== 00:21:16.785 Latency(us) 00:21:16.785 Device Information : IOPS MiB/s Average min max 00:21:16.785 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32263.69 4997.82 63852.95 00:21:16.785 ======================================================== 00:21:16.785 Total : 129.00 16.12 32263.69 4997.82 63852.95 00:21:16.785 00:21:16.785 11:58:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:21:16.785 11:58:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:21:16.785 11:58:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.785 11:58:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:17.046 11:58:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.046 11:58:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:21:17.046 11:58:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:21:17.046 11:58:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:17.046 11:58:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:21:17.046 11:58:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:17.046 11:58:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:21:17.046 11:58:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:17.046 11:58:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:21:17.046 11:58:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:17.046 11:58:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:17.046 rmmod nvme_tcp 00:21:17.046 rmmod nvme_fabrics 00:21:17.046 rmmod nvme_keyring 00:21:17.046 11:58:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:17.046 11:58:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:21:17.046 11:58:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:21:17.046 11:58:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@515 -- # '[' -n 1971312 ']' 00:21:17.046 11:58:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # killprocess 1971312 00:21:17.046 11:58:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 1971312 ']' 00:21:17.046 11:58:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # kill -0 1971312 00:21:17.046 11:58:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@955 -- # uname 00:21:17.046 11:58:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:17.046 11:58:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1971312 00:21:17.046 11:58:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:17.046 11:58:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:17.046 11:58:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1971312' 00:21:17.046 killing process with pid 1971312 00:21:17.046 11:58:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 1971312 00:21:17.046 11:58:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 1971312 00:21:17.307 11:58:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:17.307 11:58:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:17.307 11:58:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:17.307 11:58:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:21:17.307 11:58:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-save 00:21:17.307 11:58:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:17.307 11:58:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-restore 00:21:17.307 11:58:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:17.307 11:58:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:17.307 11:58:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:17.307 11:58:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:17.307 11:58:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:19.219 11:58:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:19.219 00:21:19.219 real 0m13.000s 00:21:19.219 user 0m5.279s 00:21:19.219 sys 0m6.306s 00:21:19.219 11:58:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:19.219 11:58:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:19.219 ************************************ 00:21:19.219 END TEST nvmf_wait_for_buf 00:21:19.219 ************************************ 00:21:19.480 11:58:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:21:19.480 11:58:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:21:19.480 11:58:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:21:19.480 11:58:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:21:19.480 11:58:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:21:19.480 11:58:21 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:27.712 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:27.712 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:21:27.712 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:27.712 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:27.712 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:27.712 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:27.712 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:27.712 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:21:27.712 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:27.712 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:21:27.712 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:21:27.712 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:21:27.712 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:21:27.712 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:21:27.712 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:21:27.712 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:27.712 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:27.712 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:27.712 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:27.712 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:27.712 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:27.712 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:27.712 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:27.712 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:27.712 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:27.712 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:27.712 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:27.712 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:27.712 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:27.712 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:27.712 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:27.712 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:27.712 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:27.712 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:21:27.712 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:27.712 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:27.712 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:27.712 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:27.712 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.712 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:27.712 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:27.712 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:27.712 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:27.712 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:27.712 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:27.712 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:27.712 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.712 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:27.712 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:27.712 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:27.713 Found net devices under 0000:31:00.0: cvl_0_0 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:27.713 Found net devices under 0000:31:00.1: cvl_0_1 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:27.713 ************************************ 00:21:27.713 START TEST nvmf_perf_adq 00:21:27.713 ************************************ 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:27.713 * Looking for test storage... 00:21:27.713 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lcov --version 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:27.713 11:58:29 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:27.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:27.713 --rc genhtml_branch_coverage=1 00:21:27.713 --rc genhtml_function_coverage=1 00:21:27.713 --rc genhtml_legend=1 00:21:27.713 --rc geninfo_all_blocks=1 00:21:27.713 --rc geninfo_unexecuted_blocks=1 00:21:27.713 00:21:27.713 ' 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:27.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:27.713 --rc genhtml_branch_coverage=1 00:21:27.713 --rc genhtml_function_coverage=1 00:21:27.713 --rc genhtml_legend=1 00:21:27.713 --rc geninfo_all_blocks=1 00:21:27.713 --rc geninfo_unexecuted_blocks=1 00:21:27.713 00:21:27.713 ' 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:27.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:27.713 --rc genhtml_branch_coverage=1 00:21:27.713 --rc genhtml_function_coverage=1 00:21:27.713 --rc genhtml_legend=1 00:21:27.713 --rc geninfo_all_blocks=1 00:21:27.713 --rc geninfo_unexecuted_blocks=1 00:21:27.713 00:21:27.713 ' 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:27.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:27.713 --rc genhtml_branch_coverage=1 00:21:27.713 --rc genhtml_function_coverage=1 00:21:27.713 --rc genhtml_legend=1 00:21:27.713 --rc geninfo_all_blocks=1 00:21:27.713 --rc geninfo_unexecuted_blocks=1 00:21:27.713 00:21:27.713 ' 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
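[editor's note] Before the perf_adq test begins its own device discovery below, the wait_for_buf run traced above can be summarized as follows. This is a simplified reading of target/wait_for_buf.sh using the values from this run; rpc_cmd stands for the harness's scripts/rpc.py wrapper aimed at the target inside cvl_0_0_ns_spdk, and the failure handling at the end is paraphrased rather than copied from the script.

# small iobuf pool (154 x 8192-byte buffers) so the transport has to wait for buffers under load,
# then finish target initialization
rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0
rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192
rpc_cmd framework_start_init

# create the Malloc0 bdev and export it over NVMe/TCP on 10.0.0.2:4420
rpc_cmd bdev_malloc_create -b Malloc0 32 512
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# drive large reads at the listener, then check that the small pool had to retry
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
retry_count=$(rpc_cmd iobuf_get_stats \
    | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
[[ $retry_count -eq 0 ]] && echo 'no iobuf retries - test failed'   # this run saw 2038 retries, so the check passes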
00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:27.713 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.714 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.714 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.714 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:27.714 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.714 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:21:27.714 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:27.714 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:27.714 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:27.714 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:27.714 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:27.714 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:27.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:27.714 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:27.714 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:27.714 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:27.714 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:27.714 11:58:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:27.714 11:58:29 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:34.400 11:58:36 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:34.400 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:34.400 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:34.400 Found net devices under 0000:31:00.0: cvl_0_0 00:21:34.400 11:58:36 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:34.400 Found net devices under 0000:31:00.1: cvl_0_1 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:34.400 11:58:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:35.784 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:38.329 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:43.667 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:43.667 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 
'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:43.667 Found net devices under 0000:31:00.0: cvl_0_0 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:43.667 Found net devices under 0000:31:00.1: cvl_0_1 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:43.667 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:43.668 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:43.668 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:43.668 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:43.668 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:43.668 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:43.668 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:43.668 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:43.668 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:43.668 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:43.668 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:43.668 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:43.668 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:43.668 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:43.668 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:43.668 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:43.668 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:43.668 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:43.668 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:43.668 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:43.668 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:43.668 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:43.668 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:43.668 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:43.668 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:43.668 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:43.668 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.686 ms 00:21:43.668 00:21:43.668 --- 10.0.0.2 ping statistics --- 00:21:43.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.668 rtt min/avg/max/mdev = 0.686/0.686/0.686/0.000 ms 00:21:43.668 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:43.668 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:43.668 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:21:43.668 00:21:43.668 --- 10.0.0.1 ping statistics --- 00:21:43.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.668 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:21:43.668 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:43.668 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:21:43.668 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:43.668 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:43.668 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:43.668 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:43.668 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:43.668 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:43.668 11:58:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:43.668 11:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:43.668 11:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:43.668 11:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:43.668 11:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:43.668 11:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=1981795 00:21:43.668 11:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 1981795 00:21:43.668 11:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:43.668 11:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1981795 ']' 00:21:43.668 11:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:43.668 11:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:43.668 11:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:43.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:43.668 11:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:43.668 11:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:43.668 [2024-10-11 11:58:46.096404] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:21:43.668 [2024-10-11 11:58:46.096468] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:43.668 [2024-10-11 11:58:46.186431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:43.668 [2024-10-11 11:58:46.241628] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:43.668 [2024-10-11 11:58:46.241679] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:43.668 [2024-10-11 11:58:46.241688] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:43.668 [2024-10-11 11:58:46.241696] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:43.668 [2024-10-11 11:58:46.241702] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:43.668 [2024-10-11 11:58:46.244129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:43.668 [2024-10-11 11:58:46.244363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:43.668 [2024-10-11 11:58:46.244406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:43.668 [2024-10-11 11:58:46.244193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:44.240 11:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:44.240 11:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:21:44.240 11:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:44.240 11:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:44.240 11:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:44.501 11:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:44.501 11:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:21:44.501 11:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:44.501 11:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:44.501 11:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.501 11:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:44.501 11:58:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.501 11:58:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:44.501 11:58:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:44.501 11:58:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.501 11:58:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:44.501 11:58:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.501 
11:58:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:44.501 11:58:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.501 11:58:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:44.501 11:58:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.501 11:58:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:44.501 11:58:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.501 11:58:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:44.501 [2024-10-11 11:58:47.120876] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:44.501 11:58:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.501 11:58:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:44.501 11:58:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.501 11:58:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:44.501 Malloc1 00:21:44.501 11:58:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.501 11:58:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:44.501 11:58:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.501 11:58:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:44.501 11:58:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.501 11:58:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:44.501 11:58:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.501 11:58:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:44.501 11:58:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.501 11:58:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:44.501 11:58:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.501 11:58:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:44.501 [2024-10-11 11:58:47.199581] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:44.501 11:58:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.762 11:58:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=1982028 00:21:44.762 11:58:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:21:44.762 11:58:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:46.680 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:21:46.680 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.680 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:46.680 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.680 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:21:46.680 "tick_rate": 2400000000, 00:21:46.680 "poll_groups": [ 00:21:46.680 { 00:21:46.680 "name": "nvmf_tgt_poll_group_000", 00:21:46.680 "admin_qpairs": 1, 00:21:46.680 "io_qpairs": 1, 00:21:46.680 "current_admin_qpairs": 1, 00:21:46.680 "current_io_qpairs": 1, 00:21:46.680 "pending_bdev_io": 0, 00:21:46.680 "completed_nvme_io": 15325, 00:21:46.680 "transports": [ 00:21:46.680 { 00:21:46.680 "trtype": "TCP" 00:21:46.680 } 00:21:46.680 ] 00:21:46.680 }, 00:21:46.680 { 00:21:46.680 "name": "nvmf_tgt_poll_group_001", 00:21:46.680 "admin_qpairs": 0, 00:21:46.680 "io_qpairs": 1, 00:21:46.680 "current_admin_qpairs": 0, 00:21:46.680 "current_io_qpairs": 1, 00:21:46.680 "pending_bdev_io": 0, 00:21:46.680 "completed_nvme_io": 15590, 00:21:46.680 "transports": [ 00:21:46.680 { 00:21:46.680 "trtype": "TCP" 00:21:46.680 } 00:21:46.680 ] 00:21:46.680 }, 00:21:46.680 { 00:21:46.680 "name": "nvmf_tgt_poll_group_002", 00:21:46.680 "admin_qpairs": 0, 00:21:46.680 "io_qpairs": 1, 00:21:46.680 "current_admin_qpairs": 0, 00:21:46.680 "current_io_qpairs": 1, 00:21:46.680 "pending_bdev_io": 0, 00:21:46.680 "completed_nvme_io": 15844, 00:21:46.680 "transports": [ 00:21:46.680 { 00:21:46.680 "trtype": "TCP" 00:21:46.680 } 00:21:46.680 ] 00:21:46.680 }, 00:21:46.680 { 00:21:46.680 "name": "nvmf_tgt_poll_group_003", 00:21:46.680 "admin_qpairs": 0, 00:21:46.680 "io_qpairs": 1, 00:21:46.680 "current_admin_qpairs": 0, 00:21:46.680 "current_io_qpairs": 1, 00:21:46.680 "pending_bdev_io": 0, 00:21:46.680 "completed_nvme_io": 15537, 00:21:46.681 "transports": [ 00:21:46.681 { 00:21:46.681 "trtype": "TCP" 00:21:46.681 } 00:21:46.681 ] 00:21:46.681 } 00:21:46.681 ] 00:21:46.681 }' 00:21:46.681 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:46.681 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:21:46.681 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:21:46.681 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:21:46.681 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 1982028 00:21:54.816 Initializing NVMe Controllers 00:21:54.816 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:54.816 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:54.816 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:54.816 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:54.817 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:21:54.817 Initialization complete. Launching workers. 00:21:54.817 ======================================================== 00:21:54.817 Latency(us) 00:21:54.817 Device Information : IOPS MiB/s Average min max 00:21:54.817 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 12880.60 50.31 4968.55 1234.80 13381.81 00:21:54.817 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 12988.40 50.74 4928.03 1221.66 13520.68 00:21:54.817 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 12489.50 48.79 5124.29 1450.80 13992.99 00:21:54.817 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 12264.60 47.91 5218.29 1210.98 13124.22 00:21:54.817 ======================================================== 00:21:54.817 Total : 50623.10 197.75 5057.08 1210.98 13992.99 00:21:54.817 00:21:54.817 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:21:54.817 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:54.817 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:54.817 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:54.817 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:54.817 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:54.817 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:54.817 rmmod nvme_tcp 00:21:54.817 rmmod nvme_fabrics 00:21:54.817 rmmod nvme_keyring 00:21:54.817 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:54.817 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:54.817 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:54.817 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 1981795 ']' 00:21:54.817 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 1981795 00:21:54.817 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1981795 ']' 00:21:54.817 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1981795 00:21:54.817 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:21:54.817 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:54.817 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1981795 00:21:54.817 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:54.817 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:54.817 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1981795' 00:21:54.817 killing process with pid 1981795 00:21:54.817 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1981795 00:21:54.817 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1981795 00:21:55.077 11:58:57 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:55.077 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:55.078 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:55.078 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:55.078 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:21:55.078 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:55.078 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:21:55.078 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:55.078 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:55.078 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:55.078 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:55.078 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.622 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:57.622 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:21:57.622 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:57.622 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:59.007 11:59:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:22:01.567 11:59:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:06.860 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:22:06.860 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:06.860 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:06.860 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:06.860 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:06.860 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:06.860 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:06.860 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:06.860 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:06.860 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:06.860 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:06.860 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:06.860 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:06.860 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:06.860 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:06.860 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:06.860 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:06.860 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:06.860 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:06.860 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:06.860 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:06.860 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:06.860 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:06.860 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:06.860 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:06.860 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:06.860 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:06.860 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:06.860 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:06.860 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:06.860 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:06.860 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:06.860 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:06.860 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:06.860 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:06.861 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:06.861 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:06.861 Found net devices under 0000:31:00.0: cvl_0_0 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in 
"${pci_devs[@]}" 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:06.861 Found net devices under 0000:31:00.1: cvl_0_1 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:06.861 11:59:08 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:06.861 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:06.861 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:22:06.861 00:22:06.861 --- 10.0.0.2 ping statistics --- 00:22:06.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:06.861 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:06.861 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:06.861 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:22:06.861 00:22:06.861 --- 10.0.0.1 ping statistics --- 00:22:06.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:06.861 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:06.861 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:06.861 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:22:06.861 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:06.861 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:06.861 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:06.861 net.core.busy_poll = 1 00:22:06.862 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:06.862 net.core.busy_read = 1 00:22:06.862 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:06.862 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:06.862 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:22:06.862 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:06.862 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:06.862 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:06.862 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:06.862 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:06.862 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:06.862 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=1986746 00:22:06.862 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 1986746 00:22:06.862 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:06.862 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1986746 ']' 00:22:06.862 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:06.862 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:06.862 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:06.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:06.862 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:06.862 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:06.862 [2024-10-11 11:59:09.382663] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:22:06.862 [2024-10-11 11:59:09.382734] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:06.862 [2024-10-11 11:59:09.475599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:06.862 [2024-10-11 11:59:09.528967] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:06.862 [2024-10-11 11:59:09.529020] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:06.862 [2024-10-11 11:59:09.529030] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:06.862 [2024-10-11 11:59:09.529037] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:06.862 [2024-10-11 11:59:09.529043] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:06.862 [2024-10-11 11:59:09.531123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:06.862 [2024-10-11 11:59:09.531299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:06.862 [2024-10-11 11:59:09.531502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:06.862 [2024-10-11 11:59:09.531503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:07.806 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:07.806 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:22:07.806 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:07.806 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:07.806 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:07.806 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:07.806 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:22:07.806 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:07.806 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:07.806 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.806 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:07.806 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.806 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:07.806 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:07.806 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.806 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:07.806 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.806 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:07.806 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.806 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:07.806 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.806 11:59:10 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:07.806 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.806 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:07.806 [2024-10-11 11:59:10.403888] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:07.806 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.806 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:07.806 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.806 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:07.806 Malloc1 00:22:07.806 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.806 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:07.806 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.806 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:07.806 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.806 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:07.806 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.806 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:07.806 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.806 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:07.806 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.806 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:07.806 [2024-10-11 11:59:10.479322] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:07.806 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.806 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=1986852 00:22:07.806 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:22:07.807 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:10.355 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:22:10.355 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.355 11:59:12 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:10.355 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.355 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:22:10.355 "tick_rate": 2400000000, 00:22:10.355 "poll_groups": [ 00:22:10.355 { 00:22:10.355 "name": "nvmf_tgt_poll_group_000", 00:22:10.355 "admin_qpairs": 1, 00:22:10.355 "io_qpairs": 1, 00:22:10.355 "current_admin_qpairs": 1, 00:22:10.355 "current_io_qpairs": 1, 00:22:10.355 "pending_bdev_io": 0, 00:22:10.355 "completed_nvme_io": 23413, 00:22:10.355 "transports": [ 00:22:10.355 { 00:22:10.355 "trtype": "TCP" 00:22:10.355 } 00:22:10.355 ] 00:22:10.355 }, 00:22:10.355 { 00:22:10.355 "name": "nvmf_tgt_poll_group_001", 00:22:10.355 "admin_qpairs": 0, 00:22:10.355 "io_qpairs": 3, 00:22:10.355 "current_admin_qpairs": 0, 00:22:10.355 "current_io_qpairs": 3, 00:22:10.355 "pending_bdev_io": 0, 00:22:10.355 "completed_nvme_io": 28821, 00:22:10.355 "transports": [ 00:22:10.355 { 00:22:10.355 "trtype": "TCP" 00:22:10.355 } 00:22:10.355 ] 00:22:10.355 }, 00:22:10.355 { 00:22:10.355 "name": "nvmf_tgt_poll_group_002", 00:22:10.355 "admin_qpairs": 0, 00:22:10.355 "io_qpairs": 0, 00:22:10.355 "current_admin_qpairs": 0, 00:22:10.355 "current_io_qpairs": 0, 00:22:10.355 "pending_bdev_io": 0, 00:22:10.355 "completed_nvme_io": 0, 00:22:10.355 "transports": [ 00:22:10.355 { 00:22:10.355 "trtype": "TCP" 00:22:10.355 } 00:22:10.355 ] 00:22:10.355 }, 00:22:10.355 { 00:22:10.355 "name": "nvmf_tgt_poll_group_003", 00:22:10.355 "admin_qpairs": 0, 00:22:10.355 "io_qpairs": 0, 00:22:10.355 "current_admin_qpairs": 0, 00:22:10.355 "current_io_qpairs": 0, 00:22:10.355 "pending_bdev_io": 0, 00:22:10.355 "completed_nvme_io": 0, 00:22:10.355 "transports": [ 00:22:10.355 { 00:22:10.355 "trtype": "TCP" 00:22:10.355 } 00:22:10.355 ] 00:22:10.355 } 00:22:10.355 ] 00:22:10.355 }' 00:22:10.355 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:10.355 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:22:10.355 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:22:10.355 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:22:10.355 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 1986852 00:22:18.490 Initializing NVMe Controllers 00:22:18.490 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:18.490 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:18.490 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:18.490 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:18.490 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:18.490 Initialization complete. Launching workers. 
00:22:18.490 ======================================================== 00:22:18.490 Latency(us) 00:22:18.490 Device Information : IOPS MiB/s Average min max 00:22:18.490 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7403.70 28.92 8645.40 1039.15 55786.55 00:22:18.490 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7405.00 28.93 8642.85 1075.39 58595.55 00:22:18.490 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 16625.30 64.94 3859.91 948.45 45115.00 00:22:18.490 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5501.40 21.49 11634.05 995.75 62349.07 00:22:18.490 ======================================================== 00:22:18.490 Total : 36935.39 144.28 6936.00 948.45 62349.07 00:22:18.490 00:22:18.490 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:22:18.490 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:18.490 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:22:18.490 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:18.490 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:22:18.490 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:18.490 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:18.490 rmmod nvme_tcp 00:22:18.490 rmmod nvme_fabrics 00:22:18.490 rmmod nvme_keyring 00:22:18.491 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:18.491 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:22:18.491 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:22:18.491 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 1986746 ']' 00:22:18.491 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 1986746 00:22:18.491 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1986746 ']' 00:22:18.491 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1986746 00:22:18.491 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:22:18.491 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:18.491 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1986746 00:22:18.491 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:18.491 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:18.491 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1986746' 00:22:18.491 killing process with pid 1986746 00:22:18.491 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1986746 00:22:18.491 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1986746 00:22:18.491 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:18.491 
11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:18.491 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:18.491 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:18.491 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:22:18.491 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:18.491 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:22:18.491 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:18.491 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:18.491 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:18.491 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:18.491 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:22:21.792 00:22:21.792 real 0m54.718s 00:22:21.792 user 2m49.575s 00:22:21.792 sys 0m11.959s 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:21.792 ************************************ 00:22:21.792 END TEST nvmf_perf_adq 00:22:21.792 ************************************ 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:21.792 ************************************ 00:22:21.792 START TEST nvmf_shutdown 00:22:21.792 ************************************ 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:21.792 * Looking for test storage... 
00:22:21.792 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:21.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:21.792 --rc genhtml_branch_coverage=1 00:22:21.792 --rc genhtml_function_coverage=1 00:22:21.792 --rc genhtml_legend=1 00:22:21.792 --rc geninfo_all_blocks=1 00:22:21.792 --rc geninfo_unexecuted_blocks=1 00:22:21.792 00:22:21.792 ' 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:21.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:21.792 --rc genhtml_branch_coverage=1 00:22:21.792 --rc genhtml_function_coverage=1 00:22:21.792 --rc genhtml_legend=1 00:22:21.792 --rc geninfo_all_blocks=1 00:22:21.792 --rc geninfo_unexecuted_blocks=1 00:22:21.792 00:22:21.792 ' 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:21.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:21.792 --rc genhtml_branch_coverage=1 00:22:21.792 --rc genhtml_function_coverage=1 00:22:21.792 --rc genhtml_legend=1 00:22:21.792 --rc geninfo_all_blocks=1 00:22:21.792 --rc geninfo_unexecuted_blocks=1 00:22:21.792 00:22:21.792 ' 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:21.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:21.792 --rc genhtml_branch_coverage=1 00:22:21.792 --rc genhtml_function_coverage=1 00:22:21.792 --rc genhtml_legend=1 00:22:21.792 --rc geninfo_all_blocks=1 00:22:21.792 --rc geninfo_unexecuted_blocks=1 00:22:21.792 00:22:21.792 ' 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
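(Aside: the trace above is scripts/common.sh deciding whether the installed lcov predates 2.x, by walking the dot-separated version fields left to right and comparing them numerically. A rough re-implementation of the idea, not copied from scripts/common.sh:)

lt_version() {
    # true (exit 0) when version $1 sorts strictly before version $2
    local -a a b
    IFS=. read -ra a <<< "$1"
    IFS=. read -ra b <<< "$2"
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1    # equal versions are not less-than
}

lt_version 1.15 2 && echo "lcov is pre-2.0: use the --rc lcov_branch_coverage=1 style options"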
00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:21.792 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:21.793 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:21.793 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:21.793 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:21.793 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:21.793 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:22:21.793 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:21.793 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:21.793 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:21.793 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.793 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.793 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.793 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:21.793 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.793 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:22:21.793 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:21.793 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:21.793 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:21.793 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:21.793 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:21.793 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:21.793 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:21.793 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:21.793 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:21.793 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:21.793 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:21.793 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:21.793 11:59:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:21.793 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:21.793 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:21.793 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:21.793 ************************************ 00:22:21.793 START TEST nvmf_shutdown_tc1 00:22:21.793 ************************************ 00:22:21.793 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:22:21.793 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:22:21.793 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:21.793 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:21.793 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:21.793 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:21.793 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:21.793 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:21.793 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:21.793 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:21.793 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:21.793 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:21.793 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:21.793 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:21.793 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:29.938 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:29.938 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:29.938 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:29.938 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:29.938 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:29.938 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:29.938 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:29.938 11:59:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:22:29.938 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:29.938 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:22:29.938 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:22:29.938 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:22:29.938 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:22:29.938 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:22:29.938 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:29.938 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:29.938 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:29.938 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:29.938 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:29.938 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:29.938 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:29.938 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:29.938 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:29.938 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:29.938 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:29.938 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:29.938 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:29.938 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:29.938 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:29.938 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:29.939 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:29.939 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:29.939 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:29.939 11:59:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:29.939 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:29.939 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:29.939 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:29.939 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:29.939 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:29.939 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:29.939 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:29.939 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:29.939 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:29.939 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:29.939 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:29.939 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:29.939 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:29.939 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:29.939 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:29.939 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:29.939 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:29.939 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:29.939 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:29.939 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:29.939 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:29.939 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:29.939 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:29.939 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:29.939 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:29.939 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:29.939 Found net devices under 0000:31:00.0: cvl_0_0 00:22:29.939 11:59:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:29.939 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:29.939 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:29.939 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:29.939 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:29.939 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:29.939 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:29.939 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:29.939 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:29.939 Found net devices under 0000:31:00.1: cvl_0_1 00:22:29.939 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:29.939 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:29.939 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # is_hw=yes 00:22:29.939 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:29.939 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:29.939 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:29.939 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:29.939 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:29.939 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:29.939 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:29.939 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:29.939 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:29.939 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:29.939 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:29.939 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:29.939 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:29.939 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:22:29.939 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:29.939 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:29.939 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:29.939 11:59:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:29.939 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:29.939 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:29.939 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:29.939 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:29.939 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:29.939 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:29.939 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:29.939 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:29.939 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:29.939 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.678 ms 00:22:29.939 00:22:29.939 --- 10.0.0.2 ping statistics --- 00:22:29.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.939 rtt min/avg/max/mdev = 0.678/0.678/0.678/0.000 ms 00:22:29.939 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:29.939 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:29.939 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:22:29.939 00:22:29.939 --- 10.0.0.1 ping statistics --- 00:22:29.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.939 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:22:29.939 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:29.939 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # return 0 00:22:29.939 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:29.939 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:29.939 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:29.939 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:29.939 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:29.939 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:29.939 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:29.939 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:29.939 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:29.939 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:29.939 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:29.939 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # nvmfpid=1993552 00:22:29.939 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # waitforlisten 1993552 00:22:29.939 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:29.939 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1993552 ']' 00:22:29.939 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:29.939 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:29.939 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:29.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
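(Aside: condensed, the NIC plumbing traced above is the standard SPDK phy-test pattern. The target-side port is moved into its own network namespace while the initiator port stays in the root namespace, so NVMe/TCP traffic flows between the two physical ports at 10.0.0.1 and 10.0.0.2 rather than over loopback. The interface names and addresses are the ones this rig detected; treat the snippet as a sketch of the pattern rather than a copy of nvmf/common.sh:)

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                             # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # admit NVMe/TCP on the initiator port
ping -c 1 10.0.0.2                                                    # root namespace to namespaced target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # namespace back to initiator
# every nvmf_tgt below is then launched as: ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt ...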
00:22:29.939 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:29.940 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:29.940 [2024-10-11 11:59:32.269688] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:22:29.940 [2024-10-11 11:59:32.269757] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:29.940 [2024-10-11 11:59:32.360840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:29.940 [2024-10-11 11:59:32.416149] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:29.940 [2024-10-11 11:59:32.416199] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:29.940 [2024-10-11 11:59:32.416208] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:29.940 [2024-10-11 11:59:32.416215] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:29.940 [2024-10-11 11:59:32.416222] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:29.940 [2024-10-11 11:59:32.418137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:29.940 [2024-10-11 11:59:32.418367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:29.940 [2024-10-11 11:59:32.418688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:29.940 [2024-10-11 11:59:32.418691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:30.511 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:30.511 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:22:30.511 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:30.511 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:30.511 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:30.511 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:30.511 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:30.511 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.511 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:30.511 [2024-10-11 11:59:33.146506] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:30.511 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.511 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:30.511 11:59:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:30.511 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:30.511 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:30.511 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:30.511 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:30.511 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:30.511 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:30.511 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:30.511 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:30.511 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:30.511 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:30.511 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:30.511 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:30.511 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:30.511 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:30.511 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:30.511 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:30.511 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:30.511 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:30.512 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:30.512 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:30.512 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:30.512 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:30.512 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:30.773 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:30.773 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.773 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:30.773 Malloc1 
00:22:30.773 [2024-10-11 11:59:33.273430] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:30.773 Malloc2 00:22:30.773 Malloc3 00:22:30.773 Malloc4 00:22:30.773 Malloc5 00:22:31.034 Malloc6 00:22:31.034 Malloc7 00:22:31.034 Malloc8 00:22:31.034 Malloc9 00:22:31.034 Malloc10 00:22:31.034 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.034 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:31.034 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:31.034 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:31.296 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1993820 00:22:31.296 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1993820 /var/tmp/bdevperf.sock 00:22:31.296 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1993820 ']' 00:22:31.296 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:31.296 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:31.296 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:31.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
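(Aside: the per-subsystem loop above only cats RPC blocks into rpcs.txt, and the heredoc body itself is not echoed in the trace. Judging from the Malloc1 through Malloc10 bdevs created here, the MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE values of 64 and 512, and the cnode1 through cnode10 subsystems the initiator config below attaches to, each iteration amounts to roughly the following; the serial-number format is a guess:)

for i in {1..10}; do
    cat >> rpcs.txt <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
# rpcs.txt is then replayed against the live target in one pass (e.g. scripts/rpc.py < rpcs.txt)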
00:22:31.296 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:31.296 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:31.296 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:31.296 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:31.296 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:22:31.296 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:22:31.296 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:31.296 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:31.296 { 00:22:31.296 "params": { 00:22:31.296 "name": "Nvme$subsystem", 00:22:31.296 "trtype": "$TEST_TRANSPORT", 00:22:31.296 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:31.296 "adrfam": "ipv4", 00:22:31.296 "trsvcid": "$NVMF_PORT", 00:22:31.296 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:31.296 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:31.296 "hdgst": ${hdgst:-false}, 00:22:31.296 "ddgst": ${ddgst:-false} 00:22:31.296 }, 00:22:31.296 "method": "bdev_nvme_attach_controller" 00:22:31.296 } 00:22:31.296 EOF 00:22:31.296 )") 00:22:31.296 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:31.296 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:31.296 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:31.296 { 00:22:31.296 "params": { 00:22:31.296 "name": "Nvme$subsystem", 00:22:31.296 "trtype": "$TEST_TRANSPORT", 00:22:31.296 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:31.296 "adrfam": "ipv4", 00:22:31.296 "trsvcid": "$NVMF_PORT", 00:22:31.296 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:31.296 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:31.296 "hdgst": ${hdgst:-false}, 00:22:31.296 "ddgst": ${ddgst:-false} 00:22:31.296 }, 00:22:31.296 "method": "bdev_nvme_attach_controller" 00:22:31.296 } 00:22:31.296 EOF 00:22:31.296 )") 00:22:31.296 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:31.296 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:31.296 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:31.296 { 00:22:31.296 "params": { 00:22:31.296 "name": "Nvme$subsystem", 00:22:31.296 "trtype": "$TEST_TRANSPORT", 00:22:31.296 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:31.296 "adrfam": "ipv4", 00:22:31.296 "trsvcid": "$NVMF_PORT", 00:22:31.296 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:31.296 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:31.296 "hdgst": ${hdgst:-false}, 00:22:31.296 "ddgst": ${ddgst:-false} 00:22:31.296 }, 00:22:31.297 "method": "bdev_nvme_attach_controller" 
00:22:31.297 } 00:22:31.297 EOF 00:22:31.297 )") 00:22:31.297 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:31.297 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:31.297 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:31.297 { 00:22:31.297 "params": { 00:22:31.297 "name": "Nvme$subsystem", 00:22:31.297 "trtype": "$TEST_TRANSPORT", 00:22:31.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:31.297 "adrfam": "ipv4", 00:22:31.297 "trsvcid": "$NVMF_PORT", 00:22:31.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:31.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:31.297 "hdgst": ${hdgst:-false}, 00:22:31.297 "ddgst": ${ddgst:-false} 00:22:31.297 }, 00:22:31.297 "method": "bdev_nvme_attach_controller" 00:22:31.297 } 00:22:31.297 EOF 00:22:31.297 )") 00:22:31.297 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:31.297 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:31.297 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:31.297 { 00:22:31.297 "params": { 00:22:31.297 "name": "Nvme$subsystem", 00:22:31.297 "trtype": "$TEST_TRANSPORT", 00:22:31.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:31.297 "adrfam": "ipv4", 00:22:31.297 "trsvcid": "$NVMF_PORT", 00:22:31.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:31.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:31.297 "hdgst": ${hdgst:-false}, 00:22:31.297 "ddgst": ${ddgst:-false} 00:22:31.297 }, 00:22:31.297 "method": "bdev_nvme_attach_controller" 00:22:31.297 } 00:22:31.297 EOF 00:22:31.297 )") 00:22:31.297 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:31.297 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:31.297 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:31.297 { 00:22:31.297 "params": { 00:22:31.297 "name": "Nvme$subsystem", 00:22:31.297 "trtype": "$TEST_TRANSPORT", 00:22:31.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:31.297 "adrfam": "ipv4", 00:22:31.297 "trsvcid": "$NVMF_PORT", 00:22:31.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:31.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:31.297 "hdgst": ${hdgst:-false}, 00:22:31.297 "ddgst": ${ddgst:-false} 00:22:31.297 }, 00:22:31.297 "method": "bdev_nvme_attach_controller" 00:22:31.297 } 00:22:31.297 EOF 00:22:31.297 )") 00:22:31.297 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:31.297 [2024-10-11 11:59:33.793006] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:22:31.297 [2024-10-11 11:59:33.793099] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:31.297 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:31.297 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:31.297 { 00:22:31.297 "params": { 00:22:31.297 "name": "Nvme$subsystem", 00:22:31.297 "trtype": "$TEST_TRANSPORT", 00:22:31.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:31.297 "adrfam": "ipv4", 00:22:31.297 "trsvcid": "$NVMF_PORT", 00:22:31.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:31.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:31.297 "hdgst": ${hdgst:-false}, 00:22:31.297 "ddgst": ${ddgst:-false} 00:22:31.297 }, 00:22:31.297 "method": "bdev_nvme_attach_controller" 00:22:31.297 } 00:22:31.297 EOF 00:22:31.297 )") 00:22:31.297 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:31.297 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:31.297 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:31.297 { 00:22:31.297 "params": { 00:22:31.297 "name": "Nvme$subsystem", 00:22:31.297 "trtype": "$TEST_TRANSPORT", 00:22:31.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:31.297 "adrfam": "ipv4", 00:22:31.297 "trsvcid": "$NVMF_PORT", 00:22:31.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:31.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:31.297 "hdgst": ${hdgst:-false}, 00:22:31.297 "ddgst": ${ddgst:-false} 00:22:31.297 }, 00:22:31.297 "method": "bdev_nvme_attach_controller" 00:22:31.297 } 00:22:31.297 EOF 00:22:31.297 )") 00:22:31.297 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:31.297 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:31.297 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:31.297 { 00:22:31.297 "params": { 00:22:31.297 "name": "Nvme$subsystem", 00:22:31.297 "trtype": "$TEST_TRANSPORT", 00:22:31.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:31.297 "adrfam": "ipv4", 00:22:31.297 "trsvcid": "$NVMF_PORT", 00:22:31.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:31.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:31.297 "hdgst": ${hdgst:-false}, 00:22:31.297 "ddgst": ${ddgst:-false} 00:22:31.297 }, 00:22:31.297 "method": "bdev_nvme_attach_controller" 00:22:31.297 } 00:22:31.297 EOF 00:22:31.297 )") 00:22:31.297 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:31.297 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:31.297 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:31.297 { 00:22:31.297 "params": { 00:22:31.297 "name": "Nvme$subsystem", 00:22:31.297 "trtype": "$TEST_TRANSPORT", 00:22:31.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:31.297 "adrfam": "ipv4", 
00:22:31.297 "trsvcid": "$NVMF_PORT", 00:22:31.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:31.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:31.297 "hdgst": ${hdgst:-false}, 00:22:31.297 "ddgst": ${ddgst:-false} 00:22:31.297 }, 00:22:31.297 "method": "bdev_nvme_attach_controller" 00:22:31.297 } 00:22:31.297 EOF 00:22:31.297 )") 00:22:31.297 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:31.297 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 00:22:31.297 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:22:31.297 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:22:31.297 "params": { 00:22:31.297 "name": "Nvme1", 00:22:31.297 "trtype": "tcp", 00:22:31.297 "traddr": "10.0.0.2", 00:22:31.297 "adrfam": "ipv4", 00:22:31.297 "trsvcid": "4420", 00:22:31.297 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:31.297 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:31.297 "hdgst": false, 00:22:31.297 "ddgst": false 00:22:31.297 }, 00:22:31.297 "method": "bdev_nvme_attach_controller" 00:22:31.297 },{ 00:22:31.297 "params": { 00:22:31.297 "name": "Nvme2", 00:22:31.297 "trtype": "tcp", 00:22:31.297 "traddr": "10.0.0.2", 00:22:31.297 "adrfam": "ipv4", 00:22:31.297 "trsvcid": "4420", 00:22:31.297 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:31.297 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:31.297 "hdgst": false, 00:22:31.297 "ddgst": false 00:22:31.297 }, 00:22:31.297 "method": "bdev_nvme_attach_controller" 00:22:31.297 },{ 00:22:31.297 "params": { 00:22:31.297 "name": "Nvme3", 00:22:31.297 "trtype": "tcp", 00:22:31.297 "traddr": "10.0.0.2", 00:22:31.297 "adrfam": "ipv4", 00:22:31.297 "trsvcid": "4420", 00:22:31.297 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:31.297 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:31.297 "hdgst": false, 00:22:31.297 "ddgst": false 00:22:31.297 }, 00:22:31.297 "method": "bdev_nvme_attach_controller" 00:22:31.297 },{ 00:22:31.297 "params": { 00:22:31.297 "name": "Nvme4", 00:22:31.297 "trtype": "tcp", 00:22:31.297 "traddr": "10.0.0.2", 00:22:31.297 "adrfam": "ipv4", 00:22:31.297 "trsvcid": "4420", 00:22:31.297 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:31.297 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:31.297 "hdgst": false, 00:22:31.297 "ddgst": false 00:22:31.297 }, 00:22:31.297 "method": "bdev_nvme_attach_controller" 00:22:31.297 },{ 00:22:31.297 "params": { 00:22:31.297 "name": "Nvme5", 00:22:31.297 "trtype": "tcp", 00:22:31.297 "traddr": "10.0.0.2", 00:22:31.297 "adrfam": "ipv4", 00:22:31.297 "trsvcid": "4420", 00:22:31.297 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:31.297 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:31.297 "hdgst": false, 00:22:31.297 "ddgst": false 00:22:31.297 }, 00:22:31.297 "method": "bdev_nvme_attach_controller" 00:22:31.297 },{ 00:22:31.297 "params": { 00:22:31.297 "name": "Nvme6", 00:22:31.298 "trtype": "tcp", 00:22:31.298 "traddr": "10.0.0.2", 00:22:31.298 "adrfam": "ipv4", 00:22:31.298 "trsvcid": "4420", 00:22:31.298 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:31.298 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:31.298 "hdgst": false, 00:22:31.298 "ddgst": false 00:22:31.298 }, 00:22:31.298 "method": "bdev_nvme_attach_controller" 00:22:31.298 },{ 00:22:31.298 "params": { 00:22:31.298 "name": "Nvme7", 00:22:31.298 "trtype": "tcp", 00:22:31.298 "traddr": "10.0.0.2", 00:22:31.298 
"adrfam": "ipv4", 00:22:31.298 "trsvcid": "4420", 00:22:31.298 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:31.298 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:31.298 "hdgst": false, 00:22:31.298 "ddgst": false 00:22:31.298 }, 00:22:31.298 "method": "bdev_nvme_attach_controller" 00:22:31.298 },{ 00:22:31.298 "params": { 00:22:31.298 "name": "Nvme8", 00:22:31.298 "trtype": "tcp", 00:22:31.298 "traddr": "10.0.0.2", 00:22:31.298 "adrfam": "ipv4", 00:22:31.298 "trsvcid": "4420", 00:22:31.298 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:31.298 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:31.298 "hdgst": false, 00:22:31.298 "ddgst": false 00:22:31.298 }, 00:22:31.298 "method": "bdev_nvme_attach_controller" 00:22:31.298 },{ 00:22:31.298 "params": { 00:22:31.298 "name": "Nvme9", 00:22:31.298 "trtype": "tcp", 00:22:31.298 "traddr": "10.0.0.2", 00:22:31.298 "adrfam": "ipv4", 00:22:31.298 "trsvcid": "4420", 00:22:31.298 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:31.298 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:31.298 "hdgst": false, 00:22:31.298 "ddgst": false 00:22:31.298 }, 00:22:31.298 "method": "bdev_nvme_attach_controller" 00:22:31.298 },{ 00:22:31.298 "params": { 00:22:31.298 "name": "Nvme10", 00:22:31.298 "trtype": "tcp", 00:22:31.298 "traddr": "10.0.0.2", 00:22:31.298 "adrfam": "ipv4", 00:22:31.298 "trsvcid": "4420", 00:22:31.298 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:31.298 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:31.298 "hdgst": false, 00:22:31.298 "ddgst": false 00:22:31.298 }, 00:22:31.298 "method": "bdev_nvme_attach_controller" 00:22:31.298 }' 00:22:31.298 [2024-10-11 11:59:33.878259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.298 [2024-10-11 11:59:33.932229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:32.684 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:32.684 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:22:32.684 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:32.684 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.684 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:32.684 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.684 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1993820 00:22:32.684 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:22:32.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1993820 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:32.684 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:22:33.626 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1993552 00:22:33.626 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:33.626 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:33.626 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:22:33.626 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:22:33.626 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:33.626 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:33.626 { 00:22:33.626 "params": { 00:22:33.626 "name": "Nvme$subsystem", 00:22:33.626 "trtype": "$TEST_TRANSPORT", 00:22:33.626 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.626 "adrfam": "ipv4", 00:22:33.626 "trsvcid": "$NVMF_PORT", 00:22:33.626 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.626 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.626 "hdgst": ${hdgst:-false}, 00:22:33.626 "ddgst": ${ddgst:-false} 00:22:33.626 }, 00:22:33.626 "method": "bdev_nvme_attach_controller" 00:22:33.626 } 00:22:33.626 EOF 00:22:33.626 )") 00:22:33.626 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:33.626 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:33.626 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:33.626 { 00:22:33.626 "params": { 00:22:33.626 "name": "Nvme$subsystem", 00:22:33.626 "trtype": "$TEST_TRANSPORT", 00:22:33.626 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.626 "adrfam": "ipv4", 00:22:33.626 "trsvcid": "$NVMF_PORT", 00:22:33.626 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.626 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.626 "hdgst": ${hdgst:-false}, 00:22:33.626 "ddgst": ${ddgst:-false} 00:22:33.626 }, 00:22:33.626 "method": "bdev_nvme_attach_controller" 00:22:33.626 } 00:22:33.626 EOF 00:22:33.626 )") 00:22:33.626 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:33.626 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:33.626 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:33.626 { 00:22:33.626 "params": { 00:22:33.626 "name": "Nvme$subsystem", 00:22:33.626 "trtype": "$TEST_TRANSPORT", 00:22:33.626 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.626 "adrfam": "ipv4", 00:22:33.626 "trsvcid": "$NVMF_PORT", 00:22:33.626 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.626 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.626 "hdgst": ${hdgst:-false}, 00:22:33.626 "ddgst": ${ddgst:-false} 00:22:33.626 }, 00:22:33.626 "method": "bdev_nvme_attach_controller" 00:22:33.626 } 00:22:33.626 EOF 00:22:33.626 )") 00:22:33.626 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:33.626 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:33.626 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:33.626 { 00:22:33.626 "params": { 00:22:33.626 "name": "Nvme$subsystem", 00:22:33.626 "trtype": "$TEST_TRANSPORT", 00:22:33.626 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.626 "adrfam": "ipv4", 00:22:33.626 "trsvcid": "$NVMF_PORT", 00:22:33.626 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.626 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.626 "hdgst": ${hdgst:-false}, 00:22:33.626 "ddgst": ${ddgst:-false} 00:22:33.626 }, 00:22:33.627 "method": "bdev_nvme_attach_controller" 00:22:33.627 } 00:22:33.627 EOF 00:22:33.627 )") 00:22:33.627 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:33.888 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:33.888 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:33.888 { 00:22:33.888 "params": { 00:22:33.888 "name": "Nvme$subsystem", 00:22:33.888 "trtype": "$TEST_TRANSPORT", 00:22:33.888 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.888 "adrfam": "ipv4", 00:22:33.888 "trsvcid": "$NVMF_PORT", 00:22:33.888 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.888 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.888 "hdgst": ${hdgst:-false}, 00:22:33.888 "ddgst": ${ddgst:-false} 00:22:33.888 }, 00:22:33.888 "method": "bdev_nvme_attach_controller" 00:22:33.888 } 00:22:33.888 EOF 00:22:33.888 )") 00:22:33.888 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:33.888 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:33.888 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:33.888 { 00:22:33.888 "params": { 00:22:33.888 "name": "Nvme$subsystem", 00:22:33.888 "trtype": "$TEST_TRANSPORT", 00:22:33.888 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.888 "adrfam": "ipv4", 00:22:33.888 "trsvcid": "$NVMF_PORT", 00:22:33.888 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.888 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.888 "hdgst": ${hdgst:-false}, 00:22:33.888 "ddgst": ${ddgst:-false} 00:22:33.888 }, 00:22:33.888 "method": "bdev_nvme_attach_controller" 00:22:33.888 } 00:22:33.888 EOF 00:22:33.888 )") 00:22:33.888 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:33.888 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:33.888 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:33.888 { 00:22:33.888 "params": { 00:22:33.888 "name": "Nvme$subsystem", 00:22:33.888 "trtype": "$TEST_TRANSPORT", 00:22:33.888 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.888 "adrfam": "ipv4", 00:22:33.888 "trsvcid": "$NVMF_PORT", 00:22:33.888 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.888 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.888 "hdgst": ${hdgst:-false}, 00:22:33.888 "ddgst": ${ddgst:-false} 00:22:33.888 }, 00:22:33.888 "method": "bdev_nvme_attach_controller" 00:22:33.888 } 00:22:33.888 EOF 00:22:33.888 )") 00:22:33.888 [2024-10-11 11:59:36.348058] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:22:33.888 [2024-10-11 11:59:36.348118] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1994456 ] 00:22:33.888 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:33.888 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:33.888 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:33.888 { 00:22:33.888 "params": { 00:22:33.888 "name": "Nvme$subsystem", 00:22:33.888 "trtype": "$TEST_TRANSPORT", 00:22:33.888 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.888 "adrfam": "ipv4", 00:22:33.888 "trsvcid": "$NVMF_PORT", 00:22:33.888 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.888 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.888 "hdgst": ${hdgst:-false}, 00:22:33.888 "ddgst": ${ddgst:-false} 00:22:33.888 }, 00:22:33.888 "method": "bdev_nvme_attach_controller" 00:22:33.888 } 00:22:33.888 EOF 00:22:33.888 )") 00:22:33.888 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:33.888 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:33.888 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:33.888 { 00:22:33.888 "params": { 00:22:33.888 "name": "Nvme$subsystem", 00:22:33.888 "trtype": "$TEST_TRANSPORT", 00:22:33.888 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.888 "adrfam": "ipv4", 00:22:33.888 "trsvcid": "$NVMF_PORT", 00:22:33.888 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.888 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.888 "hdgst": ${hdgst:-false}, 00:22:33.888 "ddgst": ${ddgst:-false} 00:22:33.888 }, 00:22:33.888 "method": "bdev_nvme_attach_controller" 00:22:33.888 } 00:22:33.888 EOF 00:22:33.888 )") 00:22:33.888 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:33.888 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:33.888 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:33.888 { 00:22:33.888 "params": { 00:22:33.888 "name": "Nvme$subsystem", 00:22:33.888 "trtype": "$TEST_TRANSPORT", 00:22:33.888 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.888 "adrfam": "ipv4", 00:22:33.888 "trsvcid": "$NVMF_PORT", 00:22:33.888 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.888 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.888 "hdgst": ${hdgst:-false}, 00:22:33.888 "ddgst": ${ddgst:-false} 00:22:33.888 }, 00:22:33.888 "method": "bdev_nvme_attach_controller" 00:22:33.888 } 00:22:33.888 EOF 00:22:33.888 )") 00:22:33.888 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:33.888 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 
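The jq ., IFS=, and printf entries around this point are the tail of the same helper: the fragments are comma-joined and printed as the final bdevperf configuration. A hedged sketch of that hand-off follows; the outer subsystems/bdev wrapper and the emit_target_json name are assumptions made for the sake of a runnable example, while the bdevperf flags (-q 64 -o 65536 -w verify -t 1) and the /dev/fd process substitution match the command captured earlier in this trace.

# Sketch of the join step: the fragments are comma-joined under IFS=,
# and pretty-printed with jq before bdevperf reads them. The outer
# "subsystems"/"bdev" wrapper is an assumption for a runnable example;
# this excerpt of the trace only shows the joined fragments themselves.
emit_target_json() {            # illustrative name, not the real helper
    local IFS=,
    local config=("$@")         # the heredoc fragments built in the loop above
    jq . <<JSON
{
  "subsystems": [
    { "subsystem": "bdev", "config": [ ${config[*]} ] }
  ]
}
JSON
}

# bdevperf consumes the result through a process substitution, which is why
# the trace shows --json /dev/fd/62; flags match the command captured above:
#   bdevperf --json <(emit_target_json "${config[@]}") -q 64 -o 65536 -w verify -t 1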
00:22:33.888 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:22:33.888 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:22:33.888 "params": { 00:22:33.888 "name": "Nvme1", 00:22:33.888 "trtype": "tcp", 00:22:33.888 "traddr": "10.0.0.2", 00:22:33.888 "adrfam": "ipv4", 00:22:33.888 "trsvcid": "4420", 00:22:33.888 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:33.888 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:33.888 "hdgst": false, 00:22:33.888 "ddgst": false 00:22:33.888 }, 00:22:33.888 "method": "bdev_nvme_attach_controller" 00:22:33.888 },{ 00:22:33.888 "params": { 00:22:33.888 "name": "Nvme2", 00:22:33.888 "trtype": "tcp", 00:22:33.888 "traddr": "10.0.0.2", 00:22:33.888 "adrfam": "ipv4", 00:22:33.888 "trsvcid": "4420", 00:22:33.888 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:33.888 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:33.888 "hdgst": false, 00:22:33.888 "ddgst": false 00:22:33.888 }, 00:22:33.888 "method": "bdev_nvme_attach_controller" 00:22:33.888 },{ 00:22:33.888 "params": { 00:22:33.888 "name": "Nvme3", 00:22:33.888 "trtype": "tcp", 00:22:33.888 "traddr": "10.0.0.2", 00:22:33.888 "adrfam": "ipv4", 00:22:33.888 "trsvcid": "4420", 00:22:33.888 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:33.888 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:33.888 "hdgst": false, 00:22:33.888 "ddgst": false 00:22:33.888 }, 00:22:33.888 "method": "bdev_nvme_attach_controller" 00:22:33.888 },{ 00:22:33.888 "params": { 00:22:33.888 "name": "Nvme4", 00:22:33.888 "trtype": "tcp", 00:22:33.888 "traddr": "10.0.0.2", 00:22:33.888 "adrfam": "ipv4", 00:22:33.888 "trsvcid": "4420", 00:22:33.888 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:33.888 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:33.888 "hdgst": false, 00:22:33.888 "ddgst": false 00:22:33.888 }, 00:22:33.888 "method": "bdev_nvme_attach_controller" 00:22:33.888 },{ 00:22:33.888 "params": { 00:22:33.888 "name": "Nvme5", 00:22:33.888 "trtype": "tcp", 00:22:33.888 "traddr": "10.0.0.2", 00:22:33.888 "adrfam": "ipv4", 00:22:33.888 "trsvcid": "4420", 00:22:33.888 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:33.888 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:33.888 "hdgst": false, 00:22:33.888 "ddgst": false 00:22:33.888 }, 00:22:33.888 "method": "bdev_nvme_attach_controller" 00:22:33.888 },{ 00:22:33.888 "params": { 00:22:33.888 "name": "Nvme6", 00:22:33.888 "trtype": "tcp", 00:22:33.888 "traddr": "10.0.0.2", 00:22:33.888 "adrfam": "ipv4", 00:22:33.888 "trsvcid": "4420", 00:22:33.888 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:33.888 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:33.888 "hdgst": false, 00:22:33.888 "ddgst": false 00:22:33.888 }, 00:22:33.888 "method": "bdev_nvme_attach_controller" 00:22:33.888 },{ 00:22:33.888 "params": { 00:22:33.888 "name": "Nvme7", 00:22:33.888 "trtype": "tcp", 00:22:33.888 "traddr": "10.0.0.2", 00:22:33.888 "adrfam": "ipv4", 00:22:33.888 "trsvcid": "4420", 00:22:33.888 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:33.888 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:33.888 "hdgst": false, 00:22:33.888 "ddgst": false 00:22:33.888 }, 00:22:33.888 "method": "bdev_nvme_attach_controller" 00:22:33.888 },{ 00:22:33.888 "params": { 00:22:33.888 "name": "Nvme8", 00:22:33.888 "trtype": "tcp", 00:22:33.888 "traddr": "10.0.0.2", 00:22:33.888 "adrfam": "ipv4", 00:22:33.889 "trsvcid": "4420", 00:22:33.889 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:33.889 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:22:33.889 "hdgst": false, 00:22:33.889 "ddgst": false 00:22:33.889 }, 00:22:33.889 "method": "bdev_nvme_attach_controller" 00:22:33.889 },{ 00:22:33.889 "params": { 00:22:33.889 "name": "Nvme9", 00:22:33.889 "trtype": "tcp", 00:22:33.889 "traddr": "10.0.0.2", 00:22:33.889 "adrfam": "ipv4", 00:22:33.889 "trsvcid": "4420", 00:22:33.889 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:33.889 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:33.889 "hdgst": false, 00:22:33.889 "ddgst": false 00:22:33.889 }, 00:22:33.889 "method": "bdev_nvme_attach_controller" 00:22:33.889 },{ 00:22:33.889 "params": { 00:22:33.889 "name": "Nvme10", 00:22:33.889 "trtype": "tcp", 00:22:33.889 "traddr": "10.0.0.2", 00:22:33.889 "adrfam": "ipv4", 00:22:33.889 "trsvcid": "4420", 00:22:33.889 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:33.889 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:33.889 "hdgst": false, 00:22:33.889 "ddgst": false 00:22:33.889 }, 00:22:33.889 "method": "bdev_nvme_attach_controller" 00:22:33.889 }' 00:22:33.889 [2024-10-11 11:59:36.428084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.889 [2024-10-11 11:59:36.463959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:35.274 Running I/O for 1 seconds... 00:22:36.215 1850.00 IOPS, 115.62 MiB/s 00:22:36.215 Latency(us) 00:22:36.215 [2024-10-11T09:59:38.918Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:36.215 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:36.215 Verification LBA range: start 0x0 length 0x400 00:22:36.215 Nvme1n1 : 1.11 230.54 14.41 0.00 0.00 274829.44 19770.03 260396.37 00:22:36.215 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:36.215 Verification LBA range: start 0x0 length 0x400 00:22:36.215 Nvme2n1 : 1.18 217.51 13.59 0.00 0.00 285112.75 21626.88 253405.87 00:22:36.215 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:36.215 Verification LBA range: start 0x0 length 0x400 00:22:36.215 Nvme3n1 : 1.11 231.60 14.48 0.00 0.00 263808.00 36044.80 256901.12 00:22:36.215 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:36.215 Verification LBA range: start 0x0 length 0x400 00:22:36.215 Nvme4n1 : 1.13 227.30 14.21 0.00 0.00 263871.15 11960.32 251658.24 00:22:36.215 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:36.215 Verification LBA range: start 0x0 length 0x400 00:22:36.215 Nvme5n1 : 1.19 215.43 13.46 0.00 0.00 274572.37 19442.35 274377.39 00:22:36.215 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:36.215 Verification LBA range: start 0x0 length 0x400 00:22:36.215 Nvme6n1 : 1.12 228.32 14.27 0.00 0.00 253174.19 18896.21 228939.09 00:22:36.215 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:36.215 Verification LBA range: start 0x0 length 0x400 00:22:36.215 Nvme7n1 : 1.18 270.28 16.89 0.00 0.00 211217.07 11468.80 251658.24 00:22:36.215 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:36.215 Verification LBA range: start 0x0 length 0x400 00:22:36.215 Nvme8n1 : 1.19 271.77 16.99 0.00 0.00 206296.81 1556.48 246415.36 00:22:36.215 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:36.215 Verification LBA range: start 0x0 length 0x400 00:22:36.215 Nvme9n1 : 1.20 265.77 16.61 0.00 0.00 207305.81 11741.87 255153.49 00:22:36.215 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:22:36.215 Verification LBA range: start 0x0 length 0x400 00:22:36.215 Nvme10n1 : 1.20 267.13 16.70 0.00 0.00 202370.82 12561.07 272629.76 00:22:36.215 [2024-10-11T09:59:38.918Z] =================================================================================================================== 00:22:36.215 [2024-10-11T09:59:38.918Z] Total : 2425.67 151.60 0.00 0.00 240801.54 1556.48 274377.39 00:22:36.476 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:22:36.476 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:36.476 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:36.476 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:36.476 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:36.476 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:36.476 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:22:36.476 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:36.476 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:22:36.476 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:36.476 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:36.476 rmmod nvme_tcp 00:22:36.476 rmmod nvme_fabrics 00:22:36.476 rmmod nvme_keyring 00:22:36.476 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:36.476 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:22:36.476 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:22:36.476 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@515 -- # '[' -n 1993552 ']' 00:22:36.476 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # killprocess 1993552 00:22:36.476 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 1993552 ']' 00:22:36.476 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 1993552 00:22:36.476 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:22:36.476 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:36.476 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1993552 00:22:36.476 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:36.476 11:59:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:36.476 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1993552' 00:22:36.476 killing process with pid 1993552 00:22:36.476 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 1993552 00:22:36.476 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 1993552 00:22:36.813 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:36.813 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:36.813 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:36.813 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:22:36.813 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-save 00:22:36.813 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-restore 00:22:36.813 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:36.813 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:36.813 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:36.813 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:36.813 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:36.813 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:39.360 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:39.360 00:22:39.360 real 0m17.067s 00:22:39.360 user 0m33.835s 00:22:39.360 sys 0m7.148s 00:22:39.360 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:39.360 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:39.360 ************************************ 00:22:39.360 END TEST nvmf_shutdown_tc1 00:22:39.360 ************************************ 00:22:39.360 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:39.360 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:39.360 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:39.360 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:39.360 ************************************ 00:22:39.360 START TEST nvmf_shutdown_tc2 00:22:39.360 ************************************ 00:22:39.360 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # 
nvmf_shutdown_tc2 00:22:39.360 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:22:39.360 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:39.360 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:39.360 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:39.360 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:39.360 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:39.360 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:39.360 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:39.360 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:39.360 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:39.360 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:22:39.361 11:59:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:39.361 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.361 11:59:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:39.361 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:39.361 Found net devices under 0000:31:00.0: cvl_0_0 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:39.361 11:59:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:39.361 Found net devices under 0000:31:00.1: cvl_0_1 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # is_hw=yes 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:39.361 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:39.361 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:39.361 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.615 ms 00:22:39.361 00:22:39.361 --- 10.0.0.2 ping statistics --- 00:22:39.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.362 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms 00:22:39.362 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:39.362 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:39.362 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:22:39.362 00:22:39.362 --- 10.0.0.1 ping statistics --- 00:22:39.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.362 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:22:39.362 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:39.362 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # return 0 00:22:39.362 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:39.362 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:39.362 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:39.362 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:39.362 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:39.362 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:39.362 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:39.362 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:39.362 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:39.362 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:22:39.362 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:39.362 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # nvmfpid=1995571 00:22:39.362 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # waitforlisten 1995571 00:22:39.362 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:39.362 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1995571 ']' 00:22:39.362 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:39.362 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:39.362 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:39.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:39.362 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:39.362 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:39.362 [2024-10-11 11:59:41.994792] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:22:39.362 [2024-10-11 11:59:41.994858] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:39.622 [2024-10-11 11:59:42.083068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:39.623 [2024-10-11 11:59:42.118274] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:39.623 [2024-10-11 11:59:42.118301] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:39.623 [2024-10-11 11:59:42.118307] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:39.623 [2024-10-11 11:59:42.118312] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:39.623 [2024-10-11 11:59:42.118317] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
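With the tc2 target now running inside the cvl_0_0_ns_spdk namespace and waiting on /var/tmp/spdk.sock, the trace that follows creates the TCP transport and ten Malloc-backed subsystems listening on 10.0.0.2:4420. A condensed sketch of that RPC sequence is below; the transport options, listener address, and NQN pattern are taken from the log, while the rpc.py invocation style, the explicit loop, and the Malloc sizes are illustrative assumptions (the test drives the same calls through its rpc_cmd wrapper and rpcs.txt batch file).

#!/usr/bin/env bash
# Condensed sketch of the tc2 bring-up traced below: create the TCP transport,
# then one Malloc-backed subsystem per index with a listener on 10.0.0.2:4420.
# The rpc path and the explicit loop are illustrative.
rpc="./scripts/rpc.py -s /var/tmp/spdk.sock"

$rpc nvmf_create_transport -t tcp -o -u 8192        # options as seen in the trace

for i in {1..10}; do
    $rpc bdev_malloc_create -b "Malloc$i" 64 512     # size/block size are assumptions
    $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a
    $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done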
00:22:39.623 [2024-10-11 11:59:42.119842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:39.623 [2024-10-11 11:59:42.119994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:39.623 [2024-10-11 11:59:42.120127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:39.623 [2024-10-11 11:59:42.120322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:40.194 11:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:40.194 11:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:22:40.194 11:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:40.194 11:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:40.194 11:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:40.194 11:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:40.194 11:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:40.194 11:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.194 11:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:40.194 [2024-10-11 11:59:42.831113] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:40.194 11:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.194 11:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:40.194 11:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:40.194 11:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:40.194 11:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:40.194 11:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:40.194 11:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:40.194 11:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:40.194 11:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:40.194 11:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:40.194 11:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:40.194 11:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:40.194 11:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:22:40.194 11:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:40.194 11:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:40.194 11:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:40.194 11:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:40.194 11:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:40.194 11:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:40.194 11:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:40.194 11:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:40.194 11:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:40.194 11:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:40.194 11:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:40.194 11:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:40.194 11:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:40.194 11:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:40.194 11:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.194 11:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:40.455 Malloc1 00:22:40.455 [2024-10-11 11:59:42.939999] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:40.455 Malloc2 00:22:40.455 Malloc3 00:22:40.455 Malloc4 00:22:40.455 Malloc5 00:22:40.455 Malloc6 00:22:40.455 Malloc7 00:22:40.715 Malloc8 00:22:40.715 Malloc9 00:22:40.715 Malloc10 00:22:40.715 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.715 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:40.715 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:40.715 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:40.715 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1995959 00:22:40.715 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1995959 /var/tmp/bdevperf.sock 00:22:40.716 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1995959 ']' 00:22:40.716 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:40.716 11:59:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:40.716 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:40.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:40.716 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:40.716 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:40.716 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:40.716 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:40.716 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # config=() 00:22:40.716 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # local subsystem config 00:22:40.716 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:40.716 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:40.716 { 00:22:40.716 "params": { 00:22:40.716 "name": "Nvme$subsystem", 00:22:40.716 "trtype": "$TEST_TRANSPORT", 00:22:40.716 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.716 "adrfam": "ipv4", 00:22:40.716 "trsvcid": "$NVMF_PORT", 00:22:40.716 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.716 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.716 "hdgst": ${hdgst:-false}, 00:22:40.716 "ddgst": ${ddgst:-false} 00:22:40.716 }, 00:22:40.716 "method": "bdev_nvme_attach_controller" 00:22:40.716 } 00:22:40.716 EOF 00:22:40.716 )") 00:22:40.716 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:40.716 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:40.716 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:40.716 { 00:22:40.716 "params": { 00:22:40.716 "name": "Nvme$subsystem", 00:22:40.716 "trtype": "$TEST_TRANSPORT", 00:22:40.716 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.716 "adrfam": "ipv4", 00:22:40.716 "trsvcid": "$NVMF_PORT", 00:22:40.716 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.716 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.716 "hdgst": ${hdgst:-false}, 00:22:40.716 "ddgst": ${ddgst:-false} 00:22:40.716 }, 00:22:40.716 "method": "bdev_nvme_attach_controller" 00:22:40.716 } 00:22:40.716 EOF 00:22:40.716 )") 00:22:40.716 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:40.716 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:40.716 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:40.716 { 00:22:40.716 "params": { 00:22:40.716 
"name": "Nvme$subsystem", 00:22:40.716 "trtype": "$TEST_TRANSPORT", 00:22:40.716 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.716 "adrfam": "ipv4", 00:22:40.716 "trsvcid": "$NVMF_PORT", 00:22:40.716 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.716 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.716 "hdgst": ${hdgst:-false}, 00:22:40.716 "ddgst": ${ddgst:-false} 00:22:40.716 }, 00:22:40.716 "method": "bdev_nvme_attach_controller" 00:22:40.716 } 00:22:40.716 EOF 00:22:40.716 )") 00:22:40.716 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:40.716 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:40.716 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:40.716 { 00:22:40.716 "params": { 00:22:40.716 "name": "Nvme$subsystem", 00:22:40.716 "trtype": "$TEST_TRANSPORT", 00:22:40.716 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.716 "adrfam": "ipv4", 00:22:40.716 "trsvcid": "$NVMF_PORT", 00:22:40.716 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.716 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.716 "hdgst": ${hdgst:-false}, 00:22:40.716 "ddgst": ${ddgst:-false} 00:22:40.716 }, 00:22:40.716 "method": "bdev_nvme_attach_controller" 00:22:40.716 } 00:22:40.716 EOF 00:22:40.716 )") 00:22:40.716 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:40.716 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:40.716 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:40.716 { 00:22:40.716 "params": { 00:22:40.716 "name": "Nvme$subsystem", 00:22:40.716 "trtype": "$TEST_TRANSPORT", 00:22:40.716 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.716 "adrfam": "ipv4", 00:22:40.716 "trsvcid": "$NVMF_PORT", 00:22:40.716 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.716 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.716 "hdgst": ${hdgst:-false}, 00:22:40.716 "ddgst": ${ddgst:-false} 00:22:40.716 }, 00:22:40.716 "method": "bdev_nvme_attach_controller" 00:22:40.716 } 00:22:40.716 EOF 00:22:40.716 )") 00:22:40.716 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:40.716 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:40.716 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:40.716 { 00:22:40.716 "params": { 00:22:40.716 "name": "Nvme$subsystem", 00:22:40.716 "trtype": "$TEST_TRANSPORT", 00:22:40.716 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.716 "adrfam": "ipv4", 00:22:40.716 "trsvcid": "$NVMF_PORT", 00:22:40.716 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.716 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.716 "hdgst": ${hdgst:-false}, 00:22:40.716 "ddgst": ${ddgst:-false} 00:22:40.716 }, 00:22:40.716 "method": "bdev_nvme_attach_controller" 00:22:40.716 } 00:22:40.716 EOF 00:22:40.716 )") 00:22:40.716 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:40.716 [2024-10-11 11:59:43.387253] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:22:40.716 [2024-10-11 11:59:43.387310] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1995959 ] 00:22:40.716 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:40.716 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:40.716 { 00:22:40.716 "params": { 00:22:40.716 "name": "Nvme$subsystem", 00:22:40.716 "trtype": "$TEST_TRANSPORT", 00:22:40.716 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.716 "adrfam": "ipv4", 00:22:40.716 "trsvcid": "$NVMF_PORT", 00:22:40.716 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.716 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.716 "hdgst": ${hdgst:-false}, 00:22:40.716 "ddgst": ${ddgst:-false} 00:22:40.716 }, 00:22:40.716 "method": "bdev_nvme_attach_controller" 00:22:40.716 } 00:22:40.716 EOF 00:22:40.716 )") 00:22:40.716 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:40.716 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:40.716 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:40.716 { 00:22:40.716 "params": { 00:22:40.716 "name": "Nvme$subsystem", 00:22:40.716 "trtype": "$TEST_TRANSPORT", 00:22:40.716 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.716 "adrfam": "ipv4", 00:22:40.716 "trsvcid": "$NVMF_PORT", 00:22:40.716 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.716 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.716 "hdgst": ${hdgst:-false}, 00:22:40.716 "ddgst": ${ddgst:-false} 00:22:40.716 }, 00:22:40.716 "method": "bdev_nvme_attach_controller" 00:22:40.716 } 00:22:40.716 EOF 00:22:40.716 )") 00:22:40.716 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:40.716 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:40.716 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:40.716 { 00:22:40.716 "params": { 00:22:40.716 "name": "Nvme$subsystem", 00:22:40.716 "trtype": "$TEST_TRANSPORT", 00:22:40.716 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.716 "adrfam": "ipv4", 00:22:40.716 "trsvcid": "$NVMF_PORT", 00:22:40.716 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.716 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.716 "hdgst": ${hdgst:-false}, 00:22:40.716 "ddgst": ${ddgst:-false} 00:22:40.716 }, 00:22:40.716 "method": "bdev_nvme_attach_controller" 00:22:40.716 } 00:22:40.716 EOF 00:22:40.716 )") 00:22:40.716 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:40.716 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:40.716 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:40.716 { 00:22:40.716 "params": { 00:22:40.716 "name": "Nvme$subsystem", 00:22:40.716 "trtype": "$TEST_TRANSPORT", 00:22:40.716 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.716 
"adrfam": "ipv4", 00:22:40.716 "trsvcid": "$NVMF_PORT", 00:22:40.717 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.717 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.717 "hdgst": ${hdgst:-false}, 00:22:40.717 "ddgst": ${ddgst:-false} 00:22:40.717 }, 00:22:40.717 "method": "bdev_nvme_attach_controller" 00:22:40.717 } 00:22:40.717 EOF 00:22:40.717 )") 00:22:40.717 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:40.986 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # jq . 00:22:40.986 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@583 -- # IFS=, 00:22:40.986 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:22:40.986 "params": { 00:22:40.986 "name": "Nvme1", 00:22:40.986 "trtype": "tcp", 00:22:40.986 "traddr": "10.0.0.2", 00:22:40.986 "adrfam": "ipv4", 00:22:40.986 "trsvcid": "4420", 00:22:40.986 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:40.986 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:40.986 "hdgst": false, 00:22:40.986 "ddgst": false 00:22:40.986 }, 00:22:40.986 "method": "bdev_nvme_attach_controller" 00:22:40.986 },{ 00:22:40.986 "params": { 00:22:40.986 "name": "Nvme2", 00:22:40.986 "trtype": "tcp", 00:22:40.986 "traddr": "10.0.0.2", 00:22:40.986 "adrfam": "ipv4", 00:22:40.986 "trsvcid": "4420", 00:22:40.987 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:40.987 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:40.987 "hdgst": false, 00:22:40.987 "ddgst": false 00:22:40.987 }, 00:22:40.987 "method": "bdev_nvme_attach_controller" 00:22:40.987 },{ 00:22:40.987 "params": { 00:22:40.987 "name": "Nvme3", 00:22:40.987 "trtype": "tcp", 00:22:40.987 "traddr": "10.0.0.2", 00:22:40.987 "adrfam": "ipv4", 00:22:40.987 "trsvcid": "4420", 00:22:40.987 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:40.987 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:40.987 "hdgst": false, 00:22:40.987 "ddgst": false 00:22:40.987 }, 00:22:40.987 "method": "bdev_nvme_attach_controller" 00:22:40.987 },{ 00:22:40.987 "params": { 00:22:40.987 "name": "Nvme4", 00:22:40.987 "trtype": "tcp", 00:22:40.987 "traddr": "10.0.0.2", 00:22:40.987 "adrfam": "ipv4", 00:22:40.987 "trsvcid": "4420", 00:22:40.987 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:40.987 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:40.987 "hdgst": false, 00:22:40.987 "ddgst": false 00:22:40.987 }, 00:22:40.987 "method": "bdev_nvme_attach_controller" 00:22:40.987 },{ 00:22:40.987 "params": { 00:22:40.987 "name": "Nvme5", 00:22:40.987 "trtype": "tcp", 00:22:40.987 "traddr": "10.0.0.2", 00:22:40.987 "adrfam": "ipv4", 00:22:40.987 "trsvcid": "4420", 00:22:40.987 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:40.987 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:40.987 "hdgst": false, 00:22:40.987 "ddgst": false 00:22:40.988 }, 00:22:40.988 "method": "bdev_nvme_attach_controller" 00:22:40.988 },{ 00:22:40.988 "params": { 00:22:40.988 "name": "Nvme6", 00:22:40.988 "trtype": "tcp", 00:22:40.988 "traddr": "10.0.0.2", 00:22:40.988 "adrfam": "ipv4", 00:22:40.988 "trsvcid": "4420", 00:22:40.988 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:40.988 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:40.988 "hdgst": false, 00:22:40.988 "ddgst": false 00:22:40.988 }, 00:22:40.988 "method": "bdev_nvme_attach_controller" 00:22:40.988 },{ 00:22:40.988 "params": { 00:22:40.988 "name": "Nvme7", 00:22:40.988 "trtype": "tcp", 00:22:40.988 "traddr": "10.0.0.2", 
00:22:40.988 "adrfam": "ipv4", 00:22:40.988 "trsvcid": "4420", 00:22:40.988 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:40.988 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:40.988 "hdgst": false, 00:22:40.988 "ddgst": false 00:22:40.988 }, 00:22:40.988 "method": "bdev_nvme_attach_controller" 00:22:40.988 },{ 00:22:40.988 "params": { 00:22:40.988 "name": "Nvme8", 00:22:40.988 "trtype": "tcp", 00:22:40.988 "traddr": "10.0.0.2", 00:22:40.988 "adrfam": "ipv4", 00:22:40.988 "trsvcid": "4420", 00:22:40.988 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:40.988 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:40.988 "hdgst": false, 00:22:40.988 "ddgst": false 00:22:40.988 }, 00:22:40.988 "method": "bdev_nvme_attach_controller" 00:22:40.988 },{ 00:22:40.989 "params": { 00:22:40.989 "name": "Nvme9", 00:22:40.989 "trtype": "tcp", 00:22:40.989 "traddr": "10.0.0.2", 00:22:40.989 "adrfam": "ipv4", 00:22:40.989 "trsvcid": "4420", 00:22:40.989 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:40.989 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:40.989 "hdgst": false, 00:22:40.989 "ddgst": false 00:22:40.989 }, 00:22:40.989 "method": "bdev_nvme_attach_controller" 00:22:40.989 },{ 00:22:40.989 "params": { 00:22:40.989 "name": "Nvme10", 00:22:40.989 "trtype": "tcp", 00:22:40.989 "traddr": "10.0.0.2", 00:22:40.989 "adrfam": "ipv4", 00:22:40.989 "trsvcid": "4420", 00:22:40.989 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:40.989 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:40.989 "hdgst": false, 00:22:40.989 "ddgst": false 00:22:40.989 }, 00:22:40.989 "method": "bdev_nvme_attach_controller" 00:22:40.989 }' 00:22:40.991 [2024-10-11 11:59:43.467389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.991 [2024-10-11 11:59:43.504191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:42.376 Running I/O for 10 seconds... 
00:22:42.376 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:42.376 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:22:42.376 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:42.376 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.376 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:42.376 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.376 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:42.376 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:42.376 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:42.376 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:22:42.376 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:22:42.376 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:42.376 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:42.376 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:42.376 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:42.376 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.376 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:42.376 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.636 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:42.636 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:42.636 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:42.896 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:42.896 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:42.896 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:42.896 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:42.896 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.896 11:59:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:42.896 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.897 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:42.897 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:42.897 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:43.157 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:43.157 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:43.157 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:43.157 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:43.157 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.157 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:43.157 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.157 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:43.157 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:43.157 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:22:43.157 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:22:43.157 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:22:43.157 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1995959 00:22:43.157 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1995959 ']' 00:22:43.157 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1995959 00:22:43.157 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:22:43.157 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:43.157 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1995959 00:22:43.157 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:43.157 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:43.157 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1995959' 00:22:43.157 killing process with pid 1995959 00:22:43.157 11:59:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1995959 00:22:43.157 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1995959 00:22:43.157 Received shutdown signal, test time was about 0.988959 seconds 00:22:43.157 00:22:43.157 Latency(us) 00:22:43.157 [2024-10-11T09:59:45.860Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:43.157 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:43.157 Verification LBA range: start 0x0 length 0x400 00:22:43.157 Nvme1n1 : 0.96 200.28 12.52 0.00 0.00 315522.28 17039.36 270882.13 00:22:43.157 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:43.157 Verification LBA range: start 0x0 length 0x400 00:22:43.157 Nvme2n1 : 0.98 262.17 16.39 0.00 0.00 236573.01 19005.44 241172.48 00:22:43.157 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:43.157 Verification LBA range: start 0x0 length 0x400 00:22:43.157 Nvme3n1 : 0.97 263.78 16.49 0.00 0.00 230325.01 11359.57 251658.24 00:22:43.157 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:43.157 Verification LBA range: start 0x0 length 0x400 00:22:43.158 Nvme4n1 : 0.96 265.42 16.59 0.00 0.00 223995.95 13544.11 244667.73 00:22:43.158 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:43.158 Verification LBA range: start 0x0 length 0x400 00:22:43.158 Nvme5n1 : 0.98 260.67 16.29 0.00 0.00 223760.00 14964.05 253405.87 00:22:43.158 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:43.158 Verification LBA range: start 0x0 length 0x400 00:22:43.158 Nvme6n1 : 0.96 200.52 12.53 0.00 0.00 283496.39 15400.96 244667.73 00:22:43.158 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:43.158 Verification LBA range: start 0x0 length 0x400 00:22:43.158 Nvme7n1 : 0.98 262.92 16.43 0.00 0.00 211779.51 3877.55 241172.48 00:22:43.158 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:43.158 Verification LBA range: start 0x0 length 0x400 00:22:43.158 Nvme8n1 : 0.98 265.31 16.58 0.00 0.00 205243.13 4396.37 227191.47 00:22:43.158 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:43.158 Verification LBA range: start 0x0 length 0x400 00:22:43.158 Nvme9n1 : 0.99 259.09 16.19 0.00 0.00 206334.29 14745.60 246415.36 00:22:43.158 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:43.158 Verification LBA range: start 0x0 length 0x400 00:22:43.158 Nvme10n1 : 0.97 198.41 12.40 0.00 0.00 262198.04 19770.03 265639.25 00:22:43.158 [2024-10-11T09:59:45.861Z] =================================================================================================================== 00:22:43.158 [2024-10-11T09:59:45.861Z] Total : 2438.56 152.41 0.00 0.00 236024.60 3877.55 270882.13 00:22:43.418 11:59:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:22:44.359 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 1995571 00:22:44.359 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:22:44.359 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:44.359 11:59:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:44.359 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:44.359 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:44.359 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:44.359 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:22:44.359 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:44.359 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:22:44.359 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:44.359 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:44.359 rmmod nvme_tcp 00:22:44.359 rmmod nvme_fabrics 00:22:44.359 rmmod nvme_keyring 00:22:44.359 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:44.359 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:22:44.359 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:22:44.359 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@515 -- # '[' -n 1995571 ']' 00:22:44.359 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # killprocess 1995571 00:22:44.359 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1995571 ']' 00:22:44.359 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1995571 00:22:44.359 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:22:44.359 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:44.359 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1995571 00:22:44.619 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:44.619 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:44.619 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1995571' 00:22:44.619 killing process with pid 1995571 00:22:44.619 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1995571 00:22:44.619 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1995571 00:22:44.880 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:44.880 11:59:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:44.880 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:44.880 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:22:44.880 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:44.880 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-save 00:22:44.880 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-restore 00:22:44.880 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:44.880 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:44.880 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:44.880 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:44.880 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.791 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:46.791 00:22:46.791 real 0m7.875s 00:22:46.791 user 0m23.665s 00:22:46.791 sys 0m1.303s 00:22:46.791 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:46.791 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:46.791 ************************************ 00:22:46.791 END TEST nvmf_shutdown_tc2 00:22:46.791 ************************************ 00:22:46.791 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:46.791 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:46.791 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:46.791 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:47.052 ************************************ 00:22:47.052 START TEST nvmf_shutdown_tc3 00:22:47.052 ************************************ 00:22:47.052 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:22:47.052 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:22:47.052 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:47.052 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:47.052 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:47.052 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:47.052 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@436 -- # local -g is_hw=no 00:22:47.052 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:47.052 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.052 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:47.052 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:47.052 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:47.052 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:47.052 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:47.052 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:47.052 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:47.052 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:47.052 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:47.052 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:47.052 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:47.052 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:47.052 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:47.052 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:47.053 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:47.053 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:47.053 11:59:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:47.053 Found net devices under 0000:31:00.0: cvl_0_0 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:47.053 Found net devices under 0000:31:00.1: cvl_0_1 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:47.053 11:59:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # is_hw=yes 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:47.053 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:47.314 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:47.314 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # 
ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:47.314 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:47.314 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:47.314 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:47.314 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms 00:22:47.314 00:22:47.314 --- 10.0.0.2 ping statistics --- 00:22:47.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:47.314 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms 00:22:47.314 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:47.314 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:47.314 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:22:47.314 00:22:47.314 --- 10.0.0.1 ping statistics --- 00:22:47.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:47.314 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:22:47.314 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:47.314 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # return 0 00:22:47.314 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:47.314 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:47.314 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:47.314 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:47.314 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:47.314 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:47.314 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:47.314 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:47.314 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:47.314 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:47.314 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:47.314 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # nvmfpid=1997314 00:22:47.314 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # waitforlisten 1997314 00:22:47.314 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:47.314 11:59:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1997314 ']' 00:22:47.314 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:47.314 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:47.314 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:47.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:47.315 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:47.315 11:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:47.315 [2024-10-11 11:59:49.994151] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:22:47.315 [2024-10-11 11:59:49.994220] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:47.575 [2024-10-11 11:59:50.090628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:47.575 [2024-10-11 11:59:50.129478] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:47.575 [2024-10-11 11:59:50.129512] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:47.575 [2024-10-11 11:59:50.129518] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:47.575 [2024-10-11 11:59:50.129523] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:47.575 [2024-10-11 11:59:50.129527] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
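For readability, the nvmftestinit sequence traced above for shutdown_tc3 reduces to roughly the commands below. All addresses, interface names and flags are taken from the trace; the iptables rule is shown without the SPDK_NVMF comment tag the helper adds, and this is a condensed illustration, not a replacement for the script's own setup.

# Condensed from the trace above; illustrative only.
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                    # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # root namespace reaches the target address
ip netns exec "$NS" ping -c 1 10.0.0.1             # namespace reaches the initiator address

# The target is then started inside the namespace; core mask 0x1E selects cores 1-4,
# matching the four reactors reported in the log.
ip netns exec "$NS" /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0x1E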
00:22:47.575 [2024-10-11 11:59:50.130927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:47.575 [2024-10-11 11:59:50.131109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:47.575 [2024-10-11 11:59:50.131256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:47.575 [2024-10-11 11:59:50.131256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:48.145 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:48.145 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:22:48.145 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:48.146 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:48.146 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:48.146 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:48.146 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:48.146 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.146 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:48.146 [2024-10-11 11:59:50.838632] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:48.146 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.146 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:48.146 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:48.146 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:48.146 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:48.405 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:48.405 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:48.405 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:48.405 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:48.405 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:48.405 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:48.405 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:48.405 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:22:48.405 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:48.405 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:48.405 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:48.405 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:48.405 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:48.405 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:48.405 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:48.405 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:48.405 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:48.405 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:48.406 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:48.406 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:48.406 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:48.406 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:48.406 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.406 11:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:48.406 Malloc1 00:22:48.406 [2024-10-11 11:59:50.945848] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:48.406 Malloc2 00:22:48.406 Malloc3 00:22:48.406 Malloc4 00:22:48.406 Malloc5 00:22:48.667 Malloc6 00:22:48.667 Malloc7 00:22:48.667 Malloc8 00:22:48.667 Malloc9 00:22:48.667 Malloc10 00:22:48.667 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.667 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:48.667 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:48.667 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:48.667 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1997537 00:22:48.667 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1997537 /var/tmp/bdevperf.sock 00:22:48.667 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1997537 ']' 00:22:48.667 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:48.667 11:59:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:48.667 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:48.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:48.667 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:48.667 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:48.667 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:48.667 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:48.667 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # config=() 00:22:48.667 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # local subsystem config 00:22:48.667 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:48.667 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:48.667 { 00:22:48.667 "params": { 00:22:48.667 "name": "Nvme$subsystem", 00:22:48.667 "trtype": "$TEST_TRANSPORT", 00:22:48.667 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.667 "adrfam": "ipv4", 00:22:48.667 "trsvcid": "$NVMF_PORT", 00:22:48.667 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.667 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.667 "hdgst": ${hdgst:-false}, 00:22:48.667 "ddgst": ${ddgst:-false} 00:22:48.667 }, 00:22:48.667 "method": "bdev_nvme_attach_controller" 00:22:48.667 } 00:22:48.667 EOF 00:22:48.667 )") 00:22:48.667 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:48.667 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:48.667 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:48.667 { 00:22:48.667 "params": { 00:22:48.667 "name": "Nvme$subsystem", 00:22:48.667 "trtype": "$TEST_TRANSPORT", 00:22:48.667 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.667 "adrfam": "ipv4", 00:22:48.667 "trsvcid": "$NVMF_PORT", 00:22:48.667 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.667 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.667 "hdgst": ${hdgst:-false}, 00:22:48.667 "ddgst": ${ddgst:-false} 00:22:48.667 }, 00:22:48.667 "method": "bdev_nvme_attach_controller" 00:22:48.667 } 00:22:48.667 EOF 00:22:48.667 )") 00:22:48.667 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:48.667 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:48.667 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:48.667 { 00:22:48.667 "params": { 00:22:48.667 
"name": "Nvme$subsystem", 00:22:48.667 "trtype": "$TEST_TRANSPORT", 00:22:48.667 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.667 "adrfam": "ipv4", 00:22:48.667 "trsvcid": "$NVMF_PORT", 00:22:48.667 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.667 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.667 "hdgst": ${hdgst:-false}, 00:22:48.667 "ddgst": ${ddgst:-false} 00:22:48.667 }, 00:22:48.667 "method": "bdev_nvme_attach_controller" 00:22:48.667 } 00:22:48.667 EOF 00:22:48.667 )") 00:22:48.667 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:48.929 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:48.929 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:48.929 { 00:22:48.929 "params": { 00:22:48.929 "name": "Nvme$subsystem", 00:22:48.929 "trtype": "$TEST_TRANSPORT", 00:22:48.929 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.929 "adrfam": "ipv4", 00:22:48.929 "trsvcid": "$NVMF_PORT", 00:22:48.929 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.929 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.929 "hdgst": ${hdgst:-false}, 00:22:48.929 "ddgst": ${ddgst:-false} 00:22:48.929 }, 00:22:48.929 "method": "bdev_nvme_attach_controller" 00:22:48.929 } 00:22:48.929 EOF 00:22:48.929 )") 00:22:48.929 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:48.929 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:48.929 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:48.929 { 00:22:48.929 "params": { 00:22:48.929 "name": "Nvme$subsystem", 00:22:48.929 "trtype": "$TEST_TRANSPORT", 00:22:48.929 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.929 "adrfam": "ipv4", 00:22:48.929 "trsvcid": "$NVMF_PORT", 00:22:48.929 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.929 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.929 "hdgst": ${hdgst:-false}, 00:22:48.929 "ddgst": ${ddgst:-false} 00:22:48.929 }, 00:22:48.929 "method": "bdev_nvme_attach_controller" 00:22:48.929 } 00:22:48.929 EOF 00:22:48.929 )") 00:22:48.929 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:48.929 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:48.929 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:48.929 { 00:22:48.929 "params": { 00:22:48.929 "name": "Nvme$subsystem", 00:22:48.929 "trtype": "$TEST_TRANSPORT", 00:22:48.929 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.929 "adrfam": "ipv4", 00:22:48.929 "trsvcid": "$NVMF_PORT", 00:22:48.929 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.929 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.929 "hdgst": ${hdgst:-false}, 00:22:48.929 "ddgst": ${ddgst:-false} 00:22:48.929 }, 00:22:48.929 "method": "bdev_nvme_attach_controller" 00:22:48.929 } 00:22:48.929 EOF 00:22:48.929 )") 00:22:48.929 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:48.929 [2024-10-11 11:59:51.396351] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:22:48.929 [2024-10-11 11:59:51.396406] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1997537 ] 00:22:48.929 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:48.929 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:48.929 { 00:22:48.929 "params": { 00:22:48.929 "name": "Nvme$subsystem", 00:22:48.929 "trtype": "$TEST_TRANSPORT", 00:22:48.929 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.929 "adrfam": "ipv4", 00:22:48.929 "trsvcid": "$NVMF_PORT", 00:22:48.929 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.929 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.929 "hdgst": ${hdgst:-false}, 00:22:48.929 "ddgst": ${ddgst:-false} 00:22:48.929 }, 00:22:48.929 "method": "bdev_nvme_attach_controller" 00:22:48.929 } 00:22:48.929 EOF 00:22:48.929 )") 00:22:48.929 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:48.929 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:48.929 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:48.929 { 00:22:48.929 "params": { 00:22:48.929 "name": "Nvme$subsystem", 00:22:48.929 "trtype": "$TEST_TRANSPORT", 00:22:48.929 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.929 "adrfam": "ipv4", 00:22:48.929 "trsvcid": "$NVMF_PORT", 00:22:48.929 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.929 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.929 "hdgst": ${hdgst:-false}, 00:22:48.929 "ddgst": ${ddgst:-false} 00:22:48.929 }, 00:22:48.929 "method": "bdev_nvme_attach_controller" 00:22:48.929 } 00:22:48.929 EOF 00:22:48.929 )") 00:22:48.929 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:48.929 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:48.929 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:48.929 { 00:22:48.929 "params": { 00:22:48.929 "name": "Nvme$subsystem", 00:22:48.929 "trtype": "$TEST_TRANSPORT", 00:22:48.930 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.930 "adrfam": "ipv4", 00:22:48.930 "trsvcid": "$NVMF_PORT", 00:22:48.930 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.930 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.930 "hdgst": ${hdgst:-false}, 00:22:48.930 "ddgst": ${ddgst:-false} 00:22:48.930 }, 00:22:48.930 "method": "bdev_nvme_attach_controller" 00:22:48.930 } 00:22:48.930 EOF 00:22:48.930 )") 00:22:48.930 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:48.930 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:48.930 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:48.930 { 00:22:48.930 "params": { 00:22:48.930 "name": "Nvme$subsystem", 00:22:48.930 "trtype": "$TEST_TRANSPORT", 00:22:48.930 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.930 
"adrfam": "ipv4", 00:22:48.930 "trsvcid": "$NVMF_PORT", 00:22:48.930 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.930 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.930 "hdgst": ${hdgst:-false}, 00:22:48.930 "ddgst": ${ddgst:-false} 00:22:48.930 }, 00:22:48.930 "method": "bdev_nvme_attach_controller" 00:22:48.930 } 00:22:48.930 EOF 00:22:48.930 )") 00:22:48.930 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:48.930 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # jq . 00:22:48.930 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@583 -- # IFS=, 00:22:48.930 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:22:48.930 "params": { 00:22:48.930 "name": "Nvme1", 00:22:48.930 "trtype": "tcp", 00:22:48.930 "traddr": "10.0.0.2", 00:22:48.930 "adrfam": "ipv4", 00:22:48.930 "trsvcid": "4420", 00:22:48.930 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:48.930 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:48.930 "hdgst": false, 00:22:48.930 "ddgst": false 00:22:48.930 }, 00:22:48.930 "method": "bdev_nvme_attach_controller" 00:22:48.930 },{ 00:22:48.930 "params": { 00:22:48.930 "name": "Nvme2", 00:22:48.930 "trtype": "tcp", 00:22:48.930 "traddr": "10.0.0.2", 00:22:48.930 "adrfam": "ipv4", 00:22:48.930 "trsvcid": "4420", 00:22:48.930 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:48.930 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:48.930 "hdgst": false, 00:22:48.930 "ddgst": false 00:22:48.930 }, 00:22:48.930 "method": "bdev_nvme_attach_controller" 00:22:48.930 },{ 00:22:48.930 "params": { 00:22:48.930 "name": "Nvme3", 00:22:48.930 "trtype": "tcp", 00:22:48.930 "traddr": "10.0.0.2", 00:22:48.930 "adrfam": "ipv4", 00:22:48.930 "trsvcid": "4420", 00:22:48.930 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:48.930 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:48.930 "hdgst": false, 00:22:48.930 "ddgst": false 00:22:48.930 }, 00:22:48.930 "method": "bdev_nvme_attach_controller" 00:22:48.930 },{ 00:22:48.930 "params": { 00:22:48.930 "name": "Nvme4", 00:22:48.930 "trtype": "tcp", 00:22:48.930 "traddr": "10.0.0.2", 00:22:48.930 "adrfam": "ipv4", 00:22:48.930 "trsvcid": "4420", 00:22:48.930 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:48.930 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:48.930 "hdgst": false, 00:22:48.930 "ddgst": false 00:22:48.930 }, 00:22:48.930 "method": "bdev_nvme_attach_controller" 00:22:48.930 },{ 00:22:48.930 "params": { 00:22:48.930 "name": "Nvme5", 00:22:48.930 "trtype": "tcp", 00:22:48.930 "traddr": "10.0.0.2", 00:22:48.930 "adrfam": "ipv4", 00:22:48.930 "trsvcid": "4420", 00:22:48.930 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:48.930 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:48.930 "hdgst": false, 00:22:48.930 "ddgst": false 00:22:48.930 }, 00:22:48.930 "method": "bdev_nvme_attach_controller" 00:22:48.930 },{ 00:22:48.930 "params": { 00:22:48.930 "name": "Nvme6", 00:22:48.930 "trtype": "tcp", 00:22:48.930 "traddr": "10.0.0.2", 00:22:48.930 "adrfam": "ipv4", 00:22:48.930 "trsvcid": "4420", 00:22:48.930 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:48.930 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:48.930 "hdgst": false, 00:22:48.930 "ddgst": false 00:22:48.930 }, 00:22:48.930 "method": "bdev_nvme_attach_controller" 00:22:48.930 },{ 00:22:48.930 "params": { 00:22:48.930 "name": "Nvme7", 00:22:48.930 "trtype": "tcp", 00:22:48.930 "traddr": "10.0.0.2", 
00:22:48.930 "adrfam": "ipv4", 00:22:48.930 "trsvcid": "4420", 00:22:48.930 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:48.930 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:48.930 "hdgst": false, 00:22:48.930 "ddgst": false 00:22:48.930 }, 00:22:48.930 "method": "bdev_nvme_attach_controller" 00:22:48.930 },{ 00:22:48.930 "params": { 00:22:48.930 "name": "Nvme8", 00:22:48.930 "trtype": "tcp", 00:22:48.930 "traddr": "10.0.0.2", 00:22:48.930 "adrfam": "ipv4", 00:22:48.930 "trsvcid": "4420", 00:22:48.930 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:48.930 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:48.930 "hdgst": false, 00:22:48.930 "ddgst": false 00:22:48.930 }, 00:22:48.930 "method": "bdev_nvme_attach_controller" 00:22:48.930 },{ 00:22:48.930 "params": { 00:22:48.930 "name": "Nvme9", 00:22:48.930 "trtype": "tcp", 00:22:48.930 "traddr": "10.0.0.2", 00:22:48.930 "adrfam": "ipv4", 00:22:48.930 "trsvcid": "4420", 00:22:48.930 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:48.930 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:48.930 "hdgst": false, 00:22:48.930 "ddgst": false 00:22:48.930 }, 00:22:48.930 "method": "bdev_nvme_attach_controller" 00:22:48.930 },{ 00:22:48.930 "params": { 00:22:48.930 "name": "Nvme10", 00:22:48.930 "trtype": "tcp", 00:22:48.930 "traddr": "10.0.0.2", 00:22:48.930 "adrfam": "ipv4", 00:22:48.930 "trsvcid": "4420", 00:22:48.930 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:48.930 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:48.930 "hdgst": false, 00:22:48.930 "ddgst": false 00:22:48.930 }, 00:22:48.930 "method": "bdev_nvme_attach_controller" 00:22:48.930 }' 00:22:48.930 [2024-10-11 11:59:51.476200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.930 [2024-10-11 11:59:51.512926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:50.314 Running I/O for 10 seconds... 
00:22:50.314 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:50.314 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:22:50.314 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:50.314 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.314 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:50.574 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.574 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:50.574 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:50.574 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:50.574 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:50.574 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:22:50.574 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:22:50.574 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:50.574 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:50.574 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:50.574 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:50.574 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.574 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:50.574 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.574 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:50.574 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:50.574 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:50.835 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:50.835 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:50.835 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:50.835 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:50.835 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.835 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:50.835 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.835 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:50.835 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:50.835 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:51.098 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:51.098 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:51.098 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:51.098 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:51.098 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.098 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:51.098 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.098 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:51.098 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:51.098 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:22:51.098 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:22:51.098 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:22:51.098 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1997314 00:22:51.098 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 1997314 ']' 00:22:51.098 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 1997314 00:22:51.098 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:22:51.098 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:51.098 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1997314 00:22:51.372 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:51.372 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:51.372 11:59:53 
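These waitforio iterations poll bdevperf's RPC socket until Nvme1n1 has serviced at least 100 reads before the target may be killed mid-I/O: read_io_count is 3 on the first pass, 67 above, and reaches 131 just below, at which point the loop breaks. The same pattern, condensed; scripts/rpc.py being invoked directly from the repo root is an assumption (the harness goes through its rpc_cmd wrapper), and the function name is illustrative.

# Sketch of the waitforio polling pattern: up to ten attempts, 0.25 s apart,
# until the first bdev reports at least 100 completed reads.
waitforio_sketch() {    # hypothetical name; shutdown.sh's own helper is waitforio
    local sock=$1 bdev=$2 i count
    for ((i = 10; i > 0; i--)); do
        count=$(scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
        if [ "${count:-0}" -ge 100 ]; then
            return 0
        fi
        sleep 0.25
    done
    return 1
}

# waitforio_sketch /var/tmp/bdevperf.sock Nvme1n1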
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1997314' 00:22:51.372 killing process with pid 1997314 00:22:51.372 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 1997314 00:22:51.372 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 1997314 00:22:51.372 [2024-10-11 11:59:53.827145] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeacd90 is same with the state(6) to be set 00:22:51.372 [2024-10-11 11:59:53.828164] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with the state(6) to be set 00:22:51.372 [2024-10-11 11:59:53.828193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with the state(6) to be set 00:22:51.372 [2024-10-11 11:59:53.828199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with the state(6) to be set 00:22:51.372 [2024-10-11 11:59:53.828204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with the state(6) to be set 00:22:51.372 [2024-10-11 11:59:53.828208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with the state(6) to be set 00:22:51.372 [2024-10-11 11:59:53.828213] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with the state(6) to be set 00:22:51.372 [2024-10-11 11:59:53.828219] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with the state(6) to be set 00:22:51.372 [2024-10-11 11:59:53.828224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with the state(6) to be set 00:22:51.372 [2024-10-11 11:59:53.828229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with the state(6) to be set 00:22:51.372 [2024-10-11 11:59:53.828234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with the state(6) to be set 00:22:51.372 [2024-10-11 11:59:53.828239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with the state(6) to be set 00:22:51.372 [2024-10-11 11:59:53.828244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with the state(6) to be set 00:22:51.372 [2024-10-11 11:59:53.828249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with the state(6) to be set 00:22:51.372 [2024-10-11 11:59:53.828253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with the state(6) to be set 00:22:51.372 [2024-10-11 11:59:53.828259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with the state(6) to be set 00:22:51.372 [2024-10-11 11:59:53.828245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:1[2024-10-11 11:59:53.828264] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with t28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.372 he state(6) to be set 00:22:51.372 [2024-10-11 11:59:53.828272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with the state(6) 
to be set 00:22:51.372 [2024-10-11 11:59:53.828277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with the state(6) to be set 00:22:51.372 [2024-10-11 11:59:53.828282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with the state(6) to be set 00:22:51.372 [2024-10-11 11:59:53.828282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.372 [2024-10-11 11:59:53.828286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with the state(6) to be set 00:22:51.372 [2024-10-11 11:59:53.828292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with the state(6) to be set 00:22:51.372 [2024-10-11 11:59:53.828297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with the state(6) to be set 00:22:51.372 [2024-10-11 11:59:53.828302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.372 [2024-10-11 11:59:53.828305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with the state(6) to be set 00:22:51.372 [2024-10-11 11:59:53.828311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with the state(6) to be set 00:22:51.372 [2024-10-11 11:59:53.828311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.372 [2024-10-11 11:59:53.828317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with the state(6) to be set 00:22:51.372 [2024-10-11 11:59:53.828322] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with the state(6) to be set 00:22:51.372 [2024-10-11 11:59:53.828322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.372 [2024-10-11 11:59:53.828327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with the state(6) to be set 00:22:51.372 [2024-10-11 11:59:53.828331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-10-11 11:59:53.828332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.372 he state(6) to be set 00:22:51.372 [2024-10-11 11:59:53.828339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with the state(6) to be set 00:22:51.372 [2024-10-11 11:59:53.828342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:1[2024-10-11 11:59:53.828344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with t28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.372 he state(6) to be set 00:22:51.373 [2024-10-11 11:59:53.828350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with the state(6) to be set 00:22:51.373 [2024-10-11 11:59:53.828352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.373 [2024-10-11 
11:59:53.828355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with the state(6) to be set 00:22:51.373 [2024-10-11 11:59:53.828361] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with the state(6) to be set 00:22:51.373 [2024-10-11 11:59:53.828362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.373 [2024-10-11 11:59:53.828366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with the state(6) to be set 00:22:51.373 [2024-10-11 11:59:53.828370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-10-11 11:59:53.828371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.373 he state(6) to be set 00:22:51.373 [2024-10-11 11:59:53.828379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with the state(6) to be set 00:22:51.373 [2024-10-11 11:59:53.828381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.373 [2024-10-11 11:59:53.828384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with the state(6) to be set 00:22:51.373 [2024-10-11 11:59:53.828389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with the state(6) to be set 00:22:51.373 [2024-10-11 11:59:53.828389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-10-11 11:59:53.828395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.373 he state(6) to be set 00:22:51.373 [2024-10-11 11:59:53.828404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with the state(6) to be set 00:22:51.373 [2024-10-11 11:59:53.828406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.373 [2024-10-11 11:59:53.828409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with the state(6) to be set 00:22:51.373 [2024-10-11 11:59:53.828415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with the state(6) to be set 00:22:51.373 [2024-10-11 11:59:53.828415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.373 [2024-10-11 11:59:53.828420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with the state(6) to be set 00:22:51.373 [2024-10-11 11:59:53.828425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with the state(6) to be set 00:22:51.373 [2024-10-11 11:59:53.828425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.373 [2024-10-11 11:59:53.828430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with the state(6) to be set 00:22:51.373 [2024-10-11 11:59:53.828433] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.373 [2024-10-11 11:59:53.828435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with the state(6) to be set 00:22:51.373 [2024-10-11 11:59:53.828441] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with the state(6) to be set 00:22:51.373 [2024-10-11 11:59:53.828443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.373 [2024-10-11 11:59:53.828446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with the state(6) to be set 00:22:51.373 [2024-10-11 11:59:53.828452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with t[2024-10-11 11:59:53.828451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(6) to be set 00:22:51.373 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.373 [2024-10-11 11:59:53.828459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with the state(6) to be set 00:22:51.373 [2024-10-11 11:59:53.828464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with t[2024-10-11 11:59:53.828463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128he state(6) to be set 00:22:51.373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.373 [2024-10-11 11:59:53.828471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with the state(6) to be set 00:22:51.373 [2024-10-11 11:59:53.828473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.373 [2024-10-11 11:59:53.828476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with the state(6) to be set 00:22:51.373 [2024-10-11 11:59:53.828481] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with the state(6) to be set 00:22:51.373 [2024-10-11 11:59:53.828483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.373 [2024-10-11 11:59:53.828486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with the state(6) to be set 00:22:51.373 [2024-10-11 11:59:53.828491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-10-11 11:59:53.828492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.373 he state(6) to be set 00:22:51.373 [2024-10-11 11:59:53.828500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with the state(6) to be set 00:22:51.373 [2024-10-11 11:59:53.828503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.373 [2024-10-11 11:59:53.828505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with the state(6) to be set 00:22:51.373 [2024-10-11 11:59:53.828511] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with the state(6) to be set 00:22:51.373 [2024-10-11 11:59:53.828511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.373 [2024-10-11 11:59:53.828516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with the state(6) to be set 00:22:51.373 [2024-10-11 11:59:53.828521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with the state(6) to be set 00:22:51.373 [2024-10-11 11:59:53.828521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.373 [2024-10-11 11:59:53.828526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with the state(6) to be set 00:22:51.373 [2024-10-11 11:59:53.828529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.373 [2024-10-11 11:59:53.828531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf7d0 is same with the state(6) to be set 00:22:51.373 [2024-10-11 11:59:53.828539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.373 [2024-10-11 11:59:53.828547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.373 [2024-10-11 11:59:53.828557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.373 [2024-10-11 11:59:53.828564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.373 [2024-10-11 11:59:53.828573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.373 [2024-10-11 11:59:53.828580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.373 [2024-10-11 11:59:53.828590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.373 [2024-10-11 11:59:53.828598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.373 [2024-10-11 11:59:53.828607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.373 [2024-10-11 11:59:53.828614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.373 [2024-10-11 11:59:53.828624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.373 [2024-10-11 11:59:53.828631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.373 [2024-10-11 11:59:53.828641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.373 [2024-10-11 11:59:53.828650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.373 [2024-10-11 11:59:53.828660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.373 [2024-10-11 11:59:53.828667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.373 [2024-10-11 11:59:53.828676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.373 [2024-10-11 11:59:53.828683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.373 [2024-10-11 11:59:53.828693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.373 [2024-10-11 11:59:53.828700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.373 [2024-10-11 11:59:53.828710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.373 [2024-10-11 11:59:53.828717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.373 [2024-10-11 11:59:53.828726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.373 [2024-10-11 11:59:53.828733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.373 [2024-10-11 11:59:53.828742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.373 [2024-10-11 11:59:53.828750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.373 [2024-10-11 11:59:53.828759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.373 [2024-10-11 11:59:53.828767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.373 [2024-10-11 11:59:53.828776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.373 [2024-10-11 11:59:53.828783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.374 [2024-10-11 11:59:53.828792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.374 [2024-10-11 11:59:53.828799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.374 [2024-10-11 11:59:53.828809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:51.374 [2024-10-11 11:59:53.828816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.374 [2024-10-11 11:59:53.828826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.374 [2024-10-11 11:59:53.828833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.374 [2024-10-11 11:59:53.828842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.374 [2024-10-11 11:59:53.828849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.374 [2024-10-11 11:59:53.828864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.374 [2024-10-11 11:59:53.828872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.374 [2024-10-11 11:59:53.828882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.374 [2024-10-11 11:59:53.828889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.374 [2024-10-11 11:59:53.828899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.374 [2024-10-11 11:59:53.828907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.374 [2024-10-11 11:59:53.828917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.374 [2024-10-11 11:59:53.828925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.374 [2024-10-11 11:59:53.828934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.374 [2024-10-11 11:59:53.828941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.374 [2024-10-11 11:59:53.828951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.374 [2024-10-11 11:59:53.828958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.374 [2024-10-11 11:59:53.828968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.374 [2024-10-11 11:59:53.828975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.374 [2024-10-11 11:59:53.828984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:51.374 [2024-10-11 11:59:53.828992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.374 [2024-10-11 11:59:53.829001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.374 [2024-10-11 11:59:53.829008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.374 [2024-10-11 11:59:53.829017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.374 [2024-10-11 11:59:53.829025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.374 [2024-10-11 11:59:53.829034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.374 [2024-10-11 11:59:53.829041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.374 [2024-10-11 11:59:53.829051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.374 [2024-10-11 11:59:53.829058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.374 [2024-10-11 11:59:53.829074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.374 [2024-10-11 11:59:53.829083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.374 [2024-10-11 11:59:53.829092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.374 [2024-10-11 11:59:53.829099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.374 [2024-10-11 11:59:53.829109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.374 [2024-10-11 11:59:53.829116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.374 [2024-10-11 11:59:53.829126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.374 [2024-10-11 11:59:53.829133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.374 [2024-10-11 11:59:53.829142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.374 [2024-10-11 11:59:53.829149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.374 [2024-10-11 11:59:53.829159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.374 [2024-10-11 
11:59:53.829166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.374 [2024-10-11 11:59:53.829175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.374 [2024-10-11 11:59:53.829182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.374 [2024-10-11 11:59:53.829191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.374 [2024-10-11 11:59:53.829199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.374 [2024-10-11 11:59:53.829208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.374 [2024-10-11 11:59:53.829215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.374 [2024-10-11 11:59:53.829224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.374 [2024-10-11 11:59:53.829231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.374 [2024-10-11 11:59:53.829241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.374 [2024-10-11 11:59:53.829248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.374 [2024-10-11 11:59:53.829257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.374 [2024-10-11 11:59:53.829264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.374 [2024-10-11 11:59:53.829273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.374 [2024-10-11 11:59:53.829281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.374 [2024-10-11 11:59:53.829291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.374 [2024-10-11 11:59:53.829298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.374 [2024-10-11 11:59:53.829308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.374 [2024-10-11 11:59:53.829315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.374 [2024-10-11 11:59:53.829325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.374 [2024-10-11 11:59:53.829332] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.374 [2024-10-11 11:59:53.829341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.374 [2024-10-11 11:59:53.829348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.374 [2024-10-11 11:59:53.829358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.374 [2024-10-11 11:59:53.829365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.374 [2024-10-11 11:59:53.829364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead260 is same with the state(6) to be set 00:22:51.374 [2024-10-11 11:59:53.829375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.374 [2024-10-11 11:59:53.829378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead260 is same with the state(6) to be set 00:22:51.374 [2024-10-11 11:59:53.829382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.374 [2024-10-11 11:59:53.829392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.374 [2024-10-11 11:59:53.829399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.374 [2024-10-11 11:59:53.829457] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b994b0 was disconnected and freed. reset controller. 
00:22:51.374 [2024-10-11 11:59:53.830378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.374 [2024-10-11 11:59:53.830400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.374 [2024-10-11 11:59:53.830406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.374 [2024-10-11 11:59:53.830410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.374 [2024-10-11 11:59:53.830416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.374 [2024-10-11 11:59:53.830421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.374 [2024-10-11 11:59:53.830426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830450] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830463] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830468] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830473] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is 
same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830538] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830587] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830633] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.375 [2024-10-11 11:59:53.830685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830693] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead730 is same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.375 [2024-10-11 11:59:53.830711]
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.375 [2024-10-11 11:59:53.830719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.375 [2024-10-11 11:59:53.830727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.375 [2024-10-11 11:59:53.830734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.375 [2024-10-11 11:59:53.830742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.375 [2024-10-11 11:59:53.830750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.375 [2024-10-11 11:59:53.830757] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ac470 is same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.375 [2024-10-11 11:59:53.830798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.375 [2024-10-11 11:59:53.830806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.375 [2024-10-11 11:59:53.830813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.375 [2024-10-11 11:59:53.830822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.375 [2024-10-11 11:59:53.830829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.375 [2024-10-11 11:59:53.830837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.375 [2024-10-11 11:59:53.830844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.375 [2024-10-11 11:59:53.830851] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afb020 is same with the state(6) to be set 00:22:51.375 [2024-10-11 11:59:53.830885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.375 [2024-10-11 11:59:53.830894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.375 [2024-10-11 11:59:53.830903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.375 [2024-10-11 11:59:53.830911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.376 [2024-10-11 11:59:53.830919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.376 [2024-10-11 11:59:53.830926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.376 [2024-10-11 11:59:53.830934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.376 [2024-10-11 11:59:53.830942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.376 [2024-10-11 11:59:53.830952] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2f20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.830975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.376 [2024-10-11 11:59:53.830984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.376 [2024-10-11 11:59:53.830992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.376 [2024-10-11 11:59:53.830999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.376 [2024-10-11 11:59:53.831007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.376 [2024-10-11 11:59:53.831015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.376 [2024-10-11 11:59:53.831023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.376 [2024-10-11 11:59:53.831030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.376 [2024-10-11 11:59:53.831038] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a1700 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.831746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.831769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.831775] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.831781] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.831786] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.831790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.831796] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.831800] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 
is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.831805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.831810] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.831814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.831819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.831823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.831829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.831833] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.831838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.831843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.831851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.831856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.831860] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.831865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.831869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.831874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.831880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.831885] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.831889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.831894] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.831899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.831903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.831908] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.831912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.831917] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.831921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.831926] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.831931] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.831935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.831940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.831945] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.831950] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.831955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.831959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.831963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.831968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.831973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.831979] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.831984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.831989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.831993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.831998] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.832002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.832011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.832015] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.832019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.832024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.832029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.832033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.832038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.832043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.832047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.376 [2024-10-11 11:59:53.832052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.377 [2024-10-11 11:59:53.832056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.377 [2024-10-11 11:59:53.832061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.377 [2024-10-11 11:59:53.832069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeadc20 is same with the state(6) to be set 00:22:51.377 [2024-10-11 11:59:53.832354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.377 [2024-10-11 11:59:53.832370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.377 [2024-10-11 11:59:53.832383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.377 [2024-10-11 11:59:53.832392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.377 [2024-10-11 11:59:53.832404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.377 [2024-10-11 11:59:53.832413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.377 [2024-10-11 11:59:53.832424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.377 [2024-10-11 11:59:53.832433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.377 [2024-10-11 11:59:53.832447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.377 [2024-10-11 11:59:53.832456] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.377 [2024-10-11 11:59:53.832468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.377 [2024-10-11 11:59:53.832476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.377 [2024-10-11 11:59:53.832488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.377 [2024-10-11 11:59:53.832497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.377 [2024-10-11 11:59:53.832508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.377 [2024-10-11 11:59:53.832517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.377 [2024-10-11 11:59:53.832528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.377 [2024-10-11 11:59:53.832537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.377 [2024-10-11 11:59:53.832547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.377 [2024-10-11 11:59:53.832547] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeae0f0 is same with the state(6) to be set 00:22:51.377 [2024-10-11 11:59:53.832556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.377 [2024-10-11 11:59:53.832564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeae0f0 is same with the state(6) to be set 00:22:51.377 [2024-10-11 11:59:53.832568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.377 [2024-10-11 11:59:53.832578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.377 [2024-10-11 11:59:53.832589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.377 [2024-10-11 11:59:53.832598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.377 [2024-10-11 11:59:53.832610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.377 [2024-10-11 11:59:53.832619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.377 [2024-10-11 11:59:53.832630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.377 [2024-10-11 11:59:53.832639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.377 [2024-10-11 11:59:53.832653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.377 [2024-10-11 11:59:53.832662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.377 [2024-10-11 11:59:53.832673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.377 [2024-10-11 11:59:53.832683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.377 [2024-10-11 11:59:53.832692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.377 [2024-10-11 11:59:53.832700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.377 [2024-10-11 11:59:53.832710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.377 [2024-10-11 11:59:53.832718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.377 [2024-10-11 11:59:53.832727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.377 [2024-10-11 11:59:53.832734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.377 [2024-10-11 11:59:53.832743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.377 [2024-10-11 11:59:53.832750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.377 [2024-10-11 11:59:53.832760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.377 [2024-10-11 11:59:53.832767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.377 [2024-10-11 11:59:53.832776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.377 [2024-10-11 11:59:53.832783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.377 [2024-10-11 11:59:53.832792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.377 [2024-10-11 11:59:53.832800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.377 [2024-10-11 11:59:53.832809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.377 [2024-10-11 11:59:53.832816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.377 [2024-10-11 11:59:53.832825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.377 [2024-10-11 11:59:53.832832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.377 [2024-10-11 11:59:53.832842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.377 [2024-10-11 11:59:53.832844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeae470 is same with the state(6) to be set 00:22:51.377 [2024-10-11 11:59:53.832849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.377 [2024-10-11 11:59:53.832860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.377 [2024-10-11 11:59:53.832867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.377 [2024-10-11 11:59:53.832876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.377 [2024-10-11 11:59:53.832884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.377 [2024-10-11 11:59:53.832894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.377 [2024-10-11 11:59:53.832901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.377 [2024-10-11 11:59:53.832910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.377 [2024-10-11 11:59:53.832917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.377 [2024-10-11 11:59:53.832927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.377 [2024-10-11 11:59:53.832934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.377 [2024-10-11 11:59:53.832943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.377 [2024-10-11 11:59:53.832950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.377 [2024-10-11 11:59:53.832960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.377 [2024-10-11 11:59:53.832967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.377 [2024-10-11 11:59:53.832976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:51.377 [2024-10-11 11:59:53.832983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.377 [2024-10-11 11:59:53.832992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.377 [2024-10-11 11:59:53.832999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.377 [2024-10-11 11:59:53.833009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.377 [2024-10-11 11:59:53.833016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.377 [2024-10-11 11:59:53.833025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.377 [2024-10-11 11:59:53.833032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.377 [2024-10-11 11:59:53.833041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.377 [2024-10-11 11:59:53.833048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.378 [2024-10-11 11:59:53.833058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.378 [2024-10-11 11:59:53.833069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.378 [2024-10-11 11:59:53.833079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.378 [2024-10-11 11:59:53.833086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.378 [2024-10-11 11:59:53.833097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.378 [2024-10-11 11:59:53.833104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.378 [2024-10-11 11:59:53.833114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.378 [2024-10-11 11:59:53.833121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.378 [2024-10-11 11:59:53.833130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.378 [2024-10-11 11:59:53.833137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.378 [2024-10-11 11:59:53.833147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:51.378 [2024-10-11 11:59:53.833154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.378 [2024-10-11 11:59:53.833163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.378 [2024-10-11 11:59:53.833171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.378 [2024-10-11 11:59:53.833180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.378 [2024-10-11 11:59:53.833187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.378 [2024-10-11 11:59:53.833197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.378 [2024-10-11 11:59:53.833204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.378 [2024-10-11 11:59:53.833214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.378 [2024-10-11 11:59:53.833221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.378 [2024-10-11 11:59:53.833230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.378 [2024-10-11 11:59:53.833237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.378 [2024-10-11 11:59:53.833247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.378 [2024-10-11 11:59:53.833254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.378 [2024-10-11 11:59:53.833258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeae940 is same with the state(6) to be set 00:22:51.378 [2024-10-11 11:59:53.833263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.378 [2024-10-11 11:59:53.833274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.378 [2024-10-11 11:59:53.833283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.378 [2024-10-11 11:59:53.833290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.378 [2024-10-11 11:59:53.833299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.378 [2024-10-11 11:59:53.833308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.378 [2024-10-11 11:59:53.833318]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.378 [2024-10-11 11:59:53.833325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.378 [2024-10-11 11:59:53.833334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.378 [2024-10-11 11:59:53.833341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.378 [2024-10-11 11:59:53.833350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.378 [2024-10-11 11:59:53.833357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.378 [2024-10-11 11:59:53.833367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.378 [2024-10-11 11:59:53.833374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.378 [2024-10-11 11:59:53.833383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.378 [2024-10-11 11:59:53.833390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.378 [2024-10-11 11:59:53.833399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.378 [2024-10-11 11:59:53.833406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.378 [2024-10-11 11:59:53.833416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.378 [2024-10-11 11:59:53.833423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.378 [2024-10-11 11:59:53.833432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.378 [2024-10-11 11:59:53.833439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.378 [2024-10-11 11:59:53.833448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.378 [2024-10-11 11:59:53.833456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.378 [2024-10-11 11:59:53.833465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.378 [2024-10-11 11:59:53.833473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.378 [2024-10-11 11:59:53.833482] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.378 [2024-10-11 11:59:53.833489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.378 [2024-10-11 11:59:53.833538] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b9a7c0 was disconnected and freed. reset controller. 00:22:51.378 [2024-10-11 11:59:53.833827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf300 is same with the state(6) to be set 00:22:51.378 [2024-10-11 11:59:53.833851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf300 is same with the state(6) to be set 00:22:51.378 [2024-10-11 11:59:53.833856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf300 is same with the state(6) to be set 00:22:51.378 [2024-10-11 11:59:53.833861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf300 is same with the state(6) to be set 00:22:51.378 [2024-10-11 11:59:53.833865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf300 is same with the state(6) to be set 00:22:51.378 [2024-10-11 11:59:53.833870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf300 is same with the state(6) to be set 00:22:51.378 [2024-10-11 11:59:53.833875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf300 is same with the state(6) to be set 00:22:51.378 [2024-10-11 11:59:53.833880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf300 is same with the state(6) to be set 00:22:51.378 [2024-10-11 11:59:53.833884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf300 is same with the state(6) to be set 00:22:51.378 [2024-10-11 11:59:53.833889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf300 is same with the state(6) to be set 00:22:51.378 [2024-10-11 11:59:53.833893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf300 is same with the state(6) to be set 00:22:51.378 [2024-10-11 11:59:53.833898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf300 is same with the state(6) to be set 00:22:51.378 [2024-10-11 11:59:53.833902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf300 is same with the state(6) to be set 00:22:51.378 [2024-10-11 11:59:53.833907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf300 is same with the state(6) to be set 00:22:51.378 [2024-10-11 11:59:53.833912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf300 is same with the state(6) to be set 00:22:51.378 [2024-10-11 11:59:53.833916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf300 is same with the state(6) to be set 00:22:51.378 [2024-10-11 11:59:53.833921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf300 is same with the state(6) to be set 00:22:51.378 [2024-10-11 11:59:53.833925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf300 is same with the state(6) to be set 00:22:51.378 [2024-10-11 11:59:53.833930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf300 is same with the state(6) to be 
set 00:22:51.378 [2024-10-11 11:59:53.833935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf300 is same with the state(6) to be set 00:22:51.379 [the same tcp.c:1773 recv-state message for tqpair=0xeaf300 repeats, with only the microsecond timestamp advancing, through 11:59:53.834138] 00:22:51.379 [2024-10-11 11:59:53.834143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*:
The recv state of tqpair=0xeaf300 is same with the state(6) to be set 00:22:51.379 [2024-10-11 11:59:53.834272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.379 [2024-10-11 11:59:53.834292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [the same nvme_io_qpair_print_command / ABORTED - SQ DELETION pair repeats for WRITE cid:31-63 (lba:28544-32640) and for READ cid:0-25 (lba:24576-27776)] 00:22:51.380 [2024-10-11
11:59:53.850999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.380 [2024-10-11 11:59:53.851006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.380 [2024-10-11 11:59:53.851016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.381 [2024-10-11 11:59:53.851023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.381 [2024-10-11 11:59:53.851032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.381 [2024-10-11 11:59:53.851039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.381 [2024-10-11 11:59:53.851048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.381 [2024-10-11 11:59:53.851055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.381 [2024-10-11 11:59:53.851126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:51.381 [2024-10-11 11:59:53.851178] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ab0270 was disconnected and freed. reset controller. 00:22:51.381 [2024-10-11 11:59:53.851360] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:51.381 [2024-10-11 11:59:53.851403] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16ac470 (9): Bad file descriptor 00:22:51.381 [2024-10-11 11:59:53.851459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.381 [2024-10-11 11:59:53.851475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.381 [2024-10-11 11:59:53.851489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.381 [2024-10-11 11:59:53.851500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.381 [2024-10-11 11:59:53.851508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.381 [2024-10-11 11:59:53.851516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.381 [2024-10-11 11:59:53.851524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.381 [2024-10-11 11:59:53.851531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.381 [2024-10-11 11:59:53.851539] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b045c0 is same 
with the state(6) to be set 00:22:51.381 [2024-10-11 11:59:53.851553] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afb020 (9): Bad file descriptor 00:22:51.381 [2024-10-11 11:59:53.851584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.381 [2024-10-11 11:59:53.851593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.381 [2024-10-11 11:59:53.851608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.381 [2024-10-11 11:59:53.851616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.381 [2024-10-11 11:59:53.851624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.381 [2024-10-11 11:59:53.851631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.381 [2024-10-11 11:59:53.851639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.381 [2024-10-11 11:59:53.851646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.381 [2024-10-11 11:59:53.851653] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad60d0 is same with the state(6) to be set 00:22:51.381 [2024-10-11 11:59:53.851670] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16a2f20 (9): Bad file descriptor 00:22:51.381 [2024-10-11 11:59:53.851683] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16a1700 (9): Bad file descriptor 00:22:51.381 [2024-10-11 11:59:53.851710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.381 [2024-10-11 11:59:53.851719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.381 [2024-10-11 11:59:53.851727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.381 [2024-10-11 11:59:53.851734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.381 [2024-10-11 11:59:53.851742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.381 [2024-10-11 11:59:53.851749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.381 [2024-10-11 11:59:53.851757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.381 [2024-10-11 11:59:53.851765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.381 [2024-10-11 11:59:53.851772] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x1afde60 is same with the state(6) to be set 00:22:51.381 [2024-10-11 11:59:53.851798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.381 [2024-10-11 11:59:53.851806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.381 [2024-10-11 11:59:53.851815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.381 [2024-10-11 11:59:53.851822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.381 [2024-10-11 11:59:53.851830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.381 [2024-10-11 11:59:53.851837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.381 [2024-10-11 11:59:53.851844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.381 [2024-10-11 11:59:53.851851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.381 [2024-10-11 11:59:53.851861] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c2610 is same with the state(6) to be set 00:22:51.381 [2024-10-11 11:59:53.851885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.381 [2024-10-11 11:59:53.851894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.381 [2024-10-11 11:59:53.851901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.381 [2024-10-11 11:59:53.851909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.381 [2024-10-11 11:59:53.851916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.381 [2024-10-11 11:59:53.851923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.381 [2024-10-11 11:59:53.851931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.381 [2024-10-11 11:59:53.851938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.381 [2024-10-11 11:59:53.851945] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afcde0 is same with the state(6) to be set 00:22:51.381 [2024-10-11 11:59:53.851970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.381 [2024-10-11 11:59:53.851979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.381 
[2024-10-11 11:59:53.851987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.381 [2024-10-11 11:59:53.851994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.381 [2024-10-11 11:59:53.852002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.381 [2024-10-11 11:59:53.852009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.381 [2024-10-11 11:59:53.852017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.381 [2024-10-11 11:59:53.852024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.381 [2024-10-11 11:59:53.852030] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad63f0 is same with the state(6) to be set 00:22:51.381 [2024-10-11 11:59:53.853422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.381 [2024-10-11 11:59:53.853442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.381 [2024-10-11 11:59:53.853456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.381 [2024-10-11 11:59:53.853465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.382 [2024-10-11 11:59:53.853477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.382 [2024-10-11 11:59:53.853486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.382 [2024-10-11 11:59:53.853501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.382 [2024-10-11 11:59:53.853510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.382 [2024-10-11 11:59:53.853521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.382 [2024-10-11 11:59:53.853530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.382 [2024-10-11 11:59:53.853542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.382 [2024-10-11 11:59:53.853550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.382 [2024-10-11 11:59:53.853561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.382 [2024-10-11 11:59:53.853570] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [the same nvme_io_qpair_print_command / ABORTED - SQ DELETION pair repeats for READ cid:4-60 (lba:25088-32256)] 00:22:51.383 [2024-10-11 11:59:53.860270] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1aad7c0 was disconnected and freed. reset controller. 
00:22:51.383 [2024-10-11 11:59:53.862034] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:22:51.383 [2024-10-11 11:59:53.862086] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:22:51.383 [2024-10-11 11:59:53.862105] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b045c0 (9): Bad file descriptor 00:22:51.383 [2024-10-11 11:59:53.862180] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ad60d0 (9): Bad file descriptor 00:22:51.383 [2024-10-11 11:59:53.862215] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afde60 (9): Bad file descriptor 00:22:51.383 [2024-10-11 11:59:53.862231] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c2610 (9): Bad file descriptor 00:22:51.383 [2024-10-11 11:59:53.862253] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afcde0 (9): Bad file descriptor 00:22:51.383 [2024-10-11 11:59:53.862269] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ad63f0 (9): Bad file descriptor 00:22:51.383 [2024-10-11 11:59:53.866046] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:22:51.383 [2024-10-11 11:59:53.866590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:51.383 [2024-10-11 11:59:53.866641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16ac470 with addr=10.0.0.2, port=4420 00:22:51.383 [2024-10-11 11:59:53.866658] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ac470 is same with the state(6) to be set 00:22:51.383 [2024-10-11 11:59:53.866886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:51.383 [2024-10-11 11:59:53.866902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a2f20 with addr=10.0.0.2, port=4420 00:22:51.383 [2024-10-11 11:59:53.866913] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2f20 is same with the state(6) to be set 00:22:51.383 [2024-10-11 11:59:53.867395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.383 [2024-10-11 11:59:53.867416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.383 [2024-10-11 11:59:53.867435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.383 [2024-10-11 11:59:53.867446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.383 [2024-10-11 11:59:53.867459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.383 [2024-10-11 11:59:53.867470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.383 [2024-10-11 11:59:53.867483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.383 
[2024-10-11 11:59:53.867493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.383 [2024-10-11 11:59:53.867506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.383 [2024-10-11 11:59:53.867516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.383 [2024-10-11 11:59:53.867530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.383 [2024-10-11 11:59:53.867540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.383 [2024-10-11 11:59:53.867554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.383 [2024-10-11 11:59:53.867563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.383 [2024-10-11 11:59:53.867576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.383 [2024-10-11 11:59:53.867593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.383 [2024-10-11 11:59:53.867606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.383 [2024-10-11 11:59:53.867617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.383 [2024-10-11 11:59:53.867629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.383 [2024-10-11 11:59:53.867640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.383 [2024-10-11 11:59:53.867653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.383 [2024-10-11 11:59:53.867664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.383 [2024-10-11 11:59:53.867677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.383 [2024-10-11 11:59:53.867687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.384 [2024-10-11 11:59:53.867700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.384 [2024-10-11 11:59:53.867710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.384 [2024-10-11 11:59:53.867723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.384 [2024-10-11 
11:59:53.867733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.384 [2024-10-11 11:59:53.867746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.384 [2024-10-11 11:59:53.867757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.384 [2024-10-11 11:59:53.867770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.384 [2024-10-11 11:59:53.867780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.384 [2024-10-11 11:59:53.867793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.384 [2024-10-11 11:59:53.867803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.384 [2024-10-11 11:59:53.867816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.384 [2024-10-11 11:59:53.867826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.384 [2024-10-11 11:59:53.867840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.384 [2024-10-11 11:59:53.867850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.384 [2024-10-11 11:59:53.867863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.384 [2024-10-11 11:59:53.867873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.384 [2024-10-11 11:59:53.867889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.384 [2024-10-11 11:59:53.867899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.384 [2024-10-11 11:59:53.867912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.384 [2024-10-11 11:59:53.867923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.384 [2024-10-11 11:59:53.867936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.384 [2024-10-11 11:59:53.867946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.384 [2024-10-11 11:59:53.867959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.384 [2024-10-11 11:59:53.867969] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.384 [2024-10-11 11:59:53.867982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.384 [2024-10-11 11:59:53.867993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.384 [2024-10-11 11:59:53.868006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.384 [2024-10-11 11:59:53.868016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.384 [2024-10-11 11:59:53.868029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.384 [2024-10-11 11:59:53.868040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.384 [2024-10-11 11:59:53.868053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.384 [2024-10-11 11:59:53.868070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.384 [2024-10-11 11:59:53.868083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.384 [2024-10-11 11:59:53.868094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.384 [2024-10-11 11:59:53.868107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.384 [2024-10-11 11:59:53.868118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.384 [2024-10-11 11:59:53.868131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.384 [2024-10-11 11:59:53.868141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.384 [2024-10-11 11:59:53.868155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.384 [2024-10-11 11:59:53.868165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.384 [2024-10-11 11:59:53.868178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.384 [2024-10-11 11:59:53.868190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.384 [2024-10-11 11:59:53.868203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.384 [2024-10-11 11:59:53.868213] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.384 [2024-10-11 11:59:53.868227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.384 [2024-10-11 11:59:53.868236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.384 [2024-10-11 11:59:53.868250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.384 [2024-10-11 11:59:53.868260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.384 [2024-10-11 11:59:53.868273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.384 [2024-10-11 11:59:53.868283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.384 [2024-10-11 11:59:53.868296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.384 [2024-10-11 11:59:53.868306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.384 [2024-10-11 11:59:53.868319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.384 [2024-10-11 11:59:53.868329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.384 [2024-10-11 11:59:53.868343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.384 [2024-10-11 11:59:53.868353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.384 [2024-10-11 11:59:53.868366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.384 [2024-10-11 11:59:53.868377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.384 [2024-10-11 11:59:53.868390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.384 [2024-10-11 11:59:53.868400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.384 [2024-10-11 11:59:53.868413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.384 [2024-10-11 11:59:53.868423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.384 [2024-10-11 11:59:53.868436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.384 [2024-10-11 11:59:53.868446] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.384 [2024-10-11 11:59:53.868460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.384 [2024-10-11 11:59:53.868470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.384 [2024-10-11 11:59:53.868485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.384 [2024-10-11 11:59:53.868496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.384 [2024-10-11 11:59:53.868509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.384 [2024-10-11 11:59:53.868519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.384 [2024-10-11 11:59:53.868532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.384 [2024-10-11 11:59:53.868542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.384 [2024-10-11 11:59:53.868555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.384 [2024-10-11 11:59:53.868565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.384 [2024-10-11 11:59:53.868578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.384 [2024-10-11 11:59:53.868588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.384 [2024-10-11 11:59:53.868601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.384 [2024-10-11 11:59:53.868612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.384 [2024-10-11 11:59:53.868625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.384 [2024-10-11 11:59:53.868635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.385 [2024-10-11 11:59:53.868648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.385 [2024-10-11 11:59:53.868659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.385 [2024-10-11 11:59:53.868672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.385 [2024-10-11 11:59:53.868682] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.385 [2024-10-11 11:59:53.868695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.385 [2024-10-11 11:59:53.868705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.385 [2024-10-11 11:59:53.868718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.385 [2024-10-11 11:59:53.868728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.385 [2024-10-11 11:59:53.868741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.385 [2024-10-11 11:59:53.868752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.385 [2024-10-11 11:59:53.868765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.385 [2024-10-11 11:59:53.868777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.385 [2024-10-11 11:59:53.868790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.385 [2024-10-11 11:59:53.868801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.385 [2024-10-11 11:59:53.868814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.385 [2024-10-11 11:59:53.868824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.385 [2024-10-11 11:59:53.868838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.385 [2024-10-11 11:59:53.868848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.385 [2024-10-11 11:59:53.868861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.385 [2024-10-11 11:59:53.868871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.385 [2024-10-11 11:59:53.868885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.385 [2024-10-11 11:59:53.868895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.385 [2024-10-11 11:59:53.868908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.385 [2024-10-11 11:59:53.868918] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.385 [2024-10-11 11:59:53.870956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.385 [2024-10-11 11:59:53.870982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.385 [2024-10-11 11:59:53.871001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.385 [2024-10-11 11:59:53.871012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.385 [2024-10-11 11:59:53.871026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.385 [2024-10-11 11:59:53.871036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.385 [2024-10-11 11:59:53.871049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.385 [2024-10-11 11:59:53.871060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.385 [2024-10-11 11:59:53.871083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.385 [2024-10-11 11:59:53.871093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.385 [2024-10-11 11:59:53.871107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.385 [2024-10-11 11:59:53.871117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.385 [2024-10-11 11:59:53.871135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.385 [2024-10-11 11:59:53.871146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.385 [2024-10-11 11:59:53.871158] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aaecf0 is same with the state(6) to be set 00:22:51.385 [2024-10-11 11:59:53.871212] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1aaecf0 was disconnected and freed. reset controller. 
00:22:51.385 [2024-10-11 11:59:53.871640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.385 [2024-10-11 11:59:53.871656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.385 [2024-10-11 11:59:53.871671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.385 [2024-10-11 11:59:53.871682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.385 [2024-10-11 11:59:53.871695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.385 [2024-10-11 11:59:53.871705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.385 [2024-10-11 11:59:53.871719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.385 [2024-10-11 11:59:53.871730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.385 [2024-10-11 11:59:53.871743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.385 [2024-10-11 11:59:53.871753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.385 [2024-10-11 11:59:53.871765] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab17f0 is same with the state(6) to be set 00:22:51.385 [2024-10-11 11:59:53.871814] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ab17f0 was disconnected and freed. reset controller. 
00:22:51.385 [2024-10-11 11:59:53.871911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.385 [2024-10-11 11:59:53.871926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.385 [2024-10-11 11:59:53.871942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.385 [2024-10-11 11:59:53.871953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.385 [2024-10-11 11:59:53.871968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.385 [2024-10-11 11:59:53.871978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.385 [2024-10-11 11:59:53.871992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.385 [2024-10-11 11:59:53.872002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.385 [2024-10-11 11:59:53.872015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.385 [2024-10-11 11:59:53.872030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.385 [2024-10-11 11:59:53.872043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.385 [2024-10-11 11:59:53.872053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.385 [2024-10-11 11:59:53.872075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.385 [2024-10-11 11:59:53.872086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.386 [2024-10-11 11:59:53.872099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.386 [2024-10-11 11:59:53.872109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.386 [2024-10-11 11:59:53.872123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.386 [2024-10-11 11:59:53.872134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.386 [2024-10-11 11:59:53.872147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.386 [2024-10-11 11:59:53.872157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.386 [2024-10-11 
11:59:53.872171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.386 [2024-10-11 11:59:53.872181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.386 [2024-10-11 11:59:53.872194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.386 [2024-10-11 11:59:53.872205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.386 [2024-10-11 11:59:53.872218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.386 [2024-10-11 11:59:53.872228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.386 [2024-10-11 11:59:53.872241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.386 [2024-10-11 11:59:53.872252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.386 [2024-10-11 11:59:53.872265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.386 [2024-10-11 11:59:53.872275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.386 [2024-10-11 11:59:53.872289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.386 [2024-10-11 11:59:53.872299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.386 [2024-10-11 11:59:53.872313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.386 [2024-10-11 11:59:53.872323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.386 [2024-10-11 11:59:53.872342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.386 [2024-10-11 11:59:53.872352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.386 [2024-10-11 11:59:53.872366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.386 [2024-10-11 11:59:53.872376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.386 [2024-10-11 11:59:53.872389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.386 [2024-10-11 11:59:53.872399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.386 [2024-10-11 11:59:53.872413] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.386 [2024-10-11 11:59:53.872423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.386 [2024-10-11 11:59:53.872436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.386 [2024-10-11 11:59:53.872446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.386 [2024-10-11 11:59:53.872460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.386 [2024-10-11 11:59:53.872470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.386 [2024-10-11 11:59:53.872483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.386 [2024-10-11 11:59:53.872494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.386 [2024-10-11 11:59:53.872507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.386 [2024-10-11 11:59:53.872517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.386 [2024-10-11 11:59:53.872531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.386 [2024-10-11 11:59:53.872541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.386 [2024-10-11 11:59:53.872554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.386 [2024-10-11 11:59:53.872565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.386 [2024-10-11 11:59:53.872578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.386 [2024-10-11 11:59:53.872589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.386 [2024-10-11 11:59:53.872603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.386 [2024-10-11 11:59:53.872613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.386 [2024-10-11 11:59:53.872626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.386 [2024-10-11 11:59:53.872639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.386 [2024-10-11 11:59:53.872652] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.386 [2024-10-11 11:59:53.872663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.386 [2024-10-11 11:59:53.872676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.386 [2024-10-11 11:59:53.872686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.386 [2024-10-11 11:59:53.872699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.386 [2024-10-11 11:59:53.872709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.386 [2024-10-11 11:59:53.872722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.386 [2024-10-11 11:59:53.872732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.386 [2024-10-11 11:59:53.872745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.386 [2024-10-11 11:59:53.872756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.386 [2024-10-11 11:59:53.872768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.386 [2024-10-11 11:59:53.872779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.386 [2024-10-11 11:59:53.872792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.386 [2024-10-11 11:59:53.872802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.386 [2024-10-11 11:59:53.872816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.386 [2024-10-11 11:59:53.872826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.386 [2024-10-11 11:59:53.872839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.386 [2024-10-11 11:59:53.872849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.386 [2024-10-11 11:59:53.872863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.386 [2024-10-11 11:59:53.872873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.386 [2024-10-11 11:59:53.872886] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.386 [2024-10-11 11:59:53.872896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.386 [2024-10-11 11:59:53.872909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.386 [2024-10-11 11:59:53.872919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.386 [2024-10-11 11:59:53.872934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.386 [2024-10-11 11:59:53.872945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.386 [2024-10-11 11:59:53.872958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.386 [2024-10-11 11:59:53.872968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.386 [2024-10-11 11:59:53.872982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.386 [2024-10-11 11:59:53.872992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.386 [2024-10-11 11:59:53.873005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.386 [2024-10-11 11:59:53.873016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.386 [2024-10-11 11:59:53.873029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.386 [2024-10-11 11:59:53.873040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.387 [2024-10-11 11:59:53.873053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.387 [2024-10-11 11:59:53.873066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.387 [2024-10-11 11:59:53.873080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.387 [2024-10-11 11:59:53.873090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.387 [2024-10-11 11:59:53.873104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.387 [2024-10-11 11:59:53.873114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.387 [2024-10-11 11:59:53.873127] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.387 [2024-10-11 11:59:53.873138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.387 [2024-10-11 11:59:53.873151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.387 [2024-10-11 11:59:53.873161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.387 [2024-10-11 11:59:53.873175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.387 [2024-10-11 11:59:53.873185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.387 [2024-10-11 11:59:53.873198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.387 [2024-10-11 11:59:53.873208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.387 [2024-10-11 11:59:53.873222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.387 [2024-10-11 11:59:53.873234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.387 [2024-10-11 11:59:53.873248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.387 [2024-10-11 11:59:53.873258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.387 [2024-10-11 11:59:53.873271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.387 [2024-10-11 11:59:53.873282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.387 [2024-10-11 11:59:53.873295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.387 [2024-10-11 11:59:53.873305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.387 [2024-10-11 11:59:53.873319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.387 [2024-10-11 11:59:53.873329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.387 [2024-10-11 11:59:53.873342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.387 [2024-10-11 11:59:53.873353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.387 [2024-10-11 11:59:53.873366] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.387 [2024-10-11 11:59:53.873377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.387 [2024-10-11 11:59:53.873391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.387 [2024-10-11 11:59:53.873402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.387 [2024-10-11 11:59:53.873415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.387 [2024-10-11 11:59:53.873425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.387 [2024-10-11 11:59:53.873439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.387 [2024-10-11 11:59:53.873449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.387 [2024-10-11 11:59:53.875214] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:22:51.387 [2024-10-11 11:59:53.875248] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:22:51.387 [2024-10-11 11:59:53.875617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:51.387 [2024-10-11 11:59:53.875637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b045c0 with addr=10.0.0.2, port=4420 00:22:51.387 [2024-10-11 11:59:53.875648] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b045c0 is same with the state(6) to be set 00:22:51.387 [2024-10-11 11:59:53.875965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:51.387 [2024-10-11 11:59:53.875979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ad63f0 with addr=10.0.0.2, port=4420 00:22:51.387 [2024-10-11 11:59:53.875989] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad63f0 is same with the state(6) to be set 00:22:51.387 [2024-10-11 11:59:53.876008] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16ac470 (9): Bad file descriptor 00:22:51.387 [2024-10-11 11:59:53.876023] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16a2f20 (9): Bad file descriptor 00:22:51.387 [2024-10-11 11:59:53.876084] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:51.387 [2024-10-11 11:59:53.876101] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:51.387 [2024-10-11 11:59:53.876121] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:51.387 [2024-10-11 11:59:53.876136] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:51.387 [2024-10-11 11:59:53.876232] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:51.387 [2024-10-11 11:59:53.879044] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:51.387 [2024-10-11 11:59:53.879093] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:22:51.387 [2024-10-11 11:59:53.879108] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:22:51.387 [2024-10-11 11:59:53.879477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:51.387 [2024-10-11 11:59:53.879520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a1700 with addr=10.0.0.2, port=4420 00:22:51.387 [2024-10-11 11:59:53.879533] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a1700 is same with the state(6) to be set 00:22:51.387 [2024-10-11 11:59:53.879745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:51.387 [2024-10-11 11:59:53.879758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afb020 with addr=10.0.0.2, port=4420 00:22:51.387 [2024-10-11 11:59:53.879767] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afb020 is same with the state(6) to be set 00:22:51.387 [2024-10-11 11:59:53.879781] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b045c0 (9): Bad file descriptor 00:22:51.387 [2024-10-11 11:59:53.879795] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ad63f0 (9): Bad file descriptor 00:22:51.387 [2024-10-11 11:59:53.879805] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:51.387 [2024-10-11 11:59:53.879814] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:51.387 [2024-10-11 11:59:53.879824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:51.387 [2024-10-11 11:59:53.879842] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:51.387 [2024-10-11 11:59:53.879851] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:22:51.387 [2024-10-11 11:59:53.879858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:22:51.387 [2024-10-11 11:59:53.879878] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:51.387 [2024-10-11 11:59:53.879894] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:51.387 [2024-10-11 11:59:53.880268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.387 [2024-10-11 11:59:53.880284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.387 [2024-10-11 11:59:53.880301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.387 [2024-10-11 11:59:53.880315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.387 [2024-10-11 11:59:53.880327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.387 [2024-10-11 11:59:53.880335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.387 [2024-10-11 11:59:53.880346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.387 [2024-10-11 11:59:53.880354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.387 [2024-10-11 11:59:53.880365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.387 [2024-10-11 11:59:53.880373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.387 [2024-10-11 11:59:53.880384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.387 [2024-10-11 11:59:53.880392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.387 [2024-10-11 11:59:53.880403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.387 [2024-10-11 11:59:53.880411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.388 [2024-10-11 11:59:53.880422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.388 [2024-10-11 11:59:53.880430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.388 [2024-10-11 11:59:53.880441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.388 [2024-10-11 11:59:53.880449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.388 [2024-10-11 11:59:53.880460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.388 [2024-10-11 11:59:53.880468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.388 [2024-10-11 
11:59:53.880478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.388 [2024-10-11 11:59:53.880487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.388 [2024-10-11 11:59:53.880497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.388 [2024-10-11 11:59:53.880505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.388 [2024-10-11 11:59:53.880516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.388 [2024-10-11 11:59:53.880524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.388 [2024-10-11 11:59:53.880535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.388 [2024-10-11 11:59:53.880543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.388 [2024-10-11 11:59:53.880556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.388 [2024-10-11 11:59:53.880565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.388 [2024-10-11 11:59:53.880575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.388 [2024-10-11 11:59:53.880584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.388 [2024-10-11 11:59:53.880594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.388 [2024-10-11 11:59:53.880603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.388 [2024-10-11 11:59:53.880613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.388 [2024-10-11 11:59:53.880622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.388 [2024-10-11 11:59:53.880633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.388 [2024-10-11 11:59:53.880641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.388 [2024-10-11 11:59:53.880651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.388 [2024-10-11 11:59:53.880659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.388 [2024-10-11 11:59:53.880670] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.388 [2024-10-11 11:59:53.880678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.388 [2024-10-11 11:59:53.880689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.388 [2024-10-11 11:59:53.880697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.388 [2024-10-11 11:59:53.880708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.388 [2024-10-11 11:59:53.880716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.388 [2024-10-11 11:59:53.880727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.388 [2024-10-11 11:59:53.880735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.388 [2024-10-11 11:59:53.880746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.388 [2024-10-11 11:59:53.880754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.388 [2024-10-11 11:59:53.880765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.388 [2024-10-11 11:59:53.880773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.388 [2024-10-11 11:59:53.880784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.388 [2024-10-11 11:59:53.880794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.388 [2024-10-11 11:59:53.880805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.388 [2024-10-11 11:59:53.880813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.388 [2024-10-11 11:59:53.880824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.388 [2024-10-11 11:59:53.880832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.388 [2024-10-11 11:59:53.880843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.388 [2024-10-11 11:59:53.880851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.388 [2024-10-11 11:59:53.880862] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.388 [2024-10-11 11:59:53.880871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.388 [2024-10-11 11:59:53.880881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.388 [2024-10-11 11:59:53.880890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.388 [2024-10-11 11:59:53.880900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.388 [2024-10-11 11:59:53.880908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.388 [2024-10-11 11:59:53.880919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.388 [2024-10-11 11:59:53.880928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.388 [2024-10-11 11:59:53.880938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.388 [2024-10-11 11:59:53.880947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.388 [2024-10-11 11:59:53.880957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.388 [2024-10-11 11:59:53.880966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.388 [2024-10-11 11:59:53.880976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.388 [2024-10-11 11:59:53.880985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.388 [2024-10-11 11:59:53.880995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.388 [2024-10-11 11:59:53.881004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.388 [2024-10-11 11:59:53.881014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.388 [2024-10-11 11:59:53.881023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.388 [2024-10-11 11:59:53.881034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.388 [2024-10-11 11:59:53.881043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.388 [2024-10-11 11:59:53.881054] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.388 [2024-10-11 11:59:53.881067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.388 [2024-10-11 11:59:53.881078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.388 [2024-10-11 11:59:53.881087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.388 [2024-10-11 11:59:53.881097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.388 [2024-10-11 11:59:53.881105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.388 [2024-10-11 11:59:53.881116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.388 [2024-10-11 11:59:53.881125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.388 [2024-10-11 11:59:53.881135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.388 [2024-10-11 11:59:53.881144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.388 [2024-10-11 11:59:53.881155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.388 [2024-10-11 11:59:53.881164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.389 [2024-10-11 11:59:53.881175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.389 [2024-10-11 11:59:53.881184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.389 [2024-10-11 11:59:53.881195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.389 [2024-10-11 11:59:53.881203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.389 [2024-10-11 11:59:53.881214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.389 [2024-10-11 11:59:53.881222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.389 [2024-10-11 11:59:53.881233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.389 [2024-10-11 11:59:53.881242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.389 [2024-10-11 11:59:53.881252] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.389 [2024-10-11 11:59:53.881261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.389 [2024-10-11 11:59:53.881272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.389 [2024-10-11 11:59:53.881280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.389 [2024-10-11 11:59:53.881292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.389 [2024-10-11 11:59:53.881301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.389 [2024-10-11 11:59:53.881311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.389 [2024-10-11 11:59:53.881319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.389 [2024-10-11 11:59:53.881330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.389 [2024-10-11 11:59:53.881338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.389 [2024-10-11 11:59:53.881349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.389 [2024-10-11 11:59:53.881357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.389 [2024-10-11 11:59:53.881368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.389 [2024-10-11 11:59:53.881376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.389 [2024-10-11 11:59:53.881387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.389 [2024-10-11 11:59:53.881395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.389 [2024-10-11 11:59:53.881406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.389 [2024-10-11 11:59:53.881414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.389 [2024-10-11 11:59:53.881425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.389 [2024-10-11 11:59:53.881433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.389 [2024-10-11 11:59:53.881443] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.389 [2024-10-11 11:59:53.881452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.389 [2024-10-11 11:59:53.881462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.389 [2024-10-11 11:59:53.881471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.389 [2024-10-11 11:59:53.881482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.389 [2024-10-11 11:59:53.881490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.389 [2024-10-11 11:59:53.881500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.389 [2024-10-11 11:59:53.881509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.389 [2024-10-11 11:59:53.881518] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aac290 is same with the state(6) to be set 00:22:51.389 [2024-10-11 11:59:53.883027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.389 [2024-10-11 11:59:53.883044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.389 [2024-10-11 11:59:53.883058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.389 [2024-10-11 11:59:53.883076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.389 [2024-10-11 11:59:53.883087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.389 [2024-10-11 11:59:53.883096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.389 [2024-10-11 11:59:53.883107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.389 [2024-10-11 11:59:53.883116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.389 [2024-10-11 11:59:53.883126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.389 [2024-10-11 11:59:53.883135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.389 [2024-10-11 11:59:53.883146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.389 [2024-10-11 11:59:53.883154] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.389 [2024-10-11 11:59:53.883165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.389 [2024-10-11 11:59:53.883173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.389 [2024-10-11 11:59:53.883184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.389 [2024-10-11 11:59:53.883193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.389 [2024-10-11 11:59:53.883203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.389 [2024-10-11 11:59:53.883212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.389 [2024-10-11 11:59:53.883223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.389 [2024-10-11 11:59:53.883231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.389 [2024-10-11 11:59:53.883242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.389 [2024-10-11 11:59:53.883251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.389 [2024-10-11 11:59:53.883261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.389 [2024-10-11 11:59:53.883270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.389 [2024-10-11 11:59:53.883281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.389 [2024-10-11 11:59:53.883293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.389 [2024-10-11 11:59:53.883304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.389 [2024-10-11 11:59:53.883312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.389 [2024-10-11 11:59:53.883323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.389 [2024-10-11 11:59:53.883332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.389 [2024-10-11 11:59:53.883343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.389 [2024-10-11 11:59:53.883352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.389 [2024-10-11 11:59:53.883362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.389 [2024-10-11 11:59:53.883371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.389 [2024-10-11 11:59:53.883381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.389 [2024-10-11 11:59:53.883390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.389 [2024-10-11 11:59:53.883401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.389 [2024-10-11 11:59:53.883409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.389 [2024-10-11 11:59:53.883420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.389 [2024-10-11 11:59:53.883429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.389 [2024-10-11 11:59:53.883440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.389 [2024-10-11 11:59:53.883448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.389 [2024-10-11 11:59:53.883459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.389 [2024-10-11 11:59:53.883468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.389 [2024-10-11 11:59:53.883478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.389 [2024-10-11 11:59:53.883487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.390 [2024-10-11 11:59:53.883497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.390 [2024-10-11 11:59:53.883506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.390 [2024-10-11 11:59:53.883517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.390 [2024-10-11 11:59:53.883525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.390 [2024-10-11 11:59:53.883538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.390 [2024-10-11 11:59:53.883546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.390 [2024-10-11 11:59:53.883557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.390 [2024-10-11 11:59:53.883566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.390 [2024-10-11 11:59:53.883577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.390 [2024-10-11 11:59:53.883585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.390 [2024-10-11 11:59:53.883596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.390 [2024-10-11 11:59:53.883604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.390 [2024-10-11 11:59:53.883615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.390 [2024-10-11 11:59:53.883624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.390 [2024-10-11 11:59:53.883634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.390 [2024-10-11 11:59:53.883643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.390 [2024-10-11 11:59:53.883654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.390 [2024-10-11 11:59:53.883662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.390 [2024-10-11 11:59:53.883673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.390 [2024-10-11 11:59:53.883682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.390 [2024-10-11 11:59:53.883692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.390 [2024-10-11 11:59:53.883701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.390 [2024-10-11 11:59:53.883712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.390 [2024-10-11 11:59:53.883720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.390 [2024-10-11 11:59:53.883731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.390 [2024-10-11 11:59:53.883739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:51.390 [2024-10-11 11:59:53.883750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.390 [2024-10-11 11:59:53.883758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.390 [2024-10-11 11:59:53.883769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.390 [2024-10-11 11:59:53.883779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.390 [2024-10-11 11:59:53.883791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.390 [2024-10-11 11:59:53.883799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.390 [2024-10-11 11:59:53.883810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.390 [2024-10-11 11:59:53.883818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.390 [2024-10-11 11:59:53.883829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.390 [2024-10-11 11:59:53.883837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.390 [2024-10-11 11:59:53.883848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.390 [2024-10-11 11:59:53.883857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.390 [2024-10-11 11:59:53.883867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.390 [2024-10-11 11:59:53.883876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.390 [2024-10-11 11:59:53.883887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.390 [2024-10-11 11:59:53.883895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.390 [2024-10-11 11:59:53.883906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.390 [2024-10-11 11:59:53.883914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.390 [2024-10-11 11:59:53.883925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.390 [2024-10-11 11:59:53.883933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:51.390 [2024-10-11 11:59:53.883944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.390 [2024-10-11 11:59:53.883953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.390 [2024-10-11 11:59:53.883964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.390 [2024-10-11 11:59:53.883972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.390 [2024-10-11 11:59:53.883983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.390 [2024-10-11 11:59:53.883991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.390 [2024-10-11 11:59:53.884003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.390 [2024-10-11 11:59:53.884011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.390 [2024-10-11 11:59:53.884023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.390 [2024-10-11 11:59:53.884033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.390 [2024-10-11 11:59:53.884044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.390 [2024-10-11 11:59:53.884052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.390 [2024-10-11 11:59:53.884067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.390 [2024-10-11 11:59:53.884076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.390 [2024-10-11 11:59:53.884087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.391 [2024-10-11 11:59:53.884096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.391 [2024-10-11 11:59:53.884106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.391 [2024-10-11 11:59:53.884115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.391 [2024-10-11 11:59:53.884126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.391 [2024-10-11 11:59:53.884134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.391 [2024-10-11 
11:59:53.884145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.391 [2024-10-11 11:59:53.884154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.391 [2024-10-11 11:59:53.884165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.391 [2024-10-11 11:59:53.884173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.391 [2024-10-11 11:59:53.884184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.391 [2024-10-11 11:59:53.884193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.391 [2024-10-11 11:59:53.884204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.391 [2024-10-11 11:59:53.884212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.391 [2024-10-11 11:59:53.884223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.391 [2024-10-11 11:59:53.884231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.391 [2024-10-11 11:59:53.884242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.391 [2024-10-11 11:59:53.884250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.391 [2024-10-11 11:59:53.884261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.391 [2024-10-11 11:59:53.884275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.391 [2024-10-11 11:59:53.884286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.391 [2024-10-11 11:59:53.884294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.391 [2024-10-11 11:59:53.884304] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab2c30 is same with the state(6) to be set 00:22:51.391 [2024-10-11 11:59:53.886316] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:51.391 [2024-10-11 11:59:53.886338] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
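The long runs of NOTICE lines above are the normal completion path when a submission queue is deleted during shutdown: every READ still outstanding on qid:1 is completed with ABORTED - SQ DELETION, one pair of messages per command (cid 0 through 63), which is why the same two lines repeat for each controller. When scanning a capture like this it is usually enough to tally the aborts rather than read them individually; for example (the log file name here is a placeholder):

    # Collapse the repeated abort notices into per-status/per-queue counts.
    grep -o 'ABORTED - SQ DELETION ([0-9/]*) qid:[0-9]*' nvmf_shutdown_tc3.log |
        sort | uniq -c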
00:22:51.391 [2024-10-11 11:59:53.886348] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:22:51.391 task offset: 31616 on job bdev=Nvme1n1 fails
00:22:51.391
00:22:51.391 Latency(us)
00:22:51.391 [2024-10-11T09:59:54.094Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:51.391 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:51.391 Job: Nvme1n1 ended in about 0.93 seconds with error
00:22:51.391 Verification LBA range: start 0x0 length 0x400
00:22:51.391 Nvme1n1 : 0.93 206.72 12.92 68.91 0.00 229278.19 4560.21 244667.73
00:22:51.391 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:51.391 Job: Nvme2n1 ended in about 0.95 seconds with error
00:22:51.391 Verification LBA range: start 0x0 length 0x400
00:22:51.391 Nvme2n1 : 0.95 202.17 12.64 67.39 0.00 229743.36 22500.69 241172.48
00:22:51.391 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:51.391 Job: Nvme3n1 ended in about 0.97 seconds with error
00:22:51.391 Verification LBA range: start 0x0 length 0x400
00:22:51.391 Nvme3n1 : 0.97 202.77 12.67 66.21 0.00 225724.68 17585.49 246415.36
00:22:51.391 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:51.391 Job: Nvme4n1 ended in about 0.98 seconds with error
00:22:51.391 Verification LBA range: start 0x0 length 0x400
00:22:51.391 Nvme4n1 : 0.98 196.10 12.26 65.37 0.00 227689.60 16820.91 244667.73
00:22:51.391 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:51.391 Job: Nvme5n1 ended in about 0.96 seconds with error
00:22:51.391 Verification LBA range: start 0x0 length 0x400
00:22:51.391 Nvme5n1 : 0.96 199.92 12.49 66.64 0.00 218247.47 20534.61 239424.85
00:22:51.391 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:51.391 Job: Nvme6n1 ended in about 0.97 seconds with error
00:22:51.391 Verification LBA range: start 0x0 length 0x400
00:22:51.391 Nvme6n1 : 0.97 197.15 12.32 7.19 0.00 275656.55 17476.27 269134.51
00:22:51.391 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:51.391 Job: Nvme7n1 ended in about 0.96 seconds with error
00:22:51.391 Verification LBA range: start 0x0 length 0x400
00:22:51.391 Nvme7n1 : 0.96 200.42 12.53 66.81 0.00 208185.60 19333.12 242920.11
00:22:51.391 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:51.391 Job: Nvme8n1 ended in about 0.98 seconds with error
00:22:51.391 Verification LBA range: start 0x0 length 0x400
00:22:51.391 Nvme8n1 : 0.98 196.88 12.30 5.13 0.00 266369.05 12124.16 256901.12
00:22:51.391 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:51.391 Job: Nvme9n1 ended in about 0.98 seconds with error
00:22:51.391 Verification LBA range: start 0x0 length 0x400
00:22:51.391 Nvme9n1 : 0.98 130.36 8.15 65.18 0.00 272925.30 15947.09 253405.87
00:22:51.391 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:51.391 Job: Nvme10n1 ended in about 0.97 seconds with error
00:22:51.391 Verification LBA range: start 0x0 length 0x400
00:22:51.391 Nvme10n1 : 0.97 131.80 8.24 65.90 0.00 263195.02 19333.12 267386.88
00:22:51.391 [2024-10-11T09:59:54.094Z] ===================================================================================================================
00:22:51.391 [2024-10-11T09:59:54.094Z] Total : 1864.29 116.52 544.72 0.00 238758.10 4560.21 269134.51
00:22:51.391
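The bdevperf summary above lists, per backing device, the runtime in seconds, IOPS, MiB/s, failed and timed-out IOs per second, and the average/min/max latency in microseconds; every job ends "with error", consistent with the target being shut down underneath it during the run. As a quick consistency check (a minimal sketch, with the column meanings taken from the header and the 64 KiB IO size from the job lines), MiB/s should equal IOPS times the IO size, and the per-device IOPS should sum to the reported total:

    # Recompute MiB/s from IOPS * 64 KiB and re-add the IOPS column (values copied
    # from the table above) to confirm the summary is internally consistent.
    awk 'BEGIN {
        io = 65536
        split("206.72 202.17 202.77 196.10 199.92 197.15 200.42 196.88 130.36 131.80", iops, " ")
        for (i = 1; i <= 10; i++) { printf "Nvme%dn1: %.2f MiB/s\n", i, iops[i] * io / 1048576; total += iops[i] }
        printf "IOPS total: %.2f (table reports 1864.29)\n", total
    }'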
[2024-10-11 11:59:53.911272] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:51.391 [2024-10-11 11:59:53.911305] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:22:51.391 [2024-10-11 11:59:53.911728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:51.391 [2024-10-11 11:59:53.911744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c2610 with addr=10.0.0.2, port=4420 00:22:51.391 [2024-10-11 11:59:53.911753] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c2610 is same with the state(6) to be set 00:22:51.391 [2024-10-11 11:59:53.912077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:51.391 [2024-10-11 11:59:53.912088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afcde0 with addr=10.0.0.2, port=4420 00:22:51.391 [2024-10-11 11:59:53.912095] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afcde0 is same with the state(6) to be set 00:22:51.391 [2024-10-11 11:59:53.912108] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16a1700 (9): Bad file descriptor 00:22:51.391 [2024-10-11 11:59:53.912120] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afb020 (9): Bad file descriptor 00:22:51.391 [2024-10-11 11:59:53.912129] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:22:51.391 [2024-10-11 11:59:53.912136] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:22:51.391 [2024-10-11 11:59:53.912144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:22:51.391 [2024-10-11 11:59:53.912161] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:22:51.391 [2024-10-11 11:59:53.912168] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:22:51.391 [2024-10-11 11:59:53.912175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:22:51.391 [2024-10-11 11:59:53.912219] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:51.391 [2024-10-11 11:59:53.912234] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:51.391 [2024-10-11 11:59:53.912844] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:51.391 [2024-10-11 11:59:53.912854] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:51.391 [2024-10-11 11:59:53.913238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:51.391 [2024-10-11 11:59:53.913252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ad60d0 with addr=10.0.0.2, port=4420 00:22:51.391 [2024-10-11 11:59:53.913259] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad60d0 is same with the state(6) to be set 00:22:51.391 [2024-10-11 11:59:53.913662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:51.391 [2024-10-11 11:59:53.913672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afde60 with addr=10.0.0.2, port=4420 00:22:51.391 [2024-10-11 11:59:53.913679] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afde60 is same with the state(6) to be set 00:22:51.391 [2024-10-11 11:59:53.913688] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c2610 (9): Bad file descriptor 00:22:51.391 [2024-10-11 11:59:53.913698] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afcde0 (9): Bad file descriptor 00:22:51.391 [2024-10-11 11:59:53.913711] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:22:51.391 [2024-10-11 11:59:53.913717] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:22:51.391 [2024-10-11 11:59:53.913724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:22:51.391 [2024-10-11 11:59:53.913736] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:22:51.391 [2024-10-11 11:59:53.913743] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:51.391 [2024-10-11 11:59:53.913750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:22:51.391 [2024-10-11 11:59:53.913774] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:51.391 [2024-10-11 11:59:53.913785] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:51.391 [2024-10-11 11:59:53.913803] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:51.391 [2024-10-11 11:59:53.913814] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:51.391 [2024-10-11 11:59:53.913826] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:51.392 [2024-10-11 11:59:53.913836] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:51.392 [2024-10-11 11:59:53.914405] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:22:51.392 [2024-10-11 11:59:53.914417] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:51.392 [2024-10-11 11:59:53.914439] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:51.392 [2024-10-11 11:59:53.914447] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:51.392 [2024-10-11 11:59:53.914475] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ad60d0 (9): Bad file descriptor 00:22:51.392 [2024-10-11 11:59:53.914485] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afde60 (9): Bad file descriptor 00:22:51.392 [2024-10-11 11:59:53.914493] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:22:51.392 [2024-10-11 11:59:53.914500] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:22:51.392 [2024-10-11 11:59:53.914507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:22:51.392 [2024-10-11 11:59:53.914517] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:22:51.392 [2024-10-11 11:59:53.914523] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:22:51.392 [2024-10-11 11:59:53.914530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:22:51.392 [2024-10-11 11:59:53.914586] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:22:51.392 [2024-10-11 11:59:53.914596] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:22:51.392 [2024-10-11 11:59:53.914604] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:51.392 [2024-10-11 11:59:53.914610] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:51.392 [2024-10-11 11:59:53.914808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:51.392 [2024-10-11 11:59:53.914820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a2f20 with addr=10.0.0.2, port=4420 00:22:51.392 [2024-10-11 11:59:53.914828] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2f20 is same with the state(6) to be set 00:22:51.392 [2024-10-11 11:59:53.915143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:51.392 [2024-10-11 11:59:53.915154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16ac470 with addr=10.0.0.2, port=4420 00:22:51.392 [2024-10-11 11:59:53.915161] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ac470 is same with the state(6) to be set 00:22:51.392 [2024-10-11 11:59:53.915169] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:22:51.392 [2024-10-11 11:59:53.915175] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:22:51.392 [2024-10-11 11:59:53.915182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
00:22:51.392 [2024-10-11 11:59:53.915192] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:22:51.392 [2024-10-11 11:59:53.915198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:22:51.392 [2024-10-11 11:59:53.915205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:22:51.392 [2024-10-11 11:59:53.915246] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:51.392 [2024-10-11 11:59:53.915254] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:51.392 [2024-10-11 11:59:53.915607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:51.392 [2024-10-11 11:59:53.915617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ad63f0 with addr=10.0.0.2, port=4420 00:22:51.392 [2024-10-11 11:59:53.915624] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad63f0 is same with the state(6) to be set 00:22:51.392 [2024-10-11 11:59:53.915837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:51.392 [2024-10-11 11:59:53.915846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b045c0 with addr=10.0.0.2, port=4420 00:22:51.392 [2024-10-11 11:59:53.915854] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b045c0 is same with the state(6) to be set 00:22:51.392 [2024-10-11 11:59:53.915863] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16a2f20 (9): Bad file descriptor 00:22:51.392 [2024-10-11 11:59:53.915873] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16ac470 (9): Bad file descriptor 00:22:51.392 [2024-10-11 11:59:53.915901] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ad63f0 (9): Bad file descriptor 00:22:51.392 [2024-10-11 11:59:53.915911] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b045c0 (9): Bad file descriptor 00:22:51.392 [2024-10-11 11:59:53.915919] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:51.392 [2024-10-11 11:59:53.915926] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:22:51.392 [2024-10-11 11:59:53.915933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:22:51.392 [2024-10-11 11:59:53.915942] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:51.392 [2024-10-11 11:59:53.915949] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:51.392 [2024-10-11 11:59:53.915956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:51.392 [2024-10-11 11:59:53.915985] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:51.392 [2024-10-11 11:59:53.915992] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
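The per-subsystem sequence above (resetting controller, controller reinitialization failed, in failed state) repeats for each of the ten cnode subsystems as the target disappears. A short sketch that summarizes which NQNs gave up, again run against the hypothetical shutdown_tc3.log capture mentioned earlier:

# Tally the reconnect failures per subsystem NQN.
grep -o '\[nqn\.2016-06\.io\.spdk:cnode[0-9]*\] controller reinitialization failed' shutdown_tc3.log \
  | sort | uniq -c | sort -rn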
00:22:51.392 [2024-10-11 11:59:53.915999] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:22:51.392 [2024-10-11 11:59:53.916008] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:22:51.392 [2024-10-11 11:59:53.916015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:22:51.392 [2024-10-11 11:59:53.916024] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:22:51.392 [2024-10-11 11:59:53.916031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:22:51.392 [2024-10-11 11:59:53.916037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:22:51.392 [2024-10-11 11:59:53.916069] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:51.392 [2024-10-11 11:59:53.916076] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:51.654 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:22:52.598 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1997537 00:22:52.598 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:22:52.598 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1997537 00:22:52.598 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:22:52.598 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:52.598 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:22:52.598 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:52.598 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 1997537 00:22:52.598 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:22:52.598 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:52.598 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:22:52.598 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:22:52.598 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:22:52.598 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:52.598 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:22:52.598 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:52.598 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:52.598 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:52.598 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:52.598 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:52.598 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:22:52.599 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:52.599 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:22:52.599 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:52.599 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:52.599 rmmod nvme_tcp 00:22:52.599 rmmod nvme_fabrics 00:22:52.599 rmmod nvme_keyring 00:22:52.599 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:52.599 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:22:52.599 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:22:52.599 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@515 -- # '[' -n 1997314 ']' 00:22:52.599 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # killprocess 1997314 00:22:52.599 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 1997314 ']' 00:22:52.599 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 1997314 00:22:52.599 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1997314) - No such process 00:22:52.599 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@977 -- # echo 'Process with pid 1997314 is not found' 00:22:52.599 Process with pid 1997314 is not found 00:22:52.599 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:52.599 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:52.599 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:52.599 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:22:52.599 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-save 00:22:52.599 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:52.599 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-restore 00:22:52.599 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:52.599 11:59:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:52.599 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:52.599 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:52.599 11:59:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:55.147 00:22:55.147 real 0m7.720s 00:22:55.147 user 0m18.618s 00:22:55.147 sys 0m1.254s 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:55.147 ************************************ 00:22:55.147 END TEST nvmf_shutdown_tc3 00:22:55.147 ************************************ 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:55.147 ************************************ 00:22:55.147 START TEST nvmf_shutdown_tc4 00:22:55.147 ************************************ 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc4 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:55.147 11:59:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:55.147 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:55.147 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:55.148 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:55.148 11:59:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:55.148 Found net devices under 0000:31:00.0: cvl_0_0 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:55.148 Found net devices under 0000:31:00.1: cvl_0_1 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # is_hw=yes 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:55.148 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:55.148 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.609 ms 00:22:55.148 00:22:55.148 --- 10.0.0.2 ping statistics --- 00:22:55.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.148 rtt min/avg/max/mdev = 0.609/0.609/0.609/0.000 ms 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:55.148 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:55.148 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:22:55.148 00:22:55.148 --- 10.0.0.1 ping statistics --- 00:22:55.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.148 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@448 -- # return 0 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # nvmfpid=1998939 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # waitforlisten 1998939 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@831 -- # '[' -z 1998939 ']' 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:55.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:55.148 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:55.148 [2024-10-11 11:59:57.786418] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:22:55.148 [2024-10-11 11:59:57.786476] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:55.409 [2024-10-11 11:59:57.876943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:55.409 [2024-10-11 11:59:57.929806] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:55.409 [2024-10-11 11:59:57.929846] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:55.409 [2024-10-11 11:59:57.929855] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:55.409 [2024-10-11 11:59:57.929862] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:55.409 [2024-10-11 11:59:57.929868] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:55.409 [2024-10-11 11:59:57.932289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:55.409 [2024-10-11 11:59:57.932451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:55.409 [2024-10-11 11:59:57.932611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:55.409 [2024-10-11 11:59:57.932612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:55.979 11:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:55.979 11:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # return 0 00:22:55.979 11:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:55.979 11:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:55.979 11:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:55.979 11:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:55.979 11:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:55.979 11:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.979 11:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:55.979 [2024-10-11 11:59:58.636827] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:55.979 11:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.979 11:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:55.979 11:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:55.979 11:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:55.979 11:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:55.979 11:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:55.979 11:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:55.979 11:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:55.979 11:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:55.979 11:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:55.979 11:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:55.979 11:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:55.979 11:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:55.979 11:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:55.979 11:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:55.979 11:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:55.979 11:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:55.979 11:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:56.240 11:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:56.240 11:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:56.240 11:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:56.240 11:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:56.240 11:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:56.240 11:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:56.240 11:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:56.240 11:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:56.240 11:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:56.240 11:59:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.240 11:59:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:56.240 Malloc1 00:22:56.240 [2024-10-11 11:59:58.753293] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:56.240 Malloc2 00:22:56.240 Malloc3 00:22:56.240 Malloc4 00:22:56.240 Malloc5 00:22:56.240 Malloc6 00:22:56.500 Malloc7 00:22:56.500 Malloc8 00:22:56.500 Malloc9 00:22:56.500 Malloc10 00:22:56.500 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.500 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:56.500 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:56.500 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:56.500 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1999322 00:22:56.500 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:22:56.500 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:22:56.760 [2024-10-11 11:59:59.229939] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
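At this point shutdown_tc4 has the target listening with ten Malloc-backed subsystems and launches spdk_nvme_perf against it; the target is then killed mid-run (killprocess 1998939 below), so the flood of "Write completed with error" and CQ transport error entries that follows is the expected outcome of the test, not a perf defect. For reference, the same workload can be started stand-alone with the command taken from the trace: queue depth 128 (-q), 45056-byte random writes (-o, -w) for 20 seconds (-t) over the TCP transport ID shown (-r); the remaining flags are copied verbatim from the trace rather than interpreted.

# Stand-alone rerun of the perf workload launched by shutdown_tc4 above.
# Run it only against a disposable target: the test deliberately kills the
# target while this workload is in flight.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
    -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4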
00:23:02.053 12:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:02.053 12:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1998939 00:23:02.053 12:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 1998939 ']' 00:23:02.053 12:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 1998939 00:23:02.053 12:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # uname 00:23:02.053 12:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:02.053 12:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1998939 00:23:02.053 12:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:02.053 12:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:02.053 12:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1998939' 00:23:02.053 killing process with pid 1998939 00:23:02.053 12:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@969 -- # kill 1998939 00:23:02.053 12:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@974 -- # wait 1998939 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 starting I/O failed: -6 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 starting I/O failed: -6 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 starting I/O failed: -6 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 starting I/O failed: -6 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 starting I/O failed: -6 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 starting I/O failed: -6 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 starting 
I/O failed: -6 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 starting I/O failed: -6 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 starting I/O failed: -6 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 starting I/O failed: -6 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 [2024-10-11 12:00:04.237278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:02.053 starting I/O failed: -6 00:23:02.053 starting I/O failed: -6 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 starting I/O failed: -6 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 starting I/O failed: -6 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 starting I/O failed: -6 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 starting I/O failed: -6 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 starting I/O failed: -6 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 starting I/O failed: -6 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 starting I/O failed: -6 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 starting I/O failed: -6 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 starting I/O failed: -6 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 starting I/O failed: -6 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 starting I/O failed: -6 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 starting I/O failed: -6 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 starting I/O failed: -6 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 starting I/O failed: -6 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 starting I/O failed: -6 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 starting I/O failed: -6 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 starting I/O failed: -6 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 
[2024-10-11 12:00:04.238170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 starting I/O failed: -6 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 starting I/O failed: -6 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 starting I/O failed: -6 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 [2024-10-11 12:00:04.238354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e8f0 is same with the state(6) to be set 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 [2024-10-11 12:00:04.238384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e8f0 is same with the state(6) to be set 00:23:02.053 starting I/O failed: -6 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 starting I/O failed: -6 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 starting I/O failed: -6 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 starting I/O failed: -6 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 starting I/O failed: -6 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 starting I/O failed: -6 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 starting I/O failed: -6 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 starting I/O failed: -6 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 starting I/O failed: -6 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 starting I/O failed: -6 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 starting I/O failed: -6 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 starting I/O failed: -6 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 starting I/O failed: -6 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 starting I/O failed: -6 00:23:02.053 Write completed with error (sct=0, sc=8) 00:23:02.053 starting I/O failed: -6 00:23:02.054 [2024-10-11 12:00:04.238664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4edc0 is same with Write completed with error (sct=0, sc=8) 00:23:02.054 the state(6) to be set 00:23:02.054 [2024-10-11 12:00:04.238697] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4edc0 is same with the state(6) to be set 00:23:02.054 [2024-10-11 12:00:04.238703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4edc0 is same with the state(6) to be set 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, 
sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 [2024-10-11 12:00:04.239030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31810 is same with the state(6) to be set 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 [2024-10-11 12:00:04.239053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31810 is same with the state(6) to be set 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 [2024-10-11 12:00:04.239059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31810 is same with the state(6) to be set 00:23:02.054 [2024-10-11 12:00:04.239069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31810 is same with the state(6) to be set 00:23:02.054 [2024-10-11 12:00:04.239074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31810 is same with the state(6) to be set 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 [2024-10-11 12:00:04.239108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 [2024-10-11 12:00:04.239244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e420 is same with the state(6) to be set 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 [2024-10-11 12:00:04.239271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e420 is same with the state(6) to be set 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 [2024-10-11 12:00:04.239280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e420 is same with the state(6) to be set 00:23:02.054 starting I/O failed: -6 00:23:02.054 [2024-10-11 12:00:04.239287] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e420 is same with the state(6) to be set 00:23:02.054 Write completed with error (sct=0, 
sc=8) 00:23:02.054 [2024-10-11 12:00:04.239294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4e420 is same with the state(6) to be set 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 
starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 [2024-10-11 12:00:04.240743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:23:02.054 NVMe io qpair process completion error 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.054 starting I/O failed: -6 00:23:02.054 Write completed with error (sct=0, sc=8) 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 Write 
completed with error (sct=0, sc=8) 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 [2024-10-11 12:00:04.241570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32a00 is same with the state(6) to be set 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 [2024-10-11 12:00:04.241585] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32a00 is same with the state(6) to be set 00:23:02.055 [2024-10-11 12:00:04.241590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32a00 is same with the state(6) to be set 00:23:02.055 [2024-10-11 12:00:04.241596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32a00 is same with the state(6) to be set 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 [2024-10-11 12:00:04.241601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32a00 is same with the state(6) to be set 00:23:02.055 starting I/O failed: -6 00:23:02.055 [2024-10-11 12:00:04.241606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32a00 is same with the state(6) to be set 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 [2024-10-11 12:00:04.241791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:02.055 [2024-10-11 12:00:04.241838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31b90 is same with the state(6) to be set 00:23:02.055 [2024-10-11 12:00:04.241858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31b90 is same with the state(6) to be set 00:23:02.055 [2024-10-11 12:00:04.241866] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31b90 is same with the state(6) to be set 00:23:02.055 [2024-10-11 12:00:04.241872] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31b90 is same with the state(6) to be set 00:23:02.055 [2024-10-11 12:00:04.241879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31b90 is same with the state(6) to be set 00:23:02.055 [2024-10-11 12:00:04.241886] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31b90 is same with the state(6) to be set 00:23:02.055 [2024-10-11 12:00:04.241893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31b90 is same with the state(6) to be set 00:23:02.055 starting I/O failed: -6 00:23:02.055 starting I/O failed: -6 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 Write completed with error 
(sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 [2024-10-11 12:00:04.242736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 
starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 [2024-10-11 12:00:04.243650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 
00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.055 starting I/O failed: -6 00:23:02.055 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 
00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 [2024-10-11 12:00:04.245029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:23:02.056 NVMe io qpair process completion error 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 Write completed 
with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 [2024-10-11 12:00:04.246281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O 
failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 [2024-10-11 12:00:04.247095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:02.056 starting I/O failed: -6 00:23:02.056 starting I/O failed: -6 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.056 starting I/O failed: -6 00:23:02.056 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 
starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 [2024-10-11 12:00:04.248229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 
Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.057 starting I/O failed: -6 00:23:02.057 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 [2024-10-11 12:00:04.249664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:23:02.058 NVMe io qpair process completion error 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, 
sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 [2024-10-11 12:00:04.251005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed 
with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 [2024-10-11 12:00:04.251818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting 
I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 [2024-10-11 12:00:04.252745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.058 Write completed with error (sct=0, sc=8) 00:23:02.058 starting I/O failed: -6 00:23:02.059 Write 
completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write 
completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 [2024-10-11 12:00:04.255233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:23:02.059 NVMe io qpair process completion error 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 
00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 [2024-10-11 12:00:04.256244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 [2024-10-11 12:00:04.257078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write completed with error (sct=0, sc=8) 00:23:02.059 starting I/O failed: -6 00:23:02.059 Write 
completed with error (sct=0, sc=8)
00:23:02.059-00:23:02.066 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 (repeated for every outstanding write on the failing qpairs; duplicate completion-error lines condensed)
00:23:02.060 [2024-10-11 12:00:04.257991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:23:02.060 [2024-10-11 12:00:04.259650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:23:02.060 NVMe io qpair process completion error
00:23:02.061 [2024-10-11 12:00:04.260766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:02.061 [2024-10-11 12:00:04.261672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:23:02.061 [2024-10-11 12:00:04.262590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:23:02.062 [2024-10-11 12:00:04.264205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:02.062 NVMe io qpair process completion error
00:23:02.062 [2024-10-11 12:00:04.265369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:23:02.062 [2024-10-11 12:00:04.266274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:02.063 [2024-10-11 12:00:04.267184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:02.063 [2024-10-11 12:00:04.269476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:23:02.063 NVMe io qpair process completion error
00:23:02.064 [2024-10-11 12:00:04.270721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:23:02.064 [2024-10-11 12:00:04.271562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:02.064 [2024-10-11 12:00:04.272480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:02.065 [2024-10-11 12:00:04.273913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:23:02.065 NVMe io qpair process completion error
00:23:02.065 [2024-10-11 12:00:04.275227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:02.065 [2024-10-11 12:00:04.276112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:02.066 [2024-10-11 12:00:04.277007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066
starting I/O failed: -6 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 starting I/O failed: -6 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 starting I/O failed: -6 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 starting I/O failed: -6 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 starting I/O failed: -6 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 starting I/O failed: -6 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 starting I/O failed: -6 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 starting I/O failed: -6 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 starting I/O failed: -6 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 starting I/O failed: -6 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 starting I/O failed: -6 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 starting I/O failed: -6 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 starting I/O failed: -6 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 starting I/O failed: -6 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 starting I/O failed: -6 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 starting I/O failed: -6 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 starting I/O failed: -6 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 starting I/O failed: -6 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 starting I/O failed: -6 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 starting I/O failed: -6 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 starting I/O failed: -6 00:23:02.066 [2024-10-11 12:00:04.279482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:23:02.066 NVMe io qpair process completion error 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 starting I/O failed: -6 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 starting I/O failed: -6 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 starting I/O failed: -6 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 starting I/O failed: -6 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 starting I/O failed: -6 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 starting I/O failed: -6 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 Write 
completed with error (sct=0, sc=8) 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 starting I/O failed: -6 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 starting I/O failed: -6 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 starting I/O failed: -6 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 starting I/O failed: -6 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 [2024-10-11 12:00:04.280598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:02.066 starting I/O failed: -6 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 starting I/O failed: -6 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 starting I/O failed: -6 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 starting I/O failed: -6 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 starting I/O failed: -6 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 starting I/O failed: -6 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 starting I/O failed: -6 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.066 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 
Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 [2024-10-11 12:00:04.281408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 Write completed with error (sct=0, sc=8) 
00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 [2024-10-11 12:00:04.282364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 
00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.067 starting I/O failed: -6 00:23:02.067 Write completed with error (sct=0, sc=8) 00:23:02.068 starting I/O failed: -6 00:23:02.068 Write completed with error (sct=0, sc=8) 00:23:02.068 starting I/O failed: -6 00:23:02.068 Write completed with error (sct=0, sc=8) 00:23:02.068 starting I/O failed: -6 00:23:02.068 Write completed with error (sct=0, sc=8) 00:23:02.068 starting I/O failed: -6 00:23:02.068 Write completed with error (sct=0, sc=8) 00:23:02.068 starting I/O failed: -6 00:23:02.068 Write completed with error (sct=0, sc=8) 00:23:02.068 starting I/O failed: -6 00:23:02.068 Write completed with error (sct=0, sc=8) 00:23:02.068 starting I/O failed: -6 00:23:02.068 Write completed with error (sct=0, sc=8) 00:23:02.068 starting I/O failed: -6 00:23:02.068 Write completed with error (sct=0, sc=8) 00:23:02.068 starting I/O failed: -6 00:23:02.068 Write completed with error (sct=0, sc=8) 00:23:02.068 starting I/O failed: -6 00:23:02.068 Write completed with error (sct=0, sc=8) 00:23:02.068 starting I/O failed: -6 00:23:02.068 Write completed with error (sct=0, sc=8) 00:23:02.068 starting I/O failed: -6 00:23:02.068 Write completed with error (sct=0, sc=8) 00:23:02.068 starting I/O failed: -6 00:23:02.068 Write completed with error (sct=0, sc=8) 00:23:02.068 starting I/O failed: -6 00:23:02.068 Write completed with error (sct=0, sc=8) 00:23:02.068 starting I/O failed: -6 00:23:02.068 Write completed with error (sct=0, sc=8) 00:23:02.068 starting I/O failed: -6 00:23:02.068 Write completed with error (sct=0, sc=8) 00:23:02.068 starting I/O failed: -6 00:23:02.068 Write completed with error (sct=0, sc=8) 00:23:02.068 starting I/O failed: -6 00:23:02.068 Write completed with error (sct=0, sc=8) 00:23:02.068 starting I/O failed: -6 00:23:02.068 Write completed with error (sct=0, sc=8) 00:23:02.068 starting I/O failed: -6 00:23:02.068 Write completed with error (sct=0, sc=8) 00:23:02.068 starting I/O failed: -6 00:23:02.068 [2024-10-11 12:00:04.285571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:23:02.068 NVMe 
io qpair process completion error 00:23:02.068 Initializing NVMe Controllers 00:23:02.068 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:23:02.068 Controller IO queue size 128, less than required. 00:23:02.068 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:02.068 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:23:02.068 Controller IO queue size 128, less than required. 00:23:02.068 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:02.068 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:23:02.068 Controller IO queue size 128, less than required. 00:23:02.068 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:02.068 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:23:02.068 Controller IO queue size 128, less than required. 00:23:02.068 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:02.068 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:23:02.068 Controller IO queue size 128, less than required. 00:23:02.068 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:02.068 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:23:02.068 Controller IO queue size 128, less than required. 00:23:02.068 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:02.068 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:23:02.068 Controller IO queue size 128, less than required. 00:23:02.068 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:02.068 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:23:02.068 Controller IO queue size 128, less than required. 00:23:02.068 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:02.068 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:23:02.068 Controller IO queue size 128, less than required. 00:23:02.068 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:02.068 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:02.068 Controller IO queue size 128, less than required. 00:23:02.068 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
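The "Controller IO queue size 128, less than required" warnings mean the workload's queue depth exceeds the I/O queue size each fabrics controller reports, so surplus requests are queued in the host driver. A minimal sketch of how the same perf binary could be rerun with a queue depth at or below 128 follows; the -q/-o/-w/-t/-r options follow spdk_nvme_perf's usual usage, and the specific values and the single-subsystem transport ID are illustrative assumptions, not the options this test actually passed.

# Illustrative only: keep the queue depth (-q) at or below the controller's
# reported IO queue size so requests are not queued at the NVMe driver.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
  -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
  -q 64 -o 4096 -w write -t 10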
00:23:02.068 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:23:02.068 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:23:02.068 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:23:02.068 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:23:02.068 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:23:02.068 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:23:02.068 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:23:02.068 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:23:02.068 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:23:02.068 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:02.068 Initialization complete. Launching workers.
00:23:02.068 ========================================================
00:23:02.068 Latency(us)
00:23:02.068 Device Information : IOPS MiB/s Average min max
00:23:02.068 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1907.98 81.98 67102.94 671.94 120930.09
00:23:02.068 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1890.23 81.22 67754.15 755.95 122785.33
00:23:02.068 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1899.85 81.63 67435.33 814.93 121334.08
00:23:02.068 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1937.69 83.26 66146.16 628.34 119724.94
00:23:02.068 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1843.64 79.22 69541.39 906.09 122025.34
00:23:02.068 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1899.85 81.63 67505.45 858.54 124066.21
00:23:02.068 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1914.39 82.26 67024.66 844.03 117539.89
00:23:02.068 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1882.97 80.91 68163.77 908.93 127764.70
00:23:02.068 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1893.87 81.38 67804.15 687.66 121428.63
00:23:02.068 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1923.79 82.66 66091.61 682.79 122046.92
00:23:02.068 ========================================================
00:23:02.068 Total : 18994.26 816.16 67445.09 628.34 127764.70
00:23:02.068
00:23:02.068 [2024-10-11 12:00:04.288279] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c0d80 is same with the state(6) to be set
00:23:02.068 [2024-10-11 12:00:04.288324] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1356bc0 is same with the state(6) to be set
00:23:02.068 [2024-10-11 12:00:04.288353] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134cdc0 is same with the state(6) to be set
00:23:02.068 [2024-10-11 12:00:04.288382] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c00d0 is same with the state(6) to be set
00:23:02.068 [2024-10-11 12:00:04.288411] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bfda0 is same with the state(6) to be set
00:23:02.068 [2024-10-11 12:00:04.288440] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x1351cc0 is same with the state(6) to be set 00:23:02.068 [2024-10-11 12:00:04.288474] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bf740 is same with the state(6) to be set 00:23:02.068 [2024-10-11 12:00:04.288504] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bfa70 is same with the state(6) to be set 00:23:02.068 [2024-10-11 12:00:04.288532] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1347eb0 is same with the state(6) to be set 00:23:02.068 [2024-10-11 12:00:04.288560] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c0ba0 is same with the state(6) to be set 00:23:02.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:23:02.068 12:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:23:03.011 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1999322 00:23:03.011 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0 00:23:03.011 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1999322 00:23:03.011 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait 00:23:03.011 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:03.011 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait 00:23:03.011 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:03.011 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 1999322 00:23:03.011 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1 00:23:03.012 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:03.012 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:03.012 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:03.012 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:23:03.012 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:03.012 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:03.012 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:03.012 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:03.012 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:03.012 12:00:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:23:03.012 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:03.012 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:23:03.012 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:03.012 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:03.012 rmmod nvme_tcp 00:23:03.012 rmmod nvme_fabrics 00:23:03.012 rmmod nvme_keyring 00:23:03.012 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:03.012 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:23:03.012 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:23:03.012 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@515 -- # '[' -n 1998939 ']' 00:23:03.012 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # killprocess 1998939 00:23:03.012 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 1998939 ']' 00:23:03.012 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 1998939 00:23:03.012 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1998939) - No such process 00:23:03.012 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@977 -- # echo 'Process with pid 1998939 is not found' 00:23:03.012 Process with pid 1998939 is not found 00:23:03.012 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:03.012 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:03.012 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:03.012 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:23:03.012 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-save 00:23:03.012 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:03.012 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-restore 00:23:03.012 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:03.012 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:03.012 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:03.012 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:03.012 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.556 12:00:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:05.556 00:23:05.556 real 0m10.358s 00:23:05.556 user 0m28.055s 00:23:05.556 sys 0m3.953s 00:23:05.556 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:05.556 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:05.556 ************************************ 00:23:05.556 END TEST nvmf_shutdown_tc4 00:23:05.556 ************************************ 00:23:05.556 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:23:05.556 00:23:05.556 real 0m43.611s 00:23:05.556 user 1m44.445s 00:23:05.556 sys 0m14.009s 00:23:05.556 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:05.556 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:05.556 ************************************ 00:23:05.556 END TEST nvmf_shutdown 00:23:05.556 ************************************ 00:23:05.556 12:00:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:23:05.556 00:23:05.556 real 12m50.242s 00:23:05.556 user 26m56.519s 00:23:05.556 sys 3m52.374s 00:23:05.556 12:00:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:05.556 12:00:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:05.556 ************************************ 00:23:05.556 END TEST nvmf_target_extra 00:23:05.556 ************************************ 00:23:05.556 12:00:07 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:05.556 12:00:07 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:05.556 12:00:07 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:05.556 12:00:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:05.556 ************************************ 00:23:05.556 START TEST nvmf_host 00:23:05.556 ************************************ 00:23:05.556 12:00:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:05.556 * Looking for test storage... 
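The asterisk banners and the real/user/sys totals above come from the harness's run_test wrapper, which prints START TEST/END TEST banners around each test script and times it. A rough bash sketch of that pattern, assuming a deliberately simplified implementation (the actual helper lives in test/common/autotest_common.sh and does additional bookkeeping such as xtrace control and result recording), is:

# Simplified illustration of the run_test banner/timing pattern seen in this log.
run_test() {
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  time "$@"          # run the test script with its arguments; prints real/user/sys
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
}

# Example call, matching the invocation logged above:
# run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp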
00:23:05.556 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:23:05.556 12:00:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:05.556 12:00:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:23:05.556 12:00:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:05.556 12:00:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:05.556 12:00:08 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:05.556 12:00:08 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:05.556 12:00:08 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:05.556 12:00:08 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:05.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.557 --rc genhtml_branch_coverage=1 00:23:05.557 --rc genhtml_function_coverage=1 00:23:05.557 --rc genhtml_legend=1 00:23:05.557 --rc geninfo_all_blocks=1 00:23:05.557 --rc geninfo_unexecuted_blocks=1 00:23:05.557 00:23:05.557 ' 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:05.557 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.557 --rc genhtml_branch_coverage=1 00:23:05.557 --rc genhtml_function_coverage=1 00:23:05.557 --rc genhtml_legend=1 00:23:05.557 --rc geninfo_all_blocks=1 00:23:05.557 --rc geninfo_unexecuted_blocks=1 00:23:05.557 00:23:05.557 ' 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:05.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.557 --rc genhtml_branch_coverage=1 00:23:05.557 --rc genhtml_function_coverage=1 00:23:05.557 --rc genhtml_legend=1 00:23:05.557 --rc geninfo_all_blocks=1 00:23:05.557 --rc geninfo_unexecuted_blocks=1 00:23:05.557 00:23:05.557 ' 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:05.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.557 --rc genhtml_branch_coverage=1 00:23:05.557 --rc genhtml_function_coverage=1 00:23:05.557 --rc genhtml_legend=1 00:23:05.557 --rc geninfo_all_blocks=1 00:23:05.557 --rc geninfo_unexecuted_blocks=1 00:23:05.557 00:23:05.557 ' 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:05.557 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.557 ************************************ 00:23:05.557 START TEST nvmf_multicontroller 00:23:05.557 ************************************ 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:05.557 * Looking for test storage... 00:23:05.557 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version 00:23:05.557 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:05.819 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:05.819 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:05.819 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:05.819 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:05.819 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:23:05.819 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:23:05.819 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:23:05.819 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:23:05.819 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:23:05.819 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:23:05.819 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:23:05.819 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:05.819 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:23:05.819 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:23:05.819 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:05.819 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:05.819 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:23:05.819 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:23:05.819 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:05.819 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:23:05.819 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:23:05.819 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:23:05.819 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:23:05.819 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:05.819 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:23:05.819 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:23:05.819 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:05.819 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:05.819 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:23:05.819 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:05.819 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:05.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.819 --rc genhtml_branch_coverage=1 00:23:05.819 --rc genhtml_function_coverage=1 00:23:05.819 --rc genhtml_legend=1 00:23:05.819 --rc geninfo_all_blocks=1 00:23:05.819 --rc geninfo_unexecuted_blocks=1 00:23:05.819 00:23:05.819 ' 00:23:05.819 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:05.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.819 --rc genhtml_branch_coverage=1 00:23:05.819 --rc genhtml_function_coverage=1 00:23:05.819 --rc genhtml_legend=1 00:23:05.819 --rc geninfo_all_blocks=1 00:23:05.819 --rc geninfo_unexecuted_blocks=1 00:23:05.819 00:23:05.819 ' 00:23:05.819 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:05.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.819 --rc genhtml_branch_coverage=1 00:23:05.819 --rc genhtml_function_coverage=1 00:23:05.819 --rc genhtml_legend=1 00:23:05.819 --rc geninfo_all_blocks=1 00:23:05.819 --rc geninfo_unexecuted_blocks=1 00:23:05.819 00:23:05.819 ' 00:23:05.819 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:05.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.819 --rc genhtml_branch_coverage=1 00:23:05.819 --rc genhtml_function_coverage=1 00:23:05.819 --rc genhtml_legend=1 00:23:05.819 --rc geninfo_all_blocks=1 00:23:05.819 --rc geninfo_unexecuted_blocks=1 00:23:05.819 00:23:05.819 ' 00:23:05.819 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:05.819 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:05.819 12:00:08 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:05.819 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:05.819 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:05.819 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:05.819 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:05.819 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:05.819 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:05.819 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:05.819 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:05.819 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:05.819 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:05.820 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:05.820 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:05.820 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:05.820 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:05.820 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:05.820 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:05.820 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:23:05.820 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:05.820 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:05.820 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:05.820 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.820 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.820 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.820 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:05.820 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.820 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:23:05.820 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:05.820 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:05.820 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:05.820 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:05.820 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:05.820 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:05.820 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:05.820 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:05.820 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:05.820 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:05.820 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:05.820 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:05.820 12:00:08 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:05.820 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:05.820 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:05.820 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:05.820 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:05.820 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:05.820 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:05.820 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:05.820 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:05.820 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:05.820 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.820 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:05.820 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.820 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:05.820 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:05.820 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:23:05.820 12:00:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:23:13.960 
12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:13.960 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:13.960 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:13.960 12:00:15 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:13.960 Found net devices under 0000:31:00.0: cvl_0_0 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:13.960 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:13.961 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:13.961 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:13.961 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:13.961 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:13.961 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:13.961 Found net devices under 0000:31:00.1: cvl_0_1 00:23:13.961 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:13.961 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:13.961 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # is_hw=yes 00:23:13.961 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:13.961 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:13.961 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # nvmf_tcp_init 
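The nvmf_tcp_init step invoked just above is what turns the two e810 ports found earlier into a self-contained NVMe/TCP test topology: cvl_0_0 is moved into a private network namespace and becomes the target side at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. A condensed sketch of the steps, taken from the commands traced in the lines that follow and assuming the same interface names and addressing as this run, looks like this:
# move the target NIC into its own namespace; the initiator NIC stays in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# address both ends of the point-to-point link
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
# bring the links (and the namespace loopback) up
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP port; the real run also tags the rule with an SPDK_NVMF comment so teardown can strip it
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# sanity-check reachability in both directions
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1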
00:23:13.961 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:13.961 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:13.961 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:13.961 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:13.961 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:13.961 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:13.961 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:13.961 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:13.961 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:13.961 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:13.961 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:13.961 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:13.961 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:13.961 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:13.961 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:13.961 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:13.961 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:13.961 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:13.961 12:00:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:13.961 12:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:13.961 12:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:13.961 12:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:13.961 12:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:13.961 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:13.961 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:23:13.961 00:23:13.961 --- 10.0.0.2 ping statistics --- 00:23:13.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:13.961 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:23:13.961 12:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:13.961 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:13.961 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.392 ms 00:23:13.961 00:23:13.961 --- 10.0.0.1 ping statistics --- 00:23:13.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:13.961 rtt min/avg/max/mdev = 0.392/0.392/0.392/0.000 ms 00:23:13.961 12:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:13.961 12:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # return 0 00:23:13.961 12:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:13.961 12:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:13.961 12:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:13.961 12:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:13.961 12:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:13.961 12:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:13.961 12:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:13.961 12:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:13.961 12:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:13.961 12:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:13.961 12:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:13.961 12:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # nvmfpid=2005370 00:23:13.961 12:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # waitforlisten 2005370 00:23:13.961 12:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:13.961 12:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 2005370 ']' 00:23:13.961 12:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:13.961 12:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:13.961 12:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:13.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:13.961 12:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:13.961 12:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:13.961 [2024-10-11 12:00:16.218288] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:23:13.961 [2024-10-11 12:00:16.218354] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:13.961 [2024-10-11 12:00:16.311016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:13.961 [2024-10-11 12:00:16.363303] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:13.961 [2024-10-11 12:00:16.363349] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:13.961 [2024-10-11 12:00:16.363358] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:13.961 [2024-10-11 12:00:16.363365] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:13.961 [2024-10-11 12:00:16.363372] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:13.961 [2024-10-11 12:00:16.365262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:13.961 [2024-10-11 12:00:16.365535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:13.961 [2024-10-11 12:00:16.365537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:14.534 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:14.534 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:23:14.534 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:14.534 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:14.534 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:14.534 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:14.534 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:14.534 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.534 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:14.534 [2024-10-11 12:00:17.071982] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:14.534 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.534 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:14.534 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.534 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:14.534 Malloc0 00:23:14.534 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.534 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:14.534 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.534 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:23:14.534 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.534 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:14.534 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.534 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:14.534 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.534 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:14.534 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.534 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:14.534 [2024-10-11 12:00:17.145589] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:14.534 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.534 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:14.534 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.534 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:14.534 [2024-10-11 12:00:17.157474] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:14.534 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.534 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:14.534 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.534 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:14.534 Malloc1 00:23:14.534 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.534 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:14.534 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.534 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:14.534 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.534 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:14.534 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.534 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:14.534 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.534 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:14.534 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.534 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:14.534 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.534 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:14.534 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.534 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:14.534 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.534 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2005719 00:23:14.534 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:14.534 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:14.534 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2005719 /var/tmp/bdevperf.sock 00:23:14.794 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 2005719 ']' 00:23:14.794 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:14.794 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:14.794 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:14.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
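At this point the target side is fully configured (subsystems cnode1 and cnode2, each backed by a Malloc bdev and listening on ports 4420 and 4421), and bdevperf has been started idle with -z on its own RPC socket so the initiator side can be driven over JSON-RPC. A rough reconstruction of that flow, assuming rpc_cmd is the test suite's wrapper around scripts/rpc.py, is:
# start bdevperf idle (-z) on a private RPC socket; flags copied from the traced command line
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
bdevperf_pid=$!      # recorded as pid 2005719 in this run
# once /var/tmp/bdevperf.sock is listening, attach the remote subsystem as bdev NVMe0n1
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
# later, after the multipath checks below, kick off the configured write workload
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests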
00:23:14.794 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:14.794 12:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.736 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:15.736 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:23:15.736 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:15.736 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.736 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.736 NVMe0n1 00:23:15.736 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.736 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:15.736 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:15.736 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.736 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.736 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.736 1 00:23:15.736 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:15.736 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:15.736 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:15.736 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:15.736 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:15.736 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:15.736 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:15.736 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:15.736 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.736 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.736 request: 00:23:15.736 { 00:23:15.736 "name": "NVMe0", 00:23:15.736 "trtype": "tcp", 00:23:15.736 "traddr": "10.0.0.2", 00:23:15.736 "adrfam": "ipv4", 00:23:15.736 "trsvcid": "4420", 00:23:15.736 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:23:15.736 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:15.736 "hostaddr": "10.0.0.1", 00:23:15.736 "prchk_reftag": false, 00:23:15.736 "prchk_guard": false, 00:23:15.736 "hdgst": false, 00:23:15.736 "ddgst": false, 00:23:15.736 "allow_unrecognized_csi": false, 00:23:15.736 "method": "bdev_nvme_attach_controller", 00:23:15.736 "req_id": 1 00:23:15.736 } 00:23:15.736 Got JSON-RPC error response 00:23:15.736 response: 00:23:15.736 { 00:23:15.736 "code": -114, 00:23:15.736 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:15.736 } 00:23:15.736 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:15.736 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:15.736 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:15.736 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:15.736 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:15.736 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:15.736 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:15.737 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:15.737 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:15.737 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:15.737 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:15.737 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:15.737 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:15.737 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.737 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.737 request: 00:23:15.737 { 00:23:15.737 "name": "NVMe0", 00:23:15.737 "trtype": "tcp", 00:23:15.737 "traddr": "10.0.0.2", 00:23:15.737 "adrfam": "ipv4", 00:23:15.737 "trsvcid": "4420", 00:23:15.737 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:15.737 "hostaddr": "10.0.0.1", 00:23:15.737 "prchk_reftag": false, 00:23:15.737 "prchk_guard": false, 00:23:15.737 "hdgst": false, 00:23:15.737 "ddgst": false, 00:23:15.737 "allow_unrecognized_csi": false, 00:23:15.737 "method": "bdev_nvme_attach_controller", 00:23:15.737 "req_id": 1 00:23:15.737 } 00:23:15.737 Got JSON-RPC error response 00:23:15.737 response: 00:23:15.737 { 00:23:15.737 "code": -114, 00:23:15.737 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:15.737 } 00:23:15.737 12:00:18 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:15.737 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:15.737 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:15.737 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:15.737 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:15.737 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:15.737 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:15.737 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:15.737 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:15.737 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:15.737 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:15.737 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:15.737 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:15.737 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.737 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.737 request: 00:23:15.737 { 00:23:15.737 "name": "NVMe0", 00:23:15.737 "trtype": "tcp", 00:23:15.737 "traddr": "10.0.0.2", 00:23:15.737 "adrfam": "ipv4", 00:23:15.737 "trsvcid": "4420", 00:23:15.737 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.737 "hostaddr": "10.0.0.1", 00:23:15.737 "prchk_reftag": false, 00:23:15.737 "prchk_guard": false, 00:23:15.737 "hdgst": false, 00:23:15.737 "ddgst": false, 00:23:15.737 "multipath": "disable", 00:23:15.737 "allow_unrecognized_csi": false, 00:23:15.737 "method": "bdev_nvme_attach_controller", 00:23:15.737 "req_id": 1 00:23:15.737 } 00:23:15.737 Got JSON-RPC error response 00:23:15.737 response: 00:23:15.737 { 00:23:15.737 "code": -114, 00:23:15.737 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:23:15.737 } 00:23:15.737 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:15.737 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:15.737 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:15.737 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:15.737 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:15.737 12:00:18 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:15.737 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:15.737 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:15.737 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:15.737 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:15.737 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:15.737 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:15.737 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:15.737 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.737 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.737 request: 00:23:15.737 { 00:23:15.737 "name": "NVMe0", 00:23:15.737 "trtype": "tcp", 00:23:15.737 "traddr": "10.0.0.2", 00:23:15.737 "adrfam": "ipv4", 00:23:15.737 "trsvcid": "4420", 00:23:15.737 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.737 "hostaddr": "10.0.0.1", 00:23:15.737 "prchk_reftag": false, 00:23:15.737 "prchk_guard": false, 00:23:15.737 "hdgst": false, 00:23:15.737 "ddgst": false, 00:23:15.737 "multipath": "failover", 00:23:15.737 "allow_unrecognized_csi": false, 00:23:15.737 "method": "bdev_nvme_attach_controller", 00:23:15.737 "req_id": 1 00:23:15.737 } 00:23:15.737 Got JSON-RPC error response 00:23:15.737 response: 00:23:15.737 { 00:23:15.737 "code": -114, 00:23:15.737 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:15.737 } 00:23:15.737 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:15.737 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:15.737 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:15.737 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:15.737 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:15.737 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:15.737 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.737 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.737 NVMe0n1 00:23:15.737 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
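This block of -114 responses is the heart of the multicontroller test: once a controller named NVMe0 exists, bdev_nvme_attach_controller refuses to reuse that name for a different host NQN, a different subsystem, the identical path again, or the same path with multipath disabled; only the final call above, which adds the subsystem's second listener on port 4421 as an extra path, is accepted. A condensed view of the rejected versus accepted calls, using the same addresses as the trace (rpc.py standing in for the rpc_cmd wrapper), is:
SOCK=/var/tmp/bdevperf.sock
# rejected with -114: conflicting identity or multipath mode for an existing controller name
rpc.py -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001   # different hostnqn
rpc.py -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1                                  # different subsystem
rpc.py -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable                       # multipath disabled
rpc.py -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover                      # same path again
# accepted: the subsystem's second listener (port 4421) becomes an additional path for NVMe0
rpc.py -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1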
00:23:15.737 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:15.737 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.737 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.737 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.737 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:15.737 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.737 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.999 00:23:15.999 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.999 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:15.999 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:15.999 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.999 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.999 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.999 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:15.999 12:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:17.383 { 00:23:17.383 "results": [ 00:23:17.383 { 00:23:17.383 "job": "NVMe0n1", 00:23:17.384 "core_mask": "0x1", 00:23:17.384 "workload": "write", 00:23:17.384 "status": "finished", 00:23:17.384 "queue_depth": 128, 00:23:17.384 "io_size": 4096, 00:23:17.384 "runtime": 1.006197, 00:23:17.384 "iops": 25119.33547804257, 00:23:17.384 "mibps": 98.12240421110378, 00:23:17.384 "io_failed": 0, 00:23:17.384 "io_timeout": 0, 00:23:17.384 "avg_latency_us": 5080.63545479723, 00:23:17.384 "min_latency_us": 2102.6133333333332, 00:23:17.384 "max_latency_us": 12178.773333333333 00:23:17.384 } 00:23:17.384 ], 00:23:17.384 "core_count": 1 00:23:17.384 } 00:23:17.384 12:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:17.384 12:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.384 12:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:17.384 12:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.384 12:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:23:17.384 12:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 2005719 00:23:17.384 12:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@950 -- # '[' -z 2005719 ']' 00:23:17.384 12:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 2005719 00:23:17.384 12:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:23:17.384 12:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:17.384 12:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2005719 00:23:17.384 12:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:17.384 12:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:17.384 12:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2005719' 00:23:17.384 killing process with pid 2005719 00:23:17.384 12:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 2005719 00:23:17.384 12:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 2005719 00:23:17.384 12:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:17.384 12:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.384 12:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:17.384 12:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.384 12:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:17.384 12:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.384 12:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:17.384 12:00:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.384 12:00:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:23:17.384 12:00:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:17.384 12:00:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:23:17.384 12:00:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:17.384 12:00:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:23:17.384 12:00:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:23:17.384 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:17.384 [2024-10-11 12:00:17.287851] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:23:17.384 [2024-10-11 12:00:17.287932] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2005719 ] 00:23:17.384 [2024-10-11 12:00:17.371782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.384 [2024-10-11 12:00:17.424798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:17.384 [2024-10-11 12:00:18.625643] bdev.c:4701:bdev_name_add: *ERROR*: Bdev name 24df851b-570d-4553-9251-a20a4885b708 already exists 00:23:17.384 [2024-10-11 12:00:18.625691] bdev.c:7846:bdev_register: *ERROR*: Unable to add uuid:24df851b-570d-4553-9251-a20a4885b708 alias for bdev NVMe1n1 00:23:17.384 [2024-10-11 12:00:18.625701] bdev_nvme.c:4483:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:17.384 Running I/O for 1 seconds... 00:23:17.384 25099.00 IOPS, 98.04 MiB/s 00:23:17.384 Latency(us) 00:23:17.384 [2024-10-11T10:00:20.087Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:17.384 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:17.384 NVMe0n1 : 1.01 25119.34 98.12 0.00 0.00 5080.64 2102.61 12178.77 00:23:17.384 [2024-10-11T10:00:20.087Z] =================================================================================================================== 00:23:17.384 [2024-10-11T10:00:20.087Z] Total : 25119.34 98.12 0.00 0.00 5080.64 2102.61 12178.77 00:23:17.384 Received shutdown signal, test time was about 1.000000 seconds 00:23:17.384 00:23:17.384 Latency(us) 00:23:17.384 [2024-10-11T10:00:20.087Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:17.384 [2024-10-11T10:00:20.087Z] =================================================================================================================== 00:23:17.384 [2024-10-11T10:00:20.087Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:17.384 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:17.384 12:00:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:17.384 12:00:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:23:17.384 12:00:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:23:17.384 12:00:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:17.384 12:00:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:23:17.384 12:00:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:17.384 12:00:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:23:17.384 12:00:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:17.384 12:00:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:17.384 rmmod nvme_tcp 00:23:17.384 rmmod nvme_fabrics 00:23:17.384 rmmod nvme_keyring 00:23:17.384 12:00:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:17.645 12:00:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:23:17.645 12:00:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:23:17.645 
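What remains below is the symmetric teardown: the nvmf target (pid 2005370) is killed and nvmftestfini undoes the plumbing created during setup. In rough outline, and noting that _remove_spdk_ns runs with xtrace disabled so its exact commands are not visible in this trace:
# stop the target application that was running inside the cvl_0_0_ns_spdk namespace
kill "$nvmfpid"                                   # pid 2005370 in this run
# drop only the SPDK-tagged firewall rule added during setup
iptables-save | grep -v SPDK_NVMF | iptables-restore
# flush the initiator-side address; the target-side namespace is expected to be
# cleaned up by _remove_spdk_ns (not shown verbatim in the trace)
ip -4 addr flush cvl_0_1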
12:00:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@515 -- # '[' -n 2005370 ']' 00:23:17.645 12:00:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # killprocess 2005370 00:23:17.645 12:00:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 2005370 ']' 00:23:17.645 12:00:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 2005370 00:23:17.645 12:00:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:23:17.645 12:00:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:17.645 12:00:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2005370 00:23:17.645 12:00:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:17.645 12:00:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:17.645 12:00:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2005370' 00:23:17.645 killing process with pid 2005370 00:23:17.645 12:00:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 2005370 00:23:17.645 12:00:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 2005370 00:23:17.645 12:00:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:17.645 12:00:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:17.645 12:00:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:17.645 12:00:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:23:17.645 12:00:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-save 00:23:17.645 12:00:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:17.645 12:00:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-restore 00:23:17.645 12:00:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:17.645 12:00:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:17.645 12:00:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:17.645 12:00:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:17.645 12:00:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.703 12:00:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:19.703 00:23:19.703 real 0m14.229s 00:23:19.703 user 0m17.040s 00:23:19.703 sys 0m6.755s 00:23:19.703 12:00:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:19.703 12:00:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:19.703 ************************************ 00:23:19.703 END TEST nvmf_multicontroller 00:23:19.703 ************************************ 00:23:19.963 12:00:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:23:19.963 12:00:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:19.963 12:00:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:19.963 12:00:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.963 ************************************ 00:23:19.963 START TEST nvmf_aer 00:23:19.963 ************************************ 00:23:19.963 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:19.963 * Looking for test storage... 00:23:19.963 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:19.963 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:19.963 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version 00:23:19.963 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:19.963 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:19.963 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:19.963 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:19.963 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:19.963 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:23:19.963 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:23:19.963 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:23:19.963 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:23:19.963 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:23:19.963 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:23:19.963 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:23:19.963 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:19.963 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:23:19.963 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:23:19.963 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:19.963 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:19.963 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:23:19.963 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:23:19.963 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:19.963 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:23:19.963 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:23:19.963 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:23:19.963 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:23:19.963 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:19.963 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:23:19.963 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:23:19.963 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:19.963 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:19.963 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:23:19.963 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:19.963 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:19.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.963 --rc genhtml_branch_coverage=1 00:23:19.963 --rc genhtml_function_coverage=1 00:23:19.963 --rc genhtml_legend=1 00:23:19.963 --rc geninfo_all_blocks=1 00:23:19.963 --rc geninfo_unexecuted_blocks=1 00:23:19.963 00:23:19.963 ' 00:23:19.963 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:19.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.963 --rc genhtml_branch_coverage=1 00:23:19.963 --rc genhtml_function_coverage=1 00:23:19.963 --rc genhtml_legend=1 00:23:19.963 --rc geninfo_all_blocks=1 00:23:19.963 --rc geninfo_unexecuted_blocks=1 00:23:19.963 00:23:19.963 ' 00:23:19.963 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:19.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.964 --rc genhtml_branch_coverage=1 00:23:19.964 --rc genhtml_function_coverage=1 00:23:19.964 --rc genhtml_legend=1 00:23:19.964 --rc geninfo_all_blocks=1 00:23:19.964 --rc geninfo_unexecuted_blocks=1 00:23:19.964 00:23:19.964 ' 00:23:19.964 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:19.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.964 --rc genhtml_branch_coverage=1 00:23:19.964 --rc genhtml_function_coverage=1 00:23:19.964 --rc genhtml_legend=1 00:23:19.964 --rc geninfo_all_blocks=1 00:23:19.964 --rc geninfo_unexecuted_blocks=1 00:23:19.964 00:23:19.964 ' 00:23:19.964 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:19.964 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:19.964 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:19.964 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:19.964 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:19.964 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:19.964 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:19.964 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:19.964 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:19.964 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:19.964 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:19.964 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:20.224 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:20.224 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:20.224 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:20.224 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:20.224 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:20.224 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:20.224 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:20.224 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:23:20.224 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:20.224 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:20.224 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:20.224 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.224 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.224 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.224 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:20.224 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.224 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:23:20.224 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:20.224 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:20.224 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:20.224 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:20.224 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:20.224 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:20.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:20.224 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:20.224 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:20.224 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:20.224 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:20.224 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:20.224 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:20.225 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:20.225 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:20.225 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:20.225 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:20.225 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:20.225 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:20.225 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:20.225 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:23:20.225 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:23:20.225 12:00:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:28.366 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:28.366 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:28.366 Found net devices under 0000:31:00.0: cvl_0_0 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:28.366 12:00:29 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:28.366 Found net devices under 0000:31:00.1: cvl_0_1 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # is_hw=yes 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:28.366 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:28.367 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:28.367 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:28.367 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:28.367 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:28.367 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:28.367 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:28.367 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:28.367 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:28.367 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:28.367 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:28.367 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:28.367 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:28.367 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:28.367 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:28.367 12:00:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:28.367 12:00:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:28.367 12:00:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:28.367 12:00:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:28.367 12:00:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:28.367 12:00:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:28.367 12:00:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:28.367 12:00:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:28.367 
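Condensed, the TCP test-network bring-up traced above reduces to the following sketch (the interface names cvl_0_0/cvl_0_1, the cvl_0_0_ns_spdk namespace, and the 10.0.0.1/10.0.0.2 addresses are copied from the log; running as root is assumed, and the ping checks that follow verify reachability in both directions):

    # move the target-side port into its own network namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # address the initiator (host) and target (namespace) ends
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    # bring the links up
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow NVMe/TCP (port 4420) in on the initiator-side interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT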
12:00:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:28.367 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:28.367 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.594 ms 00:23:28.367 00:23:28.367 --- 10.0.0.2 ping statistics --- 00:23:28.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.367 rtt min/avg/max/mdev = 0.594/0.594/0.594/0.000 ms 00:23:28.367 12:00:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:28.367 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:28.367 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:23:28.367 00:23:28.367 --- 10.0.0.1 ping statistics --- 00:23:28.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.367 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:23:28.367 12:00:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:28.367 12:00:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # return 0 00:23:28.367 12:00:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:28.367 12:00:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:28.367 12:00:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:28.367 12:00:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:28.367 12:00:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:28.367 12:00:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:28.367 12:00:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:28.367 12:00:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:28.367 12:00:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:28.367 12:00:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:28.367 12:00:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:28.367 12:00:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # nvmfpid=2010476 00:23:28.367 12:00:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # waitforlisten 2010476 00:23:28.367 12:00:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:28.367 12:00:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 2010476 ']' 00:23:28.367 12:00:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:28.367 12:00:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:28.367 12:00:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:28.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:28.367 12:00:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:28.367 12:00:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:28.367 [2024-10-11 12:00:30.368188] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:23:28.367 [2024-10-11 12:00:30.368254] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:28.367 [2024-10-11 12:00:30.458770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:28.367 [2024-10-11 12:00:30.512784] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:28.367 [2024-10-11 12:00:30.512834] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:28.367 [2024-10-11 12:00:30.512843] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:28.367 [2024-10-11 12:00:30.512851] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:28.367 [2024-10-11 12:00:30.512857] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:28.367 [2024-10-11 12:00:30.515027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:28.367 [2024-10-11 12:00:30.515190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:28.367 [2024-10-11 12:00:30.515480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:28.367 [2024-10-11 12:00:30.515484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.629 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:28.629 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:23:28.629 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:28.629 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:28.629 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:28.629 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:28.629 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:28.629 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.629 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:28.629 [2024-10-11 12:00:31.247544] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:28.629 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.629 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:28.629 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.629 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:28.629 Malloc0 00:23:28.629 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.629 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:28.629 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.629 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:28.629 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:23:28.629 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:28.629 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.629 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:28.629 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.629 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:28.629 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.629 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:28.629 [2024-10-11 12:00:31.332696] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:28.890 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.890 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:28.890 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.890 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:28.890 [ 00:23:28.890 { 00:23:28.890 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:28.890 "subtype": "Discovery", 00:23:28.890 "listen_addresses": [], 00:23:28.890 "allow_any_host": true, 00:23:28.890 "hosts": [] 00:23:28.890 }, 00:23:28.890 { 00:23:28.890 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.890 "subtype": "NVMe", 00:23:28.890 "listen_addresses": [ 00:23:28.890 { 00:23:28.890 "trtype": "TCP", 00:23:28.890 "adrfam": "IPv4", 00:23:28.890 "traddr": "10.0.0.2", 00:23:28.890 "trsvcid": "4420" 00:23:28.890 } 00:23:28.890 ], 00:23:28.890 "allow_any_host": true, 00:23:28.890 "hosts": [], 00:23:28.890 "serial_number": "SPDK00000000000001", 00:23:28.890 "model_number": "SPDK bdev Controller", 00:23:28.890 "max_namespaces": 2, 00:23:28.890 "min_cntlid": 1, 00:23:28.890 "max_cntlid": 65519, 00:23:28.890 "namespaces": [ 00:23:28.890 { 00:23:28.890 "nsid": 1, 00:23:28.890 "bdev_name": "Malloc0", 00:23:28.890 "name": "Malloc0", 00:23:28.890 "nguid": "89BEB237CF7742E883E1B0D6B8999F27", 00:23:28.890 "uuid": "89beb237-cf77-42e8-83e1-b0d6b8999f27" 00:23:28.890 } 00:23:28.890 ] 00:23:28.890 } 00:23:28.890 ] 00:23:28.890 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.890 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:28.890 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:28.890 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2010830 00:23:28.890 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:28.890 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:28.890 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:23:28.890 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:28.890 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:23:28.890 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:23:28.890 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:28.890 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:28.890 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:23:28.890 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:23:28.890 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:28.890 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:28.890 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:28.890 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:23:28.890 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:28.890 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.890 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:29.152 Malloc1 00:23:29.152 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.152 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:29.152 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.152 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:29.152 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.152 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:29.152 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.152 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:29.152 Asynchronous Event Request test 00:23:29.152 Attaching to 10.0.0.2 00:23:29.152 Attached to 10.0.0.2 00:23:29.152 Registering asynchronous event callbacks... 00:23:29.152 Starting namespace attribute notice tests for all controllers... 00:23:29.152 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:29.152 aer_cb - Changed Namespace 00:23:29.152 Cleaning up... 
00:23:29.152 [ 00:23:29.152 { 00:23:29.152 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:29.152 "subtype": "Discovery", 00:23:29.152 "listen_addresses": [], 00:23:29.152 "allow_any_host": true, 00:23:29.152 "hosts": [] 00:23:29.152 }, 00:23:29.152 { 00:23:29.152 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.152 "subtype": "NVMe", 00:23:29.152 "listen_addresses": [ 00:23:29.152 { 00:23:29.152 "trtype": "TCP", 00:23:29.152 "adrfam": "IPv4", 00:23:29.152 "traddr": "10.0.0.2", 00:23:29.152 "trsvcid": "4420" 00:23:29.152 } 00:23:29.152 ], 00:23:29.152 "allow_any_host": true, 00:23:29.152 "hosts": [], 00:23:29.152 "serial_number": "SPDK00000000000001", 00:23:29.152 "model_number": "SPDK bdev Controller", 00:23:29.152 "max_namespaces": 2, 00:23:29.152 "min_cntlid": 1, 00:23:29.152 "max_cntlid": 65519, 00:23:29.152 "namespaces": [ 00:23:29.152 { 00:23:29.152 "nsid": 1, 00:23:29.152 "bdev_name": "Malloc0", 00:23:29.152 "name": "Malloc0", 00:23:29.152 "nguid": "89BEB237CF7742E883E1B0D6B8999F27", 00:23:29.152 "uuid": "89beb237-cf77-42e8-83e1-b0d6b8999f27" 00:23:29.152 }, 00:23:29.152 { 00:23:29.152 "nsid": 2, 00:23:29.152 "bdev_name": "Malloc1", 00:23:29.152 "name": "Malloc1", 00:23:29.152 "nguid": "B68A28589DBF412990AF7965E9A3125C", 00:23:29.152 "uuid": "b68a2858-9dbf-4129-90af-7965e9a3125c" 00:23:29.152 } 00:23:29.152 ] 00:23:29.152 } 00:23:29.152 ] 00:23:29.152 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.152 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2010830 00:23:29.152 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:29.152 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.152 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:29.152 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.152 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:29.152 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.152 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:29.152 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.152 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:29.152 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.152 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:29.152 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.152 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:29.152 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:29.153 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:29.153 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:23:29.153 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:29.153 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:23:29.153 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:29.153 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:29.153 rmmod 
nvme_tcp 00:23:29.153 rmmod nvme_fabrics 00:23:29.153 rmmod nvme_keyring 00:23:29.153 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:29.153 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:23:29.153 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:23:29.153 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@515 -- # '[' -n 2010476 ']' 00:23:29.153 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # killprocess 2010476 00:23:29.153 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 2010476 ']' 00:23:29.153 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 2010476 00:23:29.153 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:23:29.153 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:29.153 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2010476 00:23:29.414 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:29.414 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:29.414 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2010476' 00:23:29.414 killing process with pid 2010476 00:23:29.414 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 2010476 00:23:29.414 12:00:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 2010476 00:23:29.414 12:00:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:29.414 12:00:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:29.414 12:00:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:29.414 12:00:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:23:29.414 12:00:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-save 00:23:29.414 12:00:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:29.414 12:00:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-restore 00:23:29.414 12:00:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:29.414 12:00:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:29.414 12:00:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:29.414 12:00:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:29.414 12:00:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:31.961 00:23:31.961 real 0m11.647s 00:23:31.961 user 0m8.162s 00:23:31.961 sys 0m6.329s 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:31.961 ************************************ 00:23:31.961 END TEST nvmf_aer 00:23:31.961 ************************************ 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.961 ************************************ 00:23:31.961 START TEST nvmf_async_init 00:23:31.961 ************************************ 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:31.961 * Looking for test storage... 00:23:31.961 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:31.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.961 --rc genhtml_branch_coverage=1 00:23:31.961 --rc genhtml_function_coverage=1 00:23:31.961 --rc genhtml_legend=1 00:23:31.961 --rc geninfo_all_blocks=1 00:23:31.961 --rc geninfo_unexecuted_blocks=1 00:23:31.961 00:23:31.961 ' 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:31.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.961 --rc genhtml_branch_coverage=1 00:23:31.961 --rc genhtml_function_coverage=1 00:23:31.961 --rc genhtml_legend=1 00:23:31.961 --rc geninfo_all_blocks=1 00:23:31.961 --rc geninfo_unexecuted_blocks=1 00:23:31.961 00:23:31.961 ' 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:31.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.961 --rc genhtml_branch_coverage=1 00:23:31.961 --rc genhtml_function_coverage=1 00:23:31.961 --rc genhtml_legend=1 00:23:31.961 --rc geninfo_all_blocks=1 00:23:31.961 --rc geninfo_unexecuted_blocks=1 00:23:31.961 00:23:31.961 ' 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:31.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:31.961 --rc genhtml_branch_coverage=1 00:23:31.961 --rc genhtml_function_coverage=1 00:23:31.961 --rc genhtml_legend=1 00:23:31.961 --rc geninfo_all_blocks=1 00:23:31.961 --rc geninfo_unexecuted_blocks=1 00:23:31.961 00:23:31.961 ' 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:31.961 12:00:34 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:31.961 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.962 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.962 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.962 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:31.962 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.962 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:23:31.962 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:31.962 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:31.962 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:31.962 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:31.962 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:31.962 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:31.962 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:31.962 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:31.962 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:31.962 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:31.962 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:31.962 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:31.962 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:31.962 12:00:34 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:31.962 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:31.962 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:31.962 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=fbdacb7af3144e52aba80f7b1cf04e40 00:23:31.962 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:31.962 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:31.962 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:31.962 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:31.962 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:31.962 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:31.962 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:31.962 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:31.962 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:31.962 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:31.962 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:31.962 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:23:31.962 12:00:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:40.137 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:40.137 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:40.137 Found net devices under 0000:31:00.0: cvl_0_0 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:40.137 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:40.138 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:40.138 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:40.138 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:40.138 Found net devices under 0000:31:00.1: cvl_0_1 00:23:40.138 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:40.138 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:40.138 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # is_hw=yes 00:23:40.138 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:40.138 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:40.138 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:40.138 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:40.138 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:40.138 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:40.138 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:40.138 12:00:41 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:40.138 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:40.138 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:40.138 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:40.138 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:40.138 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:40.138 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:40.138 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:40.138 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:40.138 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:40.138 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:40.138 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:40.138 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:40.138 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:40.138 12:00:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:40.138 12:00:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:40.138 12:00:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:40.138 12:00:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:40.138 12:00:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:40.138 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:40.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms 00:23:40.138 00:23:40.138 --- 10.0.0.2 ping statistics --- 00:23:40.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.138 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:23:40.138 12:00:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:40.138 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:40.138 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:23:40.138 00:23:40.138 --- 10.0.0.1 ping statistics --- 00:23:40.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.138 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:23:40.138 12:00:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:40.138 12:00:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # return 0 00:23:40.138 12:00:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:40.138 12:00:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:40.138 12:00:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:40.138 12:00:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:40.138 12:00:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:40.138 12:00:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:40.138 12:00:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:40.138 12:00:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:40.138 12:00:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:40.138 12:00:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:40.138 12:00:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.138 12:00:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # nvmfpid=2015210 00:23:40.138 12:00:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # waitforlisten 2015210 00:23:40.138 12:00:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:40.138 12:00:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 2015210 ']' 00:23:40.138 12:00:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:40.138 12:00:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:40.138 12:00:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:40.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:40.138 12:00:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:40.138 12:00:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.138 [2024-10-11 12:00:42.210614] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:23:40.138 [2024-10-11 12:00:42.210689] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:40.138 [2024-10-11 12:00:42.302762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.138 [2024-10-11 12:00:42.353656] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:40.138 [2024-10-11 12:00:42.353707] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:40.138 [2024-10-11 12:00:42.353716] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:40.138 [2024-10-11 12:00:42.353724] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:40.138 [2024-10-11 12:00:42.353730] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:40.138 [2024-10-11 12:00:42.354578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:40.399 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:40.399 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:23:40.399 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:40.399 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:40.399 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.399 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:40.399 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:40.399 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.399 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.399 [2024-10-11 12:00:43.092574] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:40.399 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.399 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:40.399 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.400 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.660 null0 00:23:40.660 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.660 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:40.660 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.660 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.660 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.660 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:40.660 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:23:40.660 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.660 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.660 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g fbdacb7af3144e52aba80f7b1cf04e40 00:23:40.660 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.660 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.660 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.660 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:40.660 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.660 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.660 [2024-10-11 12:00:43.152938] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:40.660 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.660 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:40.660 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.660 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.921 nvme0n1 00:23:40.921 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.922 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:40.922 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.922 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.922 [ 00:23:40.922 { 00:23:40.922 "name": "nvme0n1", 00:23:40.922 "aliases": [ 00:23:40.922 "fbdacb7a-f314-4e52-aba8-0f7b1cf04e40" 00:23:40.922 ], 00:23:40.922 "product_name": "NVMe disk", 00:23:40.922 "block_size": 512, 00:23:40.922 "num_blocks": 2097152, 00:23:40.922 "uuid": "fbdacb7a-f314-4e52-aba8-0f7b1cf04e40", 00:23:40.922 "numa_id": 0, 00:23:40.922 "assigned_rate_limits": { 00:23:40.922 "rw_ios_per_sec": 0, 00:23:40.922 "rw_mbytes_per_sec": 0, 00:23:40.922 "r_mbytes_per_sec": 0, 00:23:40.922 "w_mbytes_per_sec": 0 00:23:40.922 }, 00:23:40.922 "claimed": false, 00:23:40.922 "zoned": false, 00:23:40.922 "supported_io_types": { 00:23:40.922 "read": true, 00:23:40.922 "write": true, 00:23:40.922 "unmap": false, 00:23:40.922 "flush": true, 00:23:40.922 "reset": true, 00:23:40.922 "nvme_admin": true, 00:23:40.922 "nvme_io": true, 00:23:40.922 "nvme_io_md": false, 00:23:40.922 "write_zeroes": true, 00:23:40.922 "zcopy": false, 00:23:40.922 "get_zone_info": false, 00:23:40.922 "zone_management": false, 00:23:40.922 "zone_append": false, 00:23:40.922 "compare": true, 00:23:40.922 "compare_and_write": true, 00:23:40.922 "abort": true, 00:23:40.922 "seek_hole": false, 00:23:40.922 "seek_data": false, 00:23:40.922 "copy": true, 00:23:40.922 "nvme_iov_md": false 00:23:40.922 }, 00:23:40.922 
"memory_domains": [ 00:23:40.922 { 00:23:40.922 "dma_device_id": "system", 00:23:40.922 "dma_device_type": 1 00:23:40.922 } 00:23:40.922 ], 00:23:40.922 "driver_specific": { 00:23:40.922 "nvme": [ 00:23:40.922 { 00:23:40.922 "trid": { 00:23:40.922 "trtype": "TCP", 00:23:40.922 "adrfam": "IPv4", 00:23:40.922 "traddr": "10.0.0.2", 00:23:40.922 "trsvcid": "4420", 00:23:40.922 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:40.922 }, 00:23:40.922 "ctrlr_data": { 00:23:40.922 "cntlid": 1, 00:23:40.922 "vendor_id": "0x8086", 00:23:40.922 "model_number": "SPDK bdev Controller", 00:23:40.922 "serial_number": "00000000000000000000", 00:23:40.922 "firmware_revision": "25.01", 00:23:40.922 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:40.922 "oacs": { 00:23:40.922 "security": 0, 00:23:40.922 "format": 0, 00:23:40.922 "firmware": 0, 00:23:40.922 "ns_manage": 0 00:23:40.922 }, 00:23:40.922 "multi_ctrlr": true, 00:23:40.922 "ana_reporting": false 00:23:40.922 }, 00:23:40.922 "vs": { 00:23:40.922 "nvme_version": "1.3" 00:23:40.922 }, 00:23:40.922 "ns_data": { 00:23:40.922 "id": 1, 00:23:40.922 "can_share": true 00:23:40.922 } 00:23:40.922 } 00:23:40.922 ], 00:23:40.922 "mp_policy": "active_passive" 00:23:40.922 } 00:23:40.922 } 00:23:40.922 ] 00:23:40.922 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.922 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:40.922 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.922 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.922 [2024-10-11 12:00:43.430703] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:40.922 [2024-10-11 12:00:43.430800] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1068050 (9): Bad file descriptor 00:23:40.922 [2024-10-11 12:00:43.563181] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:40.922 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.922 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:40.922 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.922 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.922 [ 00:23:40.922 { 00:23:40.922 "name": "nvme0n1", 00:23:40.922 "aliases": [ 00:23:40.922 "fbdacb7a-f314-4e52-aba8-0f7b1cf04e40" 00:23:40.922 ], 00:23:40.922 "product_name": "NVMe disk", 00:23:40.922 "block_size": 512, 00:23:40.922 "num_blocks": 2097152, 00:23:40.922 "uuid": "fbdacb7a-f314-4e52-aba8-0f7b1cf04e40", 00:23:40.922 "numa_id": 0, 00:23:40.922 "assigned_rate_limits": { 00:23:40.922 "rw_ios_per_sec": 0, 00:23:40.922 "rw_mbytes_per_sec": 0, 00:23:40.922 "r_mbytes_per_sec": 0, 00:23:40.922 "w_mbytes_per_sec": 0 00:23:40.922 }, 00:23:40.922 "claimed": false, 00:23:40.922 "zoned": false, 00:23:40.922 "supported_io_types": { 00:23:40.922 "read": true, 00:23:40.922 "write": true, 00:23:40.922 "unmap": false, 00:23:40.922 "flush": true, 00:23:40.922 "reset": true, 00:23:40.922 "nvme_admin": true, 00:23:40.922 "nvme_io": true, 00:23:40.922 "nvme_io_md": false, 00:23:40.922 "write_zeroes": true, 00:23:40.922 "zcopy": false, 00:23:40.922 "get_zone_info": false, 00:23:40.922 "zone_management": false, 00:23:40.922 "zone_append": false, 00:23:40.922 "compare": true, 00:23:40.922 "compare_and_write": true, 00:23:40.922 "abort": true, 00:23:40.922 "seek_hole": false, 00:23:40.922 "seek_data": false, 00:23:40.922 "copy": true, 00:23:40.922 "nvme_iov_md": false 00:23:40.922 }, 00:23:40.922 "memory_domains": [ 00:23:40.922 { 00:23:40.922 "dma_device_id": "system", 00:23:40.922 "dma_device_type": 1 00:23:40.922 } 00:23:40.922 ], 00:23:40.922 "driver_specific": { 00:23:40.922 "nvme": [ 00:23:40.922 { 00:23:40.922 "trid": { 00:23:40.922 "trtype": "TCP", 00:23:40.922 "adrfam": "IPv4", 00:23:40.922 "traddr": "10.0.0.2", 00:23:40.922 "trsvcid": "4420", 00:23:40.922 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:40.922 }, 00:23:40.922 "ctrlr_data": { 00:23:40.922 "cntlid": 2, 00:23:40.922 "vendor_id": "0x8086", 00:23:40.922 "model_number": "SPDK bdev Controller", 00:23:40.922 "serial_number": "00000000000000000000", 00:23:40.922 "firmware_revision": "25.01", 00:23:40.922 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:40.922 "oacs": { 00:23:40.922 "security": 0, 00:23:40.922 "format": 0, 00:23:40.922 "firmware": 0, 00:23:40.922 "ns_manage": 0 00:23:40.922 }, 00:23:40.922 "multi_ctrlr": true, 00:23:40.922 "ana_reporting": false 00:23:40.922 }, 00:23:40.922 "vs": { 00:23:40.922 "nvme_version": "1.3" 00:23:40.922 }, 00:23:40.922 "ns_data": { 00:23:40.922 "id": 1, 00:23:40.922 "can_share": true 00:23:40.922 } 00:23:40.922 } 00:23:40.922 ], 00:23:40.922 "mp_policy": "active_passive" 00:23:40.922 } 00:23:40.922 } 00:23:40.922 ] 00:23:40.922 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.922 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.922 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.922 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.922 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:23:40.922 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:40.922 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.DE5ThPj0Mx 00:23:40.922 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:40.922 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.DE5ThPj0Mx 00:23:41.183 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.DE5ThPj0Mx 00:23:41.183 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.184 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:41.184 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.184 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:41.184 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.184 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:41.184 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.184 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:41.184 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.184 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:41.184 [2024-10-11 12:00:43.655467] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:41.184 [2024-10-11 12:00:43.655643] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:41.184 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.184 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:23:41.184 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.184 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:41.184 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.184 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:41.184 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.184 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:41.184 [2024-10-11 12:00:43.679548] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:41.184 nvme0n1 00:23:41.184 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.184 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:23:41.184 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.184 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:41.184 [ 00:23:41.184 { 00:23:41.184 "name": "nvme0n1", 00:23:41.184 "aliases": [ 00:23:41.184 "fbdacb7a-f314-4e52-aba8-0f7b1cf04e40" 00:23:41.184 ], 00:23:41.184 "product_name": "NVMe disk", 00:23:41.184 "block_size": 512, 00:23:41.184 "num_blocks": 2097152, 00:23:41.184 "uuid": "fbdacb7a-f314-4e52-aba8-0f7b1cf04e40", 00:23:41.184 "numa_id": 0, 00:23:41.184 "assigned_rate_limits": { 00:23:41.184 "rw_ios_per_sec": 0, 00:23:41.184 "rw_mbytes_per_sec": 0, 00:23:41.184 "r_mbytes_per_sec": 0, 00:23:41.184 "w_mbytes_per_sec": 0 00:23:41.184 }, 00:23:41.184 "claimed": false, 00:23:41.184 "zoned": false, 00:23:41.184 "supported_io_types": { 00:23:41.184 "read": true, 00:23:41.184 "write": true, 00:23:41.184 "unmap": false, 00:23:41.184 "flush": true, 00:23:41.184 "reset": true, 00:23:41.184 "nvme_admin": true, 00:23:41.184 "nvme_io": true, 00:23:41.184 "nvme_io_md": false, 00:23:41.184 "write_zeroes": true, 00:23:41.184 "zcopy": false, 00:23:41.184 "get_zone_info": false, 00:23:41.184 "zone_management": false, 00:23:41.184 "zone_append": false, 00:23:41.184 "compare": true, 00:23:41.184 "compare_and_write": true, 00:23:41.184 "abort": true, 00:23:41.184 "seek_hole": false, 00:23:41.184 "seek_data": false, 00:23:41.184 "copy": true, 00:23:41.184 "nvme_iov_md": false 00:23:41.184 }, 00:23:41.184 "memory_domains": [ 00:23:41.184 { 00:23:41.184 "dma_device_id": "system", 00:23:41.184 "dma_device_type": 1 00:23:41.184 } 00:23:41.184 ], 00:23:41.184 "driver_specific": { 00:23:41.184 "nvme": [ 00:23:41.184 { 00:23:41.184 "trid": { 00:23:41.184 "trtype": "TCP", 00:23:41.184 "adrfam": "IPv4", 00:23:41.184 "traddr": "10.0.0.2", 00:23:41.184 "trsvcid": "4421", 00:23:41.184 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:41.184 }, 00:23:41.184 "ctrlr_data": { 00:23:41.184 "cntlid": 3, 00:23:41.184 "vendor_id": "0x8086", 00:23:41.184 "model_number": "SPDK bdev Controller", 00:23:41.184 "serial_number": "00000000000000000000", 00:23:41.184 "firmware_revision": "25.01", 00:23:41.184 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:41.184 "oacs": { 00:23:41.184 "security": 0, 00:23:41.184 "format": 0, 00:23:41.184 "firmware": 0, 00:23:41.184 "ns_manage": 0 00:23:41.184 }, 00:23:41.184 "multi_ctrlr": true, 00:23:41.184 "ana_reporting": false 00:23:41.184 }, 00:23:41.184 "vs": { 00:23:41.184 "nvme_version": "1.3" 00:23:41.184 }, 00:23:41.184 "ns_data": { 00:23:41.184 "id": 1, 00:23:41.184 "can_share": true 00:23:41.184 } 00:23:41.184 } 00:23:41.184 ], 00:23:41.184 "mp_policy": "active_passive" 00:23:41.184 } 00:23:41.184 } 00:23:41.184 ] 00:23:41.184 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.184 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:41.184 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.184 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:41.184 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.184 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.DE5ThPj0Mx 00:23:41.184 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
00:23:41.184 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:23:41.184 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:41.184 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:23:41.184 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:41.184 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:23:41.184 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:41.184 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:41.184 rmmod nvme_tcp 00:23:41.184 rmmod nvme_fabrics 00:23:41.184 rmmod nvme_keyring 00:23:41.184 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:41.184 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:23:41.184 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:23:41.184 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@515 -- # '[' -n 2015210 ']' 00:23:41.184 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # killprocess 2015210 00:23:41.184 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 2015210 ']' 00:23:41.184 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 2015210 00:23:41.184 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:23:41.184 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:41.184 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2015210 00:23:41.445 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:41.445 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:41.445 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2015210' 00:23:41.445 killing process with pid 2015210 00:23:41.445 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 2015210 00:23:41.445 12:00:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 2015210 00:23:41.445 12:00:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:41.445 12:00:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:41.445 12:00:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:41.445 12:00:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:23:41.445 12:00:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-save 00:23:41.445 12:00:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:41.445 12:00:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-restore 00:23:41.445 12:00:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:41.445 12:00:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:41.445 12:00:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:23:41.445 12:00:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:41.445 12:00:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:43.990 00:23:43.990 real 0m11.994s 00:23:43.990 user 0m4.218s 00:23:43.990 sys 0m6.365s 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:43.990 ************************************ 00:23:43.990 END TEST nvmf_async_init 00:23:43.990 ************************************ 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.990 ************************************ 00:23:43.990 START TEST dma 00:23:43.990 ************************************ 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:43.990 * Looking for test storage... 00:23:43.990 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lcov --version 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:43.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.990 --rc genhtml_branch_coverage=1 00:23:43.990 --rc genhtml_function_coverage=1 00:23:43.990 --rc genhtml_legend=1 00:23:43.990 --rc geninfo_all_blocks=1 00:23:43.990 --rc geninfo_unexecuted_blocks=1 00:23:43.990 00:23:43.990 ' 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:43.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.990 --rc genhtml_branch_coverage=1 00:23:43.990 --rc genhtml_function_coverage=1 00:23:43.990 --rc genhtml_legend=1 00:23:43.990 --rc geninfo_all_blocks=1 00:23:43.990 --rc geninfo_unexecuted_blocks=1 00:23:43.990 00:23:43.990 ' 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:43.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.990 --rc genhtml_branch_coverage=1 00:23:43.990 --rc genhtml_function_coverage=1 00:23:43.990 --rc genhtml_legend=1 00:23:43.990 --rc geninfo_all_blocks=1 00:23:43.990 --rc geninfo_unexecuted_blocks=1 00:23:43.990 00:23:43.990 ' 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:43.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.990 --rc genhtml_branch_coverage=1 00:23:43.990 --rc genhtml_function_coverage=1 00:23:43.990 --rc genhtml_legend=1 00:23:43.990 --rc geninfo_all_blocks=1 00:23:43.990 --rc geninfo_unexecuted_blocks=1 00:23:43.990 00:23:43.990 ' 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:43.990 
12:00:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:23:43.990 12:00:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:43.991 12:00:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:43.991 12:00:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:43.991 12:00:46 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.991 12:00:46 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.991 12:00:46 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.991 12:00:46 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:23:43.991 12:00:46 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.991 12:00:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:23:43.991 12:00:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:43.991 12:00:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:43.991 12:00:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:43.991 12:00:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:43.991 12:00:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:43.991 12:00:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:43.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:43.991 12:00:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:43.991 12:00:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:43.991 12:00:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:43.991 12:00:46 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:43.991 12:00:46 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:23:43.991 00:23:43.991 real 0m0.237s 00:23:43.991 user 0m0.139s 00:23:43.991 sys 0m0.114s 00:23:43.991 12:00:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:43.991 12:00:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:43.991 ************************************ 00:23:43.991 END TEST dma 00:23:43.991 ************************************ 00:23:43.991 12:00:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:43.991 12:00:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:43.991 12:00:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:43.991 12:00:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.991 ************************************ 00:23:43.991 START TEST nvmf_identify 00:23:43.991 
************************************ 00:23:43.991 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:43.991 * Looking for test storage... 00:23:43.991 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:43.991 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:43.991 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:23:43.991 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:44.252 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:44.252 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:44.252 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:44.252 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:44.252 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:23:44.252 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:23:44.252 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:23:44.252 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:23:44.252 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:23:44.252 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:23:44.252 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:23:44.252 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:44.252 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:23:44.252 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:23:44.252 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:44.252 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:44.252 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:23:44.252 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:23:44.252 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:44.252 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:23:44.252 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:23:44.252 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:23:44.252 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:23:44.252 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:44.252 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:44.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.253 --rc genhtml_branch_coverage=1 00:23:44.253 --rc genhtml_function_coverage=1 00:23:44.253 --rc genhtml_legend=1 00:23:44.253 --rc geninfo_all_blocks=1 00:23:44.253 --rc geninfo_unexecuted_blocks=1 00:23:44.253 00:23:44.253 ' 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:44.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.253 --rc genhtml_branch_coverage=1 00:23:44.253 --rc genhtml_function_coverage=1 00:23:44.253 --rc genhtml_legend=1 00:23:44.253 --rc geninfo_all_blocks=1 00:23:44.253 --rc geninfo_unexecuted_blocks=1 00:23:44.253 00:23:44.253 ' 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:44.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.253 --rc genhtml_branch_coverage=1 00:23:44.253 --rc genhtml_function_coverage=1 00:23:44.253 --rc genhtml_legend=1 00:23:44.253 --rc geninfo_all_blocks=1 00:23:44.253 --rc geninfo_unexecuted_blocks=1 00:23:44.253 00:23:44.253 ' 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:44.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.253 --rc genhtml_branch_coverage=1 00:23:44.253 --rc genhtml_function_coverage=1 00:23:44.253 --rc genhtml_legend=1 00:23:44.253 --rc geninfo_all_blocks=1 00:23:44.253 --rc geninfo_unexecuted_blocks=1 00:23:44.253 00:23:44.253 ' 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:44.253 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:23:44.253 12:00:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:52.398 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:52.398 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:52.398 Found net devices under 0000:31:00.0: cvl_0_0 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:52.398 Found net devices under 0000:31:00.1: cvl_0_1 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # is_hw=yes 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:52.398 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:52.399 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:52.399 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:52.399 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:52.399 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:52.399 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.684 ms 00:23:52.399 00:23:52.399 --- 10.0.0.2 ping statistics --- 00:23:52.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.399 rtt min/avg/max/mdev = 0.684/0.684/0.684/0.000 ms 00:23:52.399 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:52.399 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:52.399 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.339 ms 00:23:52.399 00:23:52.399 --- 10.0.0.1 ping statistics --- 00:23:52.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.399 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:23:52.399 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:52.399 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # return 0 00:23:52.399 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:52.399 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:52.399 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:52.399 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:52.399 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:52.399 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:52.399 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:52.399 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:52.399 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:52.399 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:52.399 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2019874 00:23:52.399 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:52.399 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:52.399 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2019874 00:23:52.399 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 2019874 ']' 00:23:52.399 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:52.399 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:52.399 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:52.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:52.399 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:52.399 12:00:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:52.399 [2024-10-11 12:00:54.606603] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:23:52.399 [2024-10-11 12:00:54.606672] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:52.399 [2024-10-11 12:00:54.696468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:52.399 [2024-10-11 12:00:54.751171] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:52.399 [2024-10-11 12:00:54.751224] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:52.399 [2024-10-11 12:00:54.751232] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:52.399 [2024-10-11 12:00:54.751240] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:52.399 [2024-10-11 12:00:54.751246] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:52.399 [2024-10-11 12:00:54.753367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:52.399 [2024-10-11 12:00:54.753527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:52.399 [2024-10-11 12:00:54.753686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:52.399 [2024-10-11 12:00:54.753686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:52.972 12:00:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:52.972 12:00:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:23:52.972 12:00:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:52.972 12:00:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.972 12:00:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:52.972 [2024-10-11 12:00:55.438001] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:52.972 12:00:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.972 12:00:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:52.972 12:00:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:52.972 12:00:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:52.972 12:00:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:52.972 12:00:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.972 12:00:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:52.972 Malloc0 00:23:52.972 12:00:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.972 12:00:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:52.972 12:00:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.972 12:00:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:52.972 12:00:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.972 12:00:55 nvmf_tcp.nvmf_host.nvmf_identify -- 
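The nvmf_tcp_init sequence captured above reduces to the following shell steps. This is a minimal sketch, assuming the same cvl_0_0/cvl_0_1 port names reported by the PCI scan earlier in this run (they will differ on other NICs) and the 10.0.0.0/24 addressing used throughout the test:

# Sketch of the TCP test topology built by nvmf_tcp_init above: the target-side
# port is moved into its own network namespace, both sides get a /24 address,
# NVMe/TCP traffic to port 4420 is allowed in, and reachability is verified.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # NVMe/TCP listener port
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
modprobe nvme-tcp                                   # host-side kernel driver, as loaded above

The nvmf_tgt application is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is the host/identify.sh@18 invocation recorded above and the source of the reactor startup notices just printed.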
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:52.972 12:00:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.972 12:00:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:52.972 12:00:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.972 12:00:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:52.972 12:00:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.972 12:00:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:52.972 [2024-10-11 12:00:55.562173] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:52.972 12:00:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.972 12:00:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:52.972 12:00:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.972 12:00:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:52.972 12:00:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.972 12:00:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:52.972 12:00:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.972 12:00:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:52.972 [ 00:23:52.972 { 00:23:52.972 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:52.972 "subtype": "Discovery", 00:23:52.972 "listen_addresses": [ 00:23:52.972 { 00:23:52.972 "trtype": "TCP", 00:23:52.972 "adrfam": "IPv4", 00:23:52.972 "traddr": "10.0.0.2", 00:23:52.972 "trsvcid": "4420" 00:23:52.972 } 00:23:52.972 ], 00:23:52.972 "allow_any_host": true, 00:23:52.972 "hosts": [] 00:23:52.972 }, 00:23:52.972 { 00:23:52.972 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:52.972 "subtype": "NVMe", 00:23:52.972 "listen_addresses": [ 00:23:52.972 { 00:23:52.972 "trtype": "TCP", 00:23:52.972 "adrfam": "IPv4", 00:23:52.972 "traddr": "10.0.0.2", 00:23:52.972 "trsvcid": "4420" 00:23:52.972 } 00:23:52.972 ], 00:23:52.972 "allow_any_host": true, 00:23:52.972 "hosts": [], 00:23:52.972 "serial_number": "SPDK00000000000001", 00:23:52.972 "model_number": "SPDK bdev Controller", 00:23:52.972 "max_namespaces": 32, 00:23:52.972 "min_cntlid": 1, 00:23:52.972 "max_cntlid": 65519, 00:23:52.972 "namespaces": [ 00:23:52.972 { 00:23:52.972 "nsid": 1, 00:23:52.972 "bdev_name": "Malloc0", 00:23:52.972 "name": "Malloc0", 00:23:52.972 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:52.972 "eui64": "ABCDEF0123456789", 00:23:52.972 "uuid": "535bc999-bc07-410c-894a-fff79d793bb7" 00:23:52.972 } 00:23:52.972 ] 00:23:52.972 } 00:23:52.972 ] 00:23:52.972 12:00:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.972 12:00:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:52.972 [2024-10-11 12:00:55.626010] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:23:52.972 [2024-10-11 12:00:55.626060] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2020056 ] 00:23:52.972 [2024-10-11 12:00:55.661408] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:23:52.972 [2024-10-11 12:00:55.661472] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:52.972 [2024-10-11 12:00:55.661478] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:52.972 [2024-10-11 12:00:55.661495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:52.972 [2024-10-11 12:00:55.661506] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:52.972 [2024-10-11 12:00:55.665494] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:23:52.972 [2024-10-11 12:00:55.665543] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x13a95f0 0 00:23:52.972 [2024-10-11 12:00:55.673080] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:52.972 [2024-10-11 12:00:55.673098] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:52.972 [2024-10-11 12:00:55.673104] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:52.972 [2024-10-11 12:00:55.673108] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:52.972 [2024-10-11 12:00:55.673154] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:52.972 [2024-10-11 12:00:55.673160] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:52.972 [2024-10-11 12:00:55.673164] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13a95f0) 00:23:52.972 [2024-10-11 12:00:55.673182] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:52.972 [2024-10-11 12:00:55.673206] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14138c0, cid 0, qid 0 00:23:53.236 [2024-10-11 12:00:55.681077] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.236 [2024-10-11 12:00:55.681089] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.236 [2024-10-11 12:00:55.681092] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.236 [2024-10-11 12:00:55.681097] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14138c0) on tqpair=0x13a95f0 00:23:53.236 [2024-10-11 12:00:55.681112] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:53.236 [2024-10-11 12:00:55.681121] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:23:53.236 [2024-10-11 12:00:55.681127] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:23:53.236 [2024-10-11 12:00:55.681143] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.236 [2024-10-11 12:00:55.681147] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.236 [2024-10-11 12:00:55.681151] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13a95f0) 00:23:53.236 [2024-10-11 12:00:55.681159] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.236 [2024-10-11 12:00:55.681175] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14138c0, cid 0, qid 0 00:23:53.236 [2024-10-11 12:00:55.681383] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.236 [2024-10-11 12:00:55.681389] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.236 [2024-10-11 12:00:55.681393] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.236 [2024-10-11 12:00:55.681402] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14138c0) on tqpair=0x13a95f0 00:23:53.236 [2024-10-11 12:00:55.681409] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:23:53.236 [2024-10-11 12:00:55.681417] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:23:53.236 [2024-10-11 12:00:55.681424] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.236 [2024-10-11 12:00:55.681428] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.236 [2024-10-11 12:00:55.681431] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13a95f0) 00:23:53.236 [2024-10-11 12:00:55.681438] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.236 [2024-10-11 12:00:55.681449] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14138c0, cid 0, qid 0 00:23:53.236 [2024-10-11 12:00:55.681634] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.236 [2024-10-11 12:00:55.681640] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.236 [2024-10-11 12:00:55.681644] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.236 [2024-10-11 12:00:55.681648] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14138c0) on tqpair=0x13a95f0 00:23:53.236 [2024-10-11 12:00:55.681653] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:23:53.236 [2024-10-11 12:00:55.681662] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:23:53.236 [2024-10-11 12:00:55.681668] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.236 [2024-10-11 12:00:55.681672] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.236 [2024-10-11 12:00:55.681676] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13a95f0) 00:23:53.236 [2024-10-11 12:00:55.681682] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.236 [2024-10-11 12:00:55.681693] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14138c0, cid 0, qid 0 00:23:53.236 
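The target configuration issued through rpc_cmd above, and the identify pass whose nvme_ctrlr/nvme_tcp debug trace continues below, correspond roughly to the standalone sequence sketched here. Repo-relative paths and the use of scripts/rpc.py are assumptions; the test's rpc_cmd helper is taken to forward the same arguments to SPDK's RPC service on the default /var/tmp/spdk.sock:

# Create the TCP transport and a 64 MiB malloc-backed namespace, expose it
# through nqn.2016-06.io.spdk:cnode1, and add data + discovery listeners on
# 10.0.0.2:4420 -- the same calls shown in the rpc_cmd lines above.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_get_subsystems    # prints the JSON listing shown above

# Query the discovery subsystem over that listener, producing the controller
# report that follows this trace.
./build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
    -L all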
[2024-10-11 12:00:55.681882] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.236 [2024-10-11 12:00:55.681888] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.236 [2024-10-11 12:00:55.681892] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.236 [2024-10-11 12:00:55.681896] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14138c0) on tqpair=0x13a95f0 00:23:53.236 [2024-10-11 12:00:55.681901] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:53.236 [2024-10-11 12:00:55.681911] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.236 [2024-10-11 12:00:55.681915] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.236 [2024-10-11 12:00:55.681918] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13a95f0) 00:23:53.236 [2024-10-11 12:00:55.681925] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.236 [2024-10-11 12:00:55.681935] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14138c0, cid 0, qid 0 00:23:53.236 [2024-10-11 12:00:55.682116] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.236 [2024-10-11 12:00:55.682123] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.236 [2024-10-11 12:00:55.682127] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.236 [2024-10-11 12:00:55.682130] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14138c0) on tqpair=0x13a95f0 00:23:53.237 [2024-10-11 12:00:55.682135] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:23:53.237 [2024-10-11 12:00:55.682143] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:23:53.237 [2024-10-11 12:00:55.682151] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:53.237 [2024-10-11 12:00:55.682257] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:23:53.237 [2024-10-11 12:00:55.682262] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:53.237 [2024-10-11 12:00:55.682273] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.237 [2024-10-11 12:00:55.682277] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.237 [2024-10-11 12:00:55.682281] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13a95f0) 00:23:53.237 [2024-10-11 12:00:55.682287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.237 [2024-10-11 12:00:55.682298] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14138c0, cid 0, qid 0 00:23:53.237 [2024-10-11 12:00:55.682478] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.237 [2024-10-11 12:00:55.682485] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =5 00:23:53.237 [2024-10-11 12:00:55.682488] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.237 [2024-10-11 12:00:55.682492] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14138c0) on tqpair=0x13a95f0 00:23:53.237 [2024-10-11 12:00:55.682497] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:53.237 [2024-10-11 12:00:55.682506] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.237 [2024-10-11 12:00:55.682510] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.237 [2024-10-11 12:00:55.682513] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13a95f0) 00:23:53.237 [2024-10-11 12:00:55.682520] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.237 [2024-10-11 12:00:55.682530] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14138c0, cid 0, qid 0 00:23:53.237 [2024-10-11 12:00:55.682718] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.237 [2024-10-11 12:00:55.682724] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.237 [2024-10-11 12:00:55.682728] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.237 [2024-10-11 12:00:55.682732] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14138c0) on tqpair=0x13a95f0 00:23:53.237 [2024-10-11 12:00:55.682736] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:53.237 [2024-10-11 12:00:55.682741] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:23:53.237 [2024-10-11 12:00:55.682749] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:23:53.237 [2024-10-11 12:00:55.682758] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:23:53.237 [2024-10-11 12:00:55.682768] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.237 [2024-10-11 12:00:55.682772] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13a95f0) 00:23:53.237 [2024-10-11 12:00:55.682779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.237 [2024-10-11 12:00:55.682789] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14138c0, cid 0, qid 0 00:23:53.237 [2024-10-11 12:00:55.683098] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:53.237 [2024-10-11 12:00:55.683105] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:53.237 [2024-10-11 12:00:55.683108] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:53.237 [2024-10-11 12:00:55.683113] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13a95f0): datao=0, datal=4096, cccid=0 00:23:53.237 [2024-10-11 12:00:55.683118] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14138c0) on tqpair(0x13a95f0): expected_datao=0, 
payload_size=4096 00:23:53.237 [2024-10-11 12:00:55.683123] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.237 [2024-10-11 12:00:55.683136] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:53.237 [2024-10-11 12:00:55.683141] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:53.237 [2024-10-11 12:00:55.729072] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.237 [2024-10-11 12:00:55.729085] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.237 [2024-10-11 12:00:55.729088] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.237 [2024-10-11 12:00:55.729092] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14138c0) on tqpair=0x13a95f0 00:23:53.237 [2024-10-11 12:00:55.729103] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:23:53.237 [2024-10-11 12:00:55.729110] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:23:53.237 [2024-10-11 12:00:55.729115] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:23:53.237 [2024-10-11 12:00:55.729121] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:23:53.237 [2024-10-11 12:00:55.729127] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:23:53.237 [2024-10-11 12:00:55.729132] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:23:53.237 [2024-10-11 12:00:55.729141] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:23:53.237 [2024-10-11 12:00:55.729149] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.237 [2024-10-11 12:00:55.729154] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.237 [2024-10-11 12:00:55.729157] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13a95f0) 00:23:53.237 [2024-10-11 12:00:55.729166] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:53.237 [2024-10-11 12:00:55.729180] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14138c0, cid 0, qid 0 00:23:53.237 [2024-10-11 12:00:55.729376] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.237 [2024-10-11 12:00:55.729383] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.237 [2024-10-11 12:00:55.729386] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.237 [2024-10-11 12:00:55.729390] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14138c0) on tqpair=0x13a95f0 00:23:53.237 [2024-10-11 12:00:55.729400] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.237 [2024-10-11 12:00:55.729403] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.237 [2024-10-11 12:00:55.729407] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13a95f0) 00:23:53.237 [2024-10-11 12:00:55.729413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:53.237 [2024-10-11 12:00:55.729419] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.237 [2024-10-11 12:00:55.729423] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.237 [2024-10-11 12:00:55.729431] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x13a95f0) 00:23:53.237 [2024-10-11 12:00:55.729437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:53.237 [2024-10-11 12:00:55.729443] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.237 [2024-10-11 12:00:55.729447] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.237 [2024-10-11 12:00:55.729450] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x13a95f0) 00:23:53.237 [2024-10-11 12:00:55.729456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:53.237 [2024-10-11 12:00:55.729462] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.237 [2024-10-11 12:00:55.729466] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.237 [2024-10-11 12:00:55.729469] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13a95f0) 00:23:53.237 [2024-10-11 12:00:55.729475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:53.237 [2024-10-11 12:00:55.729480] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:23:53.237 [2024-10-11 12:00:55.729493] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:53.237 [2024-10-11 12:00:55.729499] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.237 [2024-10-11 12:00:55.729503] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13a95f0) 00:23:53.237 [2024-10-11 12:00:55.729510] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.237 [2024-10-11 12:00:55.729522] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14138c0, cid 0, qid 0 00:23:53.237 [2024-10-11 12:00:55.729527] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1413a40, cid 1, qid 0 00:23:53.237 [2024-10-11 12:00:55.729532] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1413bc0, cid 2, qid 0 00:23:53.237 [2024-10-11 12:00:55.729537] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1413d40, cid 3, qid 0 00:23:53.237 [2024-10-11 12:00:55.729542] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1413ec0, cid 4, qid 0 00:23:53.237 [2024-10-11 12:00:55.729764] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.237 [2024-10-11 12:00:55.729771] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.237 [2024-10-11 12:00:55.729774] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.237 [2024-10-11 12:00:55.729778] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1413ec0) on tqpair=0x13a95f0 00:23:53.237 [2024-10-11 12:00:55.729784] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:23:53.237 [2024-10-11 12:00:55.729790] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:23:53.237 [2024-10-11 12:00:55.729802] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.237 [2024-10-11 12:00:55.729806] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13a95f0) 00:23:53.237 [2024-10-11 12:00:55.729812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.237 [2024-10-11 12:00:55.729822] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1413ec0, cid 4, qid 0 00:23:53.237 [2024-10-11 12:00:55.730055] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:53.237 [2024-10-11 12:00:55.730068] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:53.237 [2024-10-11 12:00:55.730072] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:53.237 [2024-10-11 12:00:55.730079] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13a95f0): datao=0, datal=4096, cccid=4 00:23:53.237 [2024-10-11 12:00:55.730083] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1413ec0) on tqpair(0x13a95f0): expected_datao=0, payload_size=4096 00:23:53.237 [2024-10-11 12:00:55.730088] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.237 [2024-10-11 12:00:55.730095] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:53.238 [2024-10-11 12:00:55.730099] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:53.238 [2024-10-11 12:00:55.730299] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.238 [2024-10-11 12:00:55.730305] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.238 [2024-10-11 12:00:55.730308] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.238 [2024-10-11 12:00:55.730312] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1413ec0) on tqpair=0x13a95f0 00:23:53.238 [2024-10-11 12:00:55.730328] nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:23:53.238 [2024-10-11 12:00:55.730360] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.238 [2024-10-11 12:00:55.730364] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13a95f0) 00:23:53.238 [2024-10-11 12:00:55.730370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.238 [2024-10-11 12:00:55.730378] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.238 [2024-10-11 12:00:55.730382] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.238 [2024-10-11 12:00:55.730386] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13a95f0) 00:23:53.238 [2024-10-11 12:00:55.730392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:53.238 [2024-10-11 
12:00:55.730405] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1413ec0, cid 4, qid 0 00:23:53.238 [2024-10-11 12:00:55.730410] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1414040, cid 5, qid 0 00:23:53.238 [2024-10-11 12:00:55.730667] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:53.238 [2024-10-11 12:00:55.730673] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:53.238 [2024-10-11 12:00:55.730677] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:53.238 [2024-10-11 12:00:55.730680] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13a95f0): datao=0, datal=1024, cccid=4 00:23:53.238 [2024-10-11 12:00:55.730685] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1413ec0) on tqpair(0x13a95f0): expected_datao=0, payload_size=1024 00:23:53.238 [2024-10-11 12:00:55.730689] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.238 [2024-10-11 12:00:55.730696] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:53.238 [2024-10-11 12:00:55.730699] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:53.238 [2024-10-11 12:00:55.730705] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.238 [2024-10-11 12:00:55.730711] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.238 [2024-10-11 12:00:55.730714] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.238 [2024-10-11 12:00:55.730718] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1414040) on tqpair=0x13a95f0 00:23:53.238 [2024-10-11 12:00:55.775071] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.238 [2024-10-11 12:00:55.775084] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.238 [2024-10-11 12:00:55.775087] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.238 [2024-10-11 12:00:55.775091] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1413ec0) on tqpair=0x13a95f0 00:23:53.238 [2024-10-11 12:00:55.775113] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.238 [2024-10-11 12:00:55.775121] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13a95f0) 00:23:53.238 [2024-10-11 12:00:55.775130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.238 [2024-10-11 12:00:55.775148] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1413ec0, cid 4, qid 0 00:23:53.238 [2024-10-11 12:00:55.775415] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:53.238 [2024-10-11 12:00:55.775422] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:53.238 [2024-10-11 12:00:55.775425] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:53.238 [2024-10-11 12:00:55.775429] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13a95f0): datao=0, datal=3072, cccid=4 00:23:53.238 [2024-10-11 12:00:55.775433] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1413ec0) on tqpair(0x13a95f0): expected_datao=0, payload_size=3072 00:23:53.238 [2024-10-11 12:00:55.775438] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.238 [2024-10-11 12:00:55.775455] 
nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:53.238 [2024-10-11 12:00:55.775459] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:53.238 [2024-10-11 12:00:55.817219] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.238 [2024-10-11 12:00:55.817229] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.238 [2024-10-11 12:00:55.817232] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.238 [2024-10-11 12:00:55.817236] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1413ec0) on tqpair=0x13a95f0 00:23:53.238 [2024-10-11 12:00:55.817247] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.238 [2024-10-11 12:00:55.817251] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13a95f0) 00:23:53.238 [2024-10-11 12:00:55.817258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.238 [2024-10-11 12:00:55.817274] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1413ec0, cid 4, qid 0 00:23:53.238 [2024-10-11 12:00:55.817513] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:53.238 [2024-10-11 12:00:55.817519] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:53.238 [2024-10-11 12:00:55.817522] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:53.238 [2024-10-11 12:00:55.817526] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13a95f0): datao=0, datal=8, cccid=4 00:23:53.238 [2024-10-11 12:00:55.817530] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1413ec0) on tqpair(0x13a95f0): expected_datao=0, payload_size=8 00:23:53.238 [2024-10-11 12:00:55.817535] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.238 [2024-10-11 12:00:55.817542] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:53.238 [2024-10-11 12:00:55.817545] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:53.238 [2024-10-11 12:00:55.862073] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.238 [2024-10-11 12:00:55.862081] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.238 [2024-10-11 12:00:55.862085] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.238 [2024-10-11 12:00:55.862089] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1413ec0) on tqpair=0x13a95f0 00:23:53.238 ===================================================== 00:23:53.238 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:53.238 ===================================================== 00:23:53.238 Controller Capabilities/Features 00:23:53.238 ================================ 00:23:53.238 Vendor ID: 0000 00:23:53.238 Subsystem Vendor ID: 0000 00:23:53.238 Serial Number: .................... 00:23:53.238 Model Number: ........................................ 
00:23:53.238 Firmware Version: 25.01 00:23:53.238 Recommended Arb Burst: 0 00:23:53.238 IEEE OUI Identifier: 00 00 00 00:23:53.238 Multi-path I/O 00:23:53.238 May have multiple subsystem ports: No 00:23:53.238 May have multiple controllers: No 00:23:53.238 Associated with SR-IOV VF: No 00:23:53.238 Max Data Transfer Size: 131072 00:23:53.238 Max Number of Namespaces: 0 00:23:53.238 Max Number of I/O Queues: 1024 00:23:53.238 NVMe Specification Version (VS): 1.3 00:23:53.238 NVMe Specification Version (Identify): 1.3 00:23:53.238 Maximum Queue Entries: 128 00:23:53.238 Contiguous Queues Required: Yes 00:23:53.238 Arbitration Mechanisms Supported 00:23:53.238 Weighted Round Robin: Not Supported 00:23:53.238 Vendor Specific: Not Supported 00:23:53.238 Reset Timeout: 15000 ms 00:23:53.238 Doorbell Stride: 4 bytes 00:23:53.238 NVM Subsystem Reset: Not Supported 00:23:53.238 Command Sets Supported 00:23:53.238 NVM Command Set: Supported 00:23:53.238 Boot Partition: Not Supported 00:23:53.238 Memory Page Size Minimum: 4096 bytes 00:23:53.238 Memory Page Size Maximum: 4096 bytes 00:23:53.238 Persistent Memory Region: Not Supported 00:23:53.238 Optional Asynchronous Events Supported 00:23:53.238 Namespace Attribute Notices: Not Supported 00:23:53.238 Firmware Activation Notices: Not Supported 00:23:53.238 ANA Change Notices: Not Supported 00:23:53.238 PLE Aggregate Log Change Notices: Not Supported 00:23:53.238 LBA Status Info Alert Notices: Not Supported 00:23:53.238 EGE Aggregate Log Change Notices: Not Supported 00:23:53.238 Normal NVM Subsystem Shutdown event: Not Supported 00:23:53.238 Zone Descriptor Change Notices: Not Supported 00:23:53.238 Discovery Log Change Notices: Supported 00:23:53.238 Controller Attributes 00:23:53.238 128-bit Host Identifier: Not Supported 00:23:53.238 Non-Operational Permissive Mode: Not Supported 00:23:53.238 NVM Sets: Not Supported 00:23:53.238 Read Recovery Levels: Not Supported 00:23:53.238 Endurance Groups: Not Supported 00:23:53.238 Predictable Latency Mode: Not Supported 00:23:53.238 Traffic Based Keep ALive: Not Supported 00:23:53.238 Namespace Granularity: Not Supported 00:23:53.238 SQ Associations: Not Supported 00:23:53.238 UUID List: Not Supported 00:23:53.238 Multi-Domain Subsystem: Not Supported 00:23:53.238 Fixed Capacity Management: Not Supported 00:23:53.238 Variable Capacity Management: Not Supported 00:23:53.238 Delete Endurance Group: Not Supported 00:23:53.238 Delete NVM Set: Not Supported 00:23:53.238 Extended LBA Formats Supported: Not Supported 00:23:53.238 Flexible Data Placement Supported: Not Supported 00:23:53.238 00:23:53.238 Controller Memory Buffer Support 00:23:53.238 ================================ 00:23:53.238 Supported: No 00:23:53.238 00:23:53.238 Persistent Memory Region Support 00:23:53.238 ================================ 00:23:53.238 Supported: No 00:23:53.238 00:23:53.238 Admin Command Set Attributes 00:23:53.238 ============================ 00:23:53.238 Security Send/Receive: Not Supported 00:23:53.238 Format NVM: Not Supported 00:23:53.238 Firmware Activate/Download: Not Supported 00:23:53.238 Namespace Management: Not Supported 00:23:53.238 Device Self-Test: Not Supported 00:23:53.238 Directives: Not Supported 00:23:53.238 NVMe-MI: Not Supported 00:23:53.238 Virtualization Management: Not Supported 00:23:53.238 Doorbell Buffer Config: Not Supported 00:23:53.238 Get LBA Status Capability: Not Supported 00:23:53.238 Command & Feature Lockdown Capability: Not Supported 00:23:53.238 Abort Command Limit: 1 00:23:53.238 Async 
Event Request Limit: 4 00:23:53.238 Number of Firmware Slots: N/A 00:23:53.238 Firmware Slot 1 Read-Only: N/A 00:23:53.239 Firmware Activation Without Reset: N/A 00:23:53.239 Multiple Update Detection Support: N/A 00:23:53.239 Firmware Update Granularity: No Information Provided 00:23:53.239 Per-Namespace SMART Log: No 00:23:53.239 Asymmetric Namespace Access Log Page: Not Supported 00:23:53.239 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:53.239 Command Effects Log Page: Not Supported 00:23:53.239 Get Log Page Extended Data: Supported 00:23:53.239 Telemetry Log Pages: Not Supported 00:23:53.239 Persistent Event Log Pages: Not Supported 00:23:53.239 Supported Log Pages Log Page: May Support 00:23:53.239 Commands Supported & Effects Log Page: Not Supported 00:23:53.239 Feature Identifiers & Effects Log Page:May Support 00:23:53.239 NVMe-MI Commands & Effects Log Page: May Support 00:23:53.239 Data Area 4 for Telemetry Log: Not Supported 00:23:53.239 Error Log Page Entries Supported: 128 00:23:53.239 Keep Alive: Not Supported 00:23:53.239 00:23:53.239 NVM Command Set Attributes 00:23:53.239 ========================== 00:23:53.239 Submission Queue Entry Size 00:23:53.239 Max: 1 00:23:53.239 Min: 1 00:23:53.239 Completion Queue Entry Size 00:23:53.239 Max: 1 00:23:53.239 Min: 1 00:23:53.239 Number of Namespaces: 0 00:23:53.239 Compare Command: Not Supported 00:23:53.239 Write Uncorrectable Command: Not Supported 00:23:53.239 Dataset Management Command: Not Supported 00:23:53.239 Write Zeroes Command: Not Supported 00:23:53.239 Set Features Save Field: Not Supported 00:23:53.239 Reservations: Not Supported 00:23:53.239 Timestamp: Not Supported 00:23:53.239 Copy: Not Supported 00:23:53.239 Volatile Write Cache: Not Present 00:23:53.239 Atomic Write Unit (Normal): 1 00:23:53.239 Atomic Write Unit (PFail): 1 00:23:53.239 Atomic Compare & Write Unit: 1 00:23:53.239 Fused Compare & Write: Supported 00:23:53.239 Scatter-Gather List 00:23:53.239 SGL Command Set: Supported 00:23:53.239 SGL Keyed: Supported 00:23:53.239 SGL Bit Bucket Descriptor: Not Supported 00:23:53.239 SGL Metadata Pointer: Not Supported 00:23:53.239 Oversized SGL: Not Supported 00:23:53.239 SGL Metadata Address: Not Supported 00:23:53.239 SGL Offset: Supported 00:23:53.239 Transport SGL Data Block: Not Supported 00:23:53.239 Replay Protected Memory Block: Not Supported 00:23:53.239 00:23:53.239 Firmware Slot Information 00:23:53.239 ========================= 00:23:53.239 Active slot: 0 00:23:53.239 00:23:53.239 00:23:53.239 Error Log 00:23:53.239 ========= 00:23:53.239 00:23:53.239 Active Namespaces 00:23:53.239 ================= 00:23:53.239 Discovery Log Page 00:23:53.239 ================== 00:23:53.239 Generation Counter: 2 00:23:53.239 Number of Records: 2 00:23:53.239 Record Format: 0 00:23:53.239 00:23:53.239 Discovery Log Entry 0 00:23:53.239 ---------------------- 00:23:53.239 Transport Type: 3 (TCP) 00:23:53.239 Address Family: 1 (IPv4) 00:23:53.239 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:53.239 Entry Flags: 00:23:53.239 Duplicate Returned Information: 1 00:23:53.239 Explicit Persistent Connection Support for Discovery: 1 00:23:53.239 Transport Requirements: 00:23:53.239 Secure Channel: Not Required 00:23:53.239 Port ID: 0 (0x0000) 00:23:53.239 Controller ID: 65535 (0xffff) 00:23:53.239 Admin Max SQ Size: 128 00:23:53.239 Transport Service Identifier: 4420 00:23:53.239 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:53.239 Transport Address: 10.0.0.2 00:23:53.239 
Discovery Log Entry 1 00:23:53.239 ---------------------- 00:23:53.239 Transport Type: 3 (TCP) 00:23:53.239 Address Family: 1 (IPv4) 00:23:53.239 Subsystem Type: 2 (NVM Subsystem) 00:23:53.239 Entry Flags: 00:23:53.239 Duplicate Returned Information: 0 00:23:53.239 Explicit Persistent Connection Support for Discovery: 0 00:23:53.239 Transport Requirements: 00:23:53.239 Secure Channel: Not Required 00:23:53.239 Port ID: 0 (0x0000) 00:23:53.239 Controller ID: 65535 (0xffff) 00:23:53.239 Admin Max SQ Size: 128 00:23:53.239 Transport Service Identifier: 4420 00:23:53.239 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:23:53.239 Transport Address: 10.0.0.2 [2024-10-11 12:00:55.862200] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:23:53.239 [2024-10-11 12:00:55.862213] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14138c0) on tqpair=0x13a95f0 00:23:53.239 [2024-10-11 12:00:55.862221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.239 [2024-10-11 12:00:55.862227] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1413a40) on tqpair=0x13a95f0 00:23:53.239 [2024-10-11 12:00:55.862232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.239 [2024-10-11 12:00:55.862239] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1413bc0) on tqpair=0x13a95f0 00:23:53.239 [2024-10-11 12:00:55.862243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.239 [2024-10-11 12:00:55.862248] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1413d40) on tqpair=0x13a95f0 00:23:53.239 [2024-10-11 12:00:55.862253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.239 [2024-10-11 12:00:55.862264] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.239 [2024-10-11 12:00:55.862268] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.239 [2024-10-11 12:00:55.862271] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13a95f0) 00:23:53.239 [2024-10-11 12:00:55.862279] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.239 [2024-10-11 12:00:55.862295] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1413d40, cid 3, qid 0 00:23:53.239 [2024-10-11 12:00:55.862534] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.239 [2024-10-11 12:00:55.862540] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.239 [2024-10-11 12:00:55.862544] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.239 [2024-10-11 12:00:55.862547] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1413d40) on tqpair=0x13a95f0 00:23:53.239 [2024-10-11 12:00:55.862555] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.239 [2024-10-11 12:00:55.862559] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.239 [2024-10-11 12:00:55.862562] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13a95f0) 00:23:53.239 [2024-10-11 
12:00:55.862569] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.239 [2024-10-11 12:00:55.862584] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1413d40, cid 3, qid 0 00:23:53.239 [2024-10-11 12:00:55.862826] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.239 [2024-10-11 12:00:55.862832] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.239 [2024-10-11 12:00:55.862835] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.239 [2024-10-11 12:00:55.862839] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1413d40) on tqpair=0x13a95f0 00:23:53.239 [2024-10-11 12:00:55.862844] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:23:53.240 [2024-10-11 12:00:55.862852] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:23:53.240 [2024-10-11 12:00:55.862862] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.240 [2024-10-11 12:00:55.862866] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.240 [2024-10-11 12:00:55.862870] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13a95f0) 00:23:53.240 [2024-10-11 12:00:55.862877] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.240 [2024-10-11 12:00:55.862887] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1413d40, cid 3, qid 0 00:23:53.240 [2024-10-11 12:00:55.863073] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.240 [2024-10-11 12:00:55.863079] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.240 [2024-10-11 12:00:55.863083] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.240 [2024-10-11 12:00:55.863087] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1413d40) on tqpair=0x13a95f0 00:23:53.240 [2024-10-11 12:00:55.863098] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.240 [2024-10-11 12:00:55.863105] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.240 [2024-10-11 12:00:55.863109] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13a95f0) 00:23:53.240 [2024-10-11 12:00:55.863115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.240 [2024-10-11 12:00:55.863126] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1413d40, cid 3, qid 0 00:23:53.240 [2024-10-11 12:00:55.863320] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.240 [2024-10-11 12:00:55.863326] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.240 [2024-10-11 12:00:55.863330] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.240 [2024-10-11 12:00:55.863334] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1413d40) on tqpair=0x13a95f0 00:23:53.240 [2024-10-11 12:00:55.863344] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.240 [2024-10-11 12:00:55.863347] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.240 [2024-10-11 12:00:55.863351] 
nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13a95f0) 00:23:53.240 [2024-10-11 12:00:55.863358] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.240 [2024-10-11 12:00:55.863368] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1413d40, cid 3, qid 0 00:23:53.240 [2024-10-11 12:00:55.863585] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.240 [2024-10-11 12:00:55.863591] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.240 [2024-10-11 12:00:55.863594] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.240 [2024-10-11 12:00:55.863598] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1413d40) on tqpair=0x13a95f0 00:23:53.240 [2024-10-11 12:00:55.863608] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.240 [2024-10-11 12:00:55.863612] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.240 [2024-10-11 12:00:55.863615] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13a95f0) 00:23:53.240 [2024-10-11 12:00:55.863622] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.240 [2024-10-11 12:00:55.863632] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1413d40, cid 3, qid 0 00:23:53.240 [2024-10-11 12:00:55.863846] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.240 [2024-10-11 12:00:55.863852] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.240 [2024-10-11 12:00:55.863855] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.240 [2024-10-11 12:00:55.863859] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1413d40) on tqpair=0x13a95f0 00:23:53.240 [2024-10-11 12:00:55.863869] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.240 [2024-10-11 12:00:55.863873] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.240 [2024-10-11 12:00:55.863876] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13a95f0) 00:23:53.240 [2024-10-11 12:00:55.863883] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.240 [2024-10-11 12:00:55.863893] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1413d40, cid 3, qid 0 00:23:53.240 [2024-10-11 12:00:55.864087] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.240 [2024-10-11 12:00:55.864093] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.240 [2024-10-11 12:00:55.864097] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.240 [2024-10-11 12:00:55.864101] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1413d40) on tqpair=0x13a95f0 00:23:53.240 [2024-10-11 12:00:55.864110] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.240 [2024-10-11 12:00:55.864114] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.240 [2024-10-11 12:00:55.864118] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13a95f0) 00:23:53.240 [2024-10-11 12:00:55.864129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.240 [2024-10-11 12:00:55.864140] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1413d40, cid 3, qid 0 00:23:53.240 [2024-10-11 12:00:55.864371] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.240 [2024-10-11 12:00:55.864377] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.240 [2024-10-11 12:00:55.864380] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.240 [2024-10-11 12:00:55.864384] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1413d40) on tqpair=0x13a95f0 00:23:53.240 [2024-10-11 12:00:55.864394] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.240 [2024-10-11 12:00:55.864398] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.240 [2024-10-11 12:00:55.864401] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13a95f0) 00:23:53.240 [2024-10-11 12:00:55.864408] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.240 [2024-10-11 12:00:55.864418] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1413d40, cid 3, qid 0 00:23:53.240 [2024-10-11 12:00:55.864623] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.240 [2024-10-11 12:00:55.864629] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.240 [2024-10-11 12:00:55.864632] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.240 [2024-10-11 12:00:55.864636] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1413d40) on tqpair=0x13a95f0 00:23:53.240 [2024-10-11 12:00:55.864646] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.240 [2024-10-11 12:00:55.864650] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.240 [2024-10-11 12:00:55.864653] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13a95f0) 00:23:53.240 [2024-10-11 12:00:55.864660] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.240 [2024-10-11 12:00:55.864670] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1413d40, cid 3, qid 0 00:23:53.240 [2024-10-11 12:00:55.864881] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.240 [2024-10-11 12:00:55.864887] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.240 [2024-10-11 12:00:55.864891] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.240 [2024-10-11 12:00:55.864895] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1413d40) on tqpair=0x13a95f0 00:23:53.240 [2024-10-11 12:00:55.864904] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.240 [2024-10-11 12:00:55.864908] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.240 [2024-10-11 12:00:55.864912] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13a95f0) 00:23:53.240 [2024-10-11 12:00:55.864918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.240 [2024-10-11 12:00:55.864928] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1413d40, cid 3, qid 0 00:23:53.240 
[2024-10-11 12:00:55.865147] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.240 [2024-10-11 12:00:55.865153] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.240 [2024-10-11 12:00:55.865157] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.240 [2024-10-11 12:00:55.865161] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1413d40) on tqpair=0x13a95f0 00:23:53.240 [2024-10-11 12:00:55.865171] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.240 [2024-10-11 12:00:55.865175] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.240 [2024-10-11 12:00:55.865179] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13a95f0) 00:23:53.240 [2024-10-11 12:00:55.865188] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.240 [2024-10-11 12:00:55.865199] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1413d40, cid 3, qid 0 00:23:53.240 [2024-10-11 12:00:55.865389] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.240 [2024-10-11 12:00:55.865395] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.240 [2024-10-11 12:00:55.865398] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.240 [2024-10-11 12:00:55.865402] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1413d40) on tqpair=0x13a95f0 00:23:53.240 [2024-10-11 12:00:55.865412] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.240 [2024-10-11 12:00:55.865416] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.240 [2024-10-11 12:00:55.865419] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13a95f0) 00:23:53.240 [2024-10-11 12:00:55.865426] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.240 [2024-10-11 12:00:55.865436] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1413d40, cid 3, qid 0 00:23:53.240 [2024-10-11 12:00:55.865648] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.240 [2024-10-11 12:00:55.865654] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.240 [2024-10-11 12:00:55.865658] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.240 [2024-10-11 12:00:55.865662] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1413d40) on tqpair=0x13a95f0 00:23:53.240 [2024-10-11 12:00:55.865671] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.240 [2024-10-11 12:00:55.865675] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.240 [2024-10-11 12:00:55.865679] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13a95f0) 00:23:53.240 [2024-10-11 12:00:55.865685] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.240 [2024-10-11 12:00:55.865695] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1413d40, cid 3, qid 0 00:23:53.240 [2024-10-11 12:00:55.865910] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.240 [2024-10-11 12:00:55.865916] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:23:53.240 [2024-10-11 12:00:55.865919] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.240 [2024-10-11 12:00:55.865923] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1413d40) on tqpair=0x13a95f0 00:23:53.240 [2024-10-11 12:00:55.865933] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.240 [2024-10-11 12:00:55.865937] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.241 [2024-10-11 12:00:55.865940] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13a95f0) 00:23:53.241 [2024-10-11 12:00:55.865947] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.241 [2024-10-11 12:00:55.865957] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1413d40, cid 3, qid 0 00:23:53.241 [2024-10-11 12:00:55.870072] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.241 [2024-10-11 12:00:55.870080] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.241 [2024-10-11 12:00:55.870084] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.241 [2024-10-11 12:00:55.870088] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1413d40) on tqpair=0x13a95f0 00:23:53.241 [2024-10-11 12:00:55.870096] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:23:53.241 00:23:53.241 12:00:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:53.241 [2024-10-11 12:00:55.917169] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
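The transport string handed to spdk_nvme_identify above ('trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1') is the same form accepted by SPDK's public host API. A minimal sketch of driving the equivalent connect-and-identify from C, assuming the standard spdk_nvme_transport_id_parse / spdk_nvme_connect / spdk_nvme_ctrlr_get_data calls and with most error handling omitted (the program name "identify_sketch" is arbitrary):

/* Sketch only: connect to the subsystem the test queries and print a few
 * identify-controller fields. Not the identify example itself. */
#include <stdio.h>
#include <string.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid;
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same transport ID string the test passes via -r */
	memset(&trid, 0, sizeof(trid));
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* spdk_nvme_connect() runs the admin-queue bring-up that the debug
	 * records below trace: FABRIC CONNECT, read VS/CAP, CC.EN=1,
	 * IDENTIFY, AER configuration, keep-alive and queue-count setup. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("MN: %.40s  FR: %.8s  namespaces: %u\n",
	       (const char *)cdata->mn, (const char *)cdata->fr, cdata->nn);

	spdk_nvme_detach(ctrlr);
	return 0;
}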
00:23:53.241 [2024-10-11 12:00:55.917213] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2020059 ] 00:23:53.504 [2024-10-11 12:00:55.955146] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:23:53.504 [2024-10-11 12:00:55.955212] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:53.504 [2024-10-11 12:00:55.955218] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:53.504 [2024-10-11 12:00:55.955234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:53.504 [2024-10-11 12:00:55.955246] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:53.504 [2024-10-11 12:00:55.955910] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:23:53.504 [2024-10-11 12:00:55.955957] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xe1b5f0 0 00:23:53.504 [2024-10-11 12:00:55.970080] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:53.504 [2024-10-11 12:00:55.970098] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:53.504 [2024-10-11 12:00:55.970103] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:53.504 [2024-10-11 12:00:55.970107] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:53.504 [2024-10-11 12:00:55.970145] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.504 [2024-10-11 12:00:55.970151] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.504 [2024-10-11 12:00:55.970155] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe1b5f0) 00:23:53.504 [2024-10-11 12:00:55.970171] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:53.504 [2024-10-11 12:00:55.970193] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe858c0, cid 0, qid 0 00:23:53.504 [2024-10-11 12:00:55.978076] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.504 [2024-10-11 12:00:55.978089] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.504 [2024-10-11 12:00:55.978092] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.504 [2024-10-11 12:00:55.978097] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe858c0) on tqpair=0xe1b5f0 00:23:53.504 [2024-10-11 12:00:55.978108] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:53.504 [2024-10-11 12:00:55.978114] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:23:53.504 [2024-10-11 12:00:55.978120] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:23:53.504 [2024-10-11 12:00:55.978135] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.504 [2024-10-11 12:00:55.978139] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.504 [2024-10-11 12:00:55.978143] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe1b5f0) 00:23:53.504 [2024-10-11 12:00:55.978152] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.504 [2024-10-11 12:00:55.978168] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe858c0, cid 0, qid 0 00:23:53.504 [2024-10-11 12:00:55.978384] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.504 [2024-10-11 12:00:55.978392] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.504 [2024-10-11 12:00:55.978400] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.505 [2024-10-11 12:00:55.978405] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe858c0) on tqpair=0xe1b5f0 00:23:53.505 [2024-10-11 12:00:55.978410] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:23:53.505 [2024-10-11 12:00:55.978418] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:23:53.505 [2024-10-11 12:00:55.978425] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.505 [2024-10-11 12:00:55.978429] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.505 [2024-10-11 12:00:55.978433] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe1b5f0) 00:23:53.505 [2024-10-11 12:00:55.978440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.505 [2024-10-11 12:00:55.978451] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe858c0, cid 0, qid 0 00:23:53.505 [2024-10-11 12:00:55.978646] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.505 [2024-10-11 12:00:55.978654] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.505 [2024-10-11 12:00:55.978660] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.505 [2024-10-11 12:00:55.978664] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe858c0) on tqpair=0xe1b5f0 00:23:53.505 [2024-10-11 12:00:55.978670] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:23:53.505 [2024-10-11 12:00:55.978679] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:23:53.505 [2024-10-11 12:00:55.978686] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.505 [2024-10-11 12:00:55.978689] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.505 [2024-10-11 12:00:55.978693] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe1b5f0) 00:23:53.505 [2024-10-11 12:00:55.978700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.505 [2024-10-11 12:00:55.978710] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe858c0, cid 0, qid 0 00:23:53.505 [2024-10-11 12:00:55.978916] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.505 [2024-10-11 12:00:55.978922] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.505 [2024-10-11 12:00:55.978926] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.505 [2024-10-11 12:00:55.978930] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe858c0) on tqpair=0xe1b5f0 00:23:53.505 [2024-10-11 12:00:55.978935] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:53.505 [2024-10-11 12:00:55.978945] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.505 [2024-10-11 12:00:55.978949] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.505 [2024-10-11 12:00:55.978952] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe1b5f0) 00:23:53.505 [2024-10-11 12:00:55.978959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.505 [2024-10-11 12:00:55.978969] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe858c0, cid 0, qid 0 00:23:53.505 [2024-10-11 12:00:55.979157] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.505 [2024-10-11 12:00:55.979164] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.505 [2024-10-11 12:00:55.979168] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.505 [2024-10-11 12:00:55.979172] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe858c0) on tqpair=0xe1b5f0 00:23:53.505 [2024-10-11 12:00:55.979177] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:23:53.505 [2024-10-11 12:00:55.979185] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:23:53.505 [2024-10-11 12:00:55.979193] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:53.505 [2024-10-11 12:00:55.979299] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:23:53.505 [2024-10-11 12:00:55.979304] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:53.505 [2024-10-11 12:00:55.979313] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.505 [2024-10-11 12:00:55.979317] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.505 [2024-10-11 12:00:55.979320] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe1b5f0) 00:23:53.505 [2024-10-11 12:00:55.979327] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.505 [2024-10-11 12:00:55.979338] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe858c0, cid 0, qid 0 00:23:53.505 [2024-10-11 12:00:55.979522] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.505 [2024-10-11 12:00:55.979533] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.505 [2024-10-11 12:00:55.979540] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.505 [2024-10-11 12:00:55.979545] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe858c0) on tqpair=0xe1b5f0 00:23:53.505 [2024-10-11 12:00:55.979553] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:53.505 [2024-10-11 12:00:55.979564] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.505 [2024-10-11 12:00:55.979571] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.505 [2024-10-11 12:00:55.979575] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe1b5f0) 00:23:53.505 [2024-10-11 12:00:55.979584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.505 [2024-10-11 12:00:55.979597] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe858c0, cid 0, qid 0 00:23:53.505 [2024-10-11 12:00:55.979792] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.505 [2024-10-11 12:00:55.979799] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.505 [2024-10-11 12:00:55.979803] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.505 [2024-10-11 12:00:55.979807] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe858c0) on tqpair=0xe1b5f0 00:23:53.505 [2024-10-11 12:00:55.979811] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:53.505 [2024-10-11 12:00:55.979816] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:23:53.505 [2024-10-11 12:00:55.979824] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:23:53.505 [2024-10-11 12:00:55.979839] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:23:53.505 [2024-10-11 12:00:55.979848] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.505 [2024-10-11 12:00:55.979852] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe1b5f0) 00:23:53.505 [2024-10-11 12:00:55.979859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.505 [2024-10-11 12:00:55.979870] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe858c0, cid 0, qid 0 00:23:53.505 [2024-10-11 12:00:55.980122] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:53.505 [2024-10-11 12:00:55.980133] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:53.505 [2024-10-11 12:00:55.980136] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:53.505 [2024-10-11 12:00:55.980141] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe1b5f0): datao=0, datal=4096, cccid=0 00:23:53.505 [2024-10-11 12:00:55.980146] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe858c0) on tqpair(0xe1b5f0): expected_datao=0, payload_size=4096 00:23:53.505 [2024-10-11 12:00:55.980150] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.505 [2024-10-11 12:00:55.980166] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:53.505 [2024-10-11 12:00:55.980170] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:53.505 [2024-10-11 
12:00:56.021224] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.505 [2024-10-11 12:00:56.021238] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.505 [2024-10-11 12:00:56.021242] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.505 [2024-10-11 12:00:56.021246] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe858c0) on tqpair=0xe1b5f0 00:23:53.505 [2024-10-11 12:00:56.021255] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:23:53.505 [2024-10-11 12:00:56.021260] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:23:53.505 [2024-10-11 12:00:56.021265] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:23:53.505 [2024-10-11 12:00:56.021269] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:23:53.505 [2024-10-11 12:00:56.021274] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:23:53.505 [2024-10-11 12:00:56.021279] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:23:53.505 [2024-10-11 12:00:56.021287] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:23:53.505 [2024-10-11 12:00:56.021295] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.505 [2024-10-11 12:00:56.021299] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.505 [2024-10-11 12:00:56.021303] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe1b5f0) 00:23:53.505 [2024-10-11 12:00:56.021311] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:53.505 [2024-10-11 12:00:56.021324] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe858c0, cid 0, qid 0 00:23:53.505 [2024-10-11 12:00:56.021547] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.505 [2024-10-11 12:00:56.021553] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.505 [2024-10-11 12:00:56.021557] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.505 [2024-10-11 12:00:56.021561] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe858c0) on tqpair=0xe1b5f0 00:23:53.505 [2024-10-11 12:00:56.021568] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.505 [2024-10-11 12:00:56.021572] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.505 [2024-10-11 12:00:56.021576] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe1b5f0) 00:23:53.505 [2024-10-11 12:00:56.021582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:53.505 [2024-10-11 12:00:56.021589] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.505 [2024-10-11 12:00:56.021592] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.505 [2024-10-11 12:00:56.021596] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xe1b5f0) 00:23:53.505 
[2024-10-11 12:00:56.021602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:53.505 [2024-10-11 12:00:56.021613] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.505 [2024-10-11 12:00:56.021617] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.505 [2024-10-11 12:00:56.021620] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xe1b5f0) 00:23:53.505 [2024-10-11 12:00:56.021626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:53.505 [2024-10-11 12:00:56.021632] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.506 [2024-10-11 12:00:56.021636] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.506 [2024-10-11 12:00:56.021640] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe1b5f0) 00:23:53.506 [2024-10-11 12:00:56.021646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:53.506 [2024-10-11 12:00:56.021650] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:53.506 [2024-10-11 12:00:56.021663] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:53.506 [2024-10-11 12:00:56.021670] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.506 [2024-10-11 12:00:56.021674] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe1b5f0) 00:23:53.506 [2024-10-11 12:00:56.021681] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.506 [2024-10-11 12:00:56.021693] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe858c0, cid 0, qid 0 00:23:53.506 [2024-10-11 12:00:56.021698] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe85a40, cid 1, qid 0 00:23:53.506 [2024-10-11 12:00:56.021703] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe85bc0, cid 2, qid 0 00:23:53.506 [2024-10-11 12:00:56.021708] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe85d40, cid 3, qid 0 00:23:53.506 [2024-10-11 12:00:56.021713] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe85ec0, cid 4, qid 0 00:23:53.506 [2024-10-11 12:00:56.021955] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.506 [2024-10-11 12:00:56.021961] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.506 [2024-10-11 12:00:56.021965] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.506 [2024-10-11 12:00:56.021969] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe85ec0) on tqpair=0xe1b5f0 00:23:53.506 [2024-10-11 12:00:56.021974] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:23:53.506 [2024-10-11 12:00:56.021979] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:53.506 [2024-10-11 12:00:56.021992] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:23:53.506 [2024-10-11 12:00:56.022001] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:53.506 [2024-10-11 12:00:56.022007] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.506 [2024-10-11 12:00:56.022011] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.506 [2024-10-11 12:00:56.022015] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe1b5f0) 00:23:53.506 [2024-10-11 12:00:56.022022] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:53.506 [2024-10-11 12:00:56.022032] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe85ec0, cid 4, qid 0 00:23:53.506 [2024-10-11 12:00:56.026074] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.506 [2024-10-11 12:00:56.026087] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.506 [2024-10-11 12:00:56.026090] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.506 [2024-10-11 12:00:56.026094] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe85ec0) on tqpair=0xe1b5f0 00:23:53.506 [2024-10-11 12:00:56.026167] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:23:53.506 [2024-10-11 12:00:56.026179] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:53.506 [2024-10-11 12:00:56.026187] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.506 [2024-10-11 12:00:56.026191] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe1b5f0) 00:23:53.506 [2024-10-11 12:00:56.026197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.506 [2024-10-11 12:00:56.026210] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe85ec0, cid 4, qid 0 00:23:53.506 [2024-10-11 12:00:56.026408] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:53.506 [2024-10-11 12:00:56.026415] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:53.506 [2024-10-11 12:00:56.026418] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:53.506 [2024-10-11 12:00:56.026422] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe1b5f0): datao=0, datal=4096, cccid=4 00:23:53.506 [2024-10-11 12:00:56.026427] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe85ec0) on tqpair(0xe1b5f0): expected_datao=0, payload_size=4096 00:23:53.506 [2024-10-11 12:00:56.026431] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.506 [2024-10-11 12:00:56.026439] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:53.506 [2024-10-11 12:00:56.026443] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:53.506 [2024-10-11 12:00:56.026620] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.506 [2024-10-11 12:00:56.026627] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:23:53.506 [2024-10-11 12:00:56.026630] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.506 [2024-10-11 12:00:56.026634] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe85ec0) on tqpair=0xe1b5f0 00:23:53.506 [2024-10-11 12:00:56.026645] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:23:53.506 [2024-10-11 12:00:56.026656] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:23:53.506 [2024-10-11 12:00:56.026666] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:23:53.506 [2024-10-11 12:00:56.026673] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.506 [2024-10-11 12:00:56.026676] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe1b5f0) 00:23:53.506 [2024-10-11 12:00:56.026683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.506 [2024-10-11 12:00:56.026694] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe85ec0, cid 4, qid 0 00:23:53.506 [2024-10-11 12:00:56.026934] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:53.506 [2024-10-11 12:00:56.026941] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:53.506 [2024-10-11 12:00:56.026944] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:53.506 [2024-10-11 12:00:56.026948] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe1b5f0): datao=0, datal=4096, cccid=4 00:23:53.506 [2024-10-11 12:00:56.026953] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe85ec0) on tqpair(0xe1b5f0): expected_datao=0, payload_size=4096 00:23:53.506 [2024-10-11 12:00:56.026959] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.506 [2024-10-11 12:00:56.026966] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:53.506 [2024-10-11 12:00:56.026970] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:53.506 [2024-10-11 12:00:56.027123] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.506 [2024-10-11 12:00:56.027132] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.506 [2024-10-11 12:00:56.027135] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.506 [2024-10-11 12:00:56.027139] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe85ec0) on tqpair=0xe1b5f0 00:23:53.506 [2024-10-11 12:00:56.027169] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:53.506 [2024-10-11 12:00:56.027179] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:53.506 [2024-10-11 12:00:56.027187] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.506 [2024-10-11 12:00:56.027191] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe1b5f0) 00:23:53.506 [2024-10-11 12:00:56.027197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.506 [2024-10-11 12:00:56.027209] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe85ec0, cid 4, qid 0 00:23:53.506 [2024-10-11 12:00:56.027407] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:53.506 [2024-10-11 12:00:56.027414] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:53.506 [2024-10-11 12:00:56.027418] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:53.506 [2024-10-11 12:00:56.027422] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe1b5f0): datao=0, datal=4096, cccid=4 00:23:53.506 [2024-10-11 12:00:56.027426] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe85ec0) on tqpair(0xe1b5f0): expected_datao=0, payload_size=4096 00:23:53.506 [2024-10-11 12:00:56.027430] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.506 [2024-10-11 12:00:56.027437] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:53.506 [2024-10-11 12:00:56.027441] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:53.506 [2024-10-11 12:00:56.027615] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.506 [2024-10-11 12:00:56.027622] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.506 [2024-10-11 12:00:56.027625] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.506 [2024-10-11 12:00:56.027629] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe85ec0) on tqpair=0xe1b5f0 00:23:53.506 [2024-10-11 12:00:56.027637] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:53.506 [2024-10-11 12:00:56.027646] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:23:53.506 [2024-10-11 12:00:56.027657] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:23:53.506 [2024-10-11 12:00:56.027664] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:23:53.506 [2024-10-11 12:00:56.027670] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:53.506 [2024-10-11 12:00:56.027675] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:23:53.506 [2024-10-11 12:00:56.027681] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:23:53.506 [2024-10-11 12:00:56.027686] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:23:53.506 [2024-10-11 12:00:56.027694] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:23:53.506 [2024-10-11 12:00:56.027714] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.506 [2024-10-11 12:00:56.027718] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe1b5f0) 00:23:53.506 [2024-10-11 12:00:56.027725] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.506 [2024-10-11 12:00:56.027732] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.506 [2024-10-11 12:00:56.027736] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.506 [2024-10-11 12:00:56.027740] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe1b5f0) 00:23:53.506 [2024-10-11 12:00:56.027746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:53.506 [2024-10-11 12:00:56.027758] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe85ec0, cid 4, qid 0 00:23:53.506 [2024-10-11 12:00:56.027763] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe86040, cid 5, qid 0 00:23:53.507 [2024-10-11 12:00:56.027997] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.507 [2024-10-11 12:00:56.028004] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.507 [2024-10-11 12:00:56.028007] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.507 [2024-10-11 12:00:56.028011] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe85ec0) on tqpair=0xe1b5f0 00:23:53.507 [2024-10-11 12:00:56.028018] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.507 [2024-10-11 12:00:56.028024] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.507 [2024-10-11 12:00:56.028028] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.507 [2024-10-11 12:00:56.028032] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe86040) on tqpair=0xe1b5f0 00:23:53.507 [2024-10-11 12:00:56.028041] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.507 [2024-10-11 12:00:56.028045] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe1b5f0) 00:23:53.507 [2024-10-11 12:00:56.028051] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.507 [2024-10-11 12:00:56.028068] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe86040, cid 5, qid 0 00:23:53.507 [2024-10-11 12:00:56.028266] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.507 [2024-10-11 12:00:56.028272] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.507 [2024-10-11 12:00:56.028276] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.507 [2024-10-11 12:00:56.028279] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe86040) on tqpair=0xe1b5f0 00:23:53.507 [2024-10-11 12:00:56.028289] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.507 [2024-10-11 12:00:56.028293] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe1b5f0) 00:23:53.507 [2024-10-11 12:00:56.028299] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.507 [2024-10-11 12:00:56.028310] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe86040, cid 5, qid 0 00:23:53.507 [2024-10-11 12:00:56.028514] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.507 [2024-10-11 12:00:56.028520] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:23:53.507 [2024-10-11 12:00:56.028524] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.507 [2024-10-11 12:00:56.028528] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe86040) on tqpair=0xe1b5f0 00:23:53.507 [2024-10-11 12:00:56.028537] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.507 [2024-10-11 12:00:56.028541] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe1b5f0) 00:23:53.507 [2024-10-11 12:00:56.028550] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.507 [2024-10-11 12:00:56.028561] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe86040, cid 5, qid 0 00:23:53.507 [2024-10-11 12:00:56.028749] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.507 [2024-10-11 12:00:56.028756] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.507 [2024-10-11 12:00:56.028759] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.507 [2024-10-11 12:00:56.028763] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe86040) on tqpair=0xe1b5f0 00:23:53.507 [2024-10-11 12:00:56.028780] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.507 [2024-10-11 12:00:56.028784] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe1b5f0) 00:23:53.507 [2024-10-11 12:00:56.028791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.507 [2024-10-11 12:00:56.028799] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.507 [2024-10-11 12:00:56.028802] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe1b5f0) 00:23:53.507 [2024-10-11 12:00:56.028809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.507 [2024-10-11 12:00:56.028816] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.507 [2024-10-11 12:00:56.028820] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xe1b5f0) 00:23:53.507 [2024-10-11 12:00:56.028826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.507 [2024-10-11 12:00:56.028837] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.507 [2024-10-11 12:00:56.028841] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xe1b5f0) 00:23:53.507 [2024-10-11 12:00:56.028847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.507 [2024-10-11 12:00:56.028859] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe86040, cid 5, qid 0 00:23:53.507 [2024-10-11 12:00:56.028864] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe85ec0, cid 4, qid 0 00:23:53.507 [2024-10-11 12:00:56.028869] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe861c0, cid 6, qid 0 00:23:53.507 [2024-10-11 
12:00:56.028874] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe86340, cid 7, qid 0 00:23:53.507 [2024-10-11 12:00:56.029199] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:53.507 [2024-10-11 12:00:56.029207] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:53.507 [2024-10-11 12:00:56.029211] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:53.507 [2024-10-11 12:00:56.029215] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe1b5f0): datao=0, datal=8192, cccid=5 00:23:53.507 [2024-10-11 12:00:56.029219] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe86040) on tqpair(0xe1b5f0): expected_datao=0, payload_size=8192 00:23:53.507 [2024-10-11 12:00:56.029224] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.507 [2024-10-11 12:00:56.029309] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:53.507 [2024-10-11 12:00:56.029314] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:53.507 [2024-10-11 12:00:56.029320] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:53.507 [2024-10-11 12:00:56.029326] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:53.507 [2024-10-11 12:00:56.029329] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:53.507 [2024-10-11 12:00:56.029336] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe1b5f0): datao=0, datal=512, cccid=4 00:23:53.507 [2024-10-11 12:00:56.029341] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe85ec0) on tqpair(0xe1b5f0): expected_datao=0, payload_size=512 00:23:53.507 [2024-10-11 12:00:56.029345] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.507 [2024-10-11 12:00:56.029352] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:53.507 [2024-10-11 12:00:56.029355] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:53.507 [2024-10-11 12:00:56.029361] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:53.507 [2024-10-11 12:00:56.029367] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:53.507 [2024-10-11 12:00:56.029370] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:53.507 [2024-10-11 12:00:56.029374] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe1b5f0): datao=0, datal=512, cccid=6 00:23:53.507 [2024-10-11 12:00:56.029378] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe861c0) on tqpair(0xe1b5f0): expected_datao=0, payload_size=512 00:23:53.507 [2024-10-11 12:00:56.029382] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.507 [2024-10-11 12:00:56.029389] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:53.507 [2024-10-11 12:00:56.029393] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:53.507 [2024-10-11 12:00:56.029398] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:53.507 [2024-10-11 12:00:56.029404] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:53.507 [2024-10-11 12:00:56.029407] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:53.507 [2024-10-11 12:00:56.029411] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe1b5f0): datao=0, datal=4096, cccid=7 00:23:53.507 [2024-10-11 12:00:56.029415] 
nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe86340) on tqpair(0xe1b5f0): expected_datao=0, payload_size=4096 00:23:53.507 [2024-10-11 12:00:56.029420] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.507 [2024-10-11 12:00:56.029437] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:53.507 [2024-10-11 12:00:56.029441] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:53.507 [2024-10-11 12:00:56.029668] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.507 [2024-10-11 12:00:56.029674] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.507 [2024-10-11 12:00:56.029677] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.507 [2024-10-11 12:00:56.029681] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe86040) on tqpair=0xe1b5f0 00:23:53.507 [2024-10-11 12:00:56.029695] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.507 [2024-10-11 12:00:56.029701] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.507 [2024-10-11 12:00:56.029705] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.507 [2024-10-11 12:00:56.029709] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe85ec0) on tqpair=0xe1b5f0 00:23:53.507 [2024-10-11 12:00:56.029720] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.507 [2024-10-11 12:00:56.029726] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.507 [2024-10-11 12:00:56.029730] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.507 [2024-10-11 12:00:56.029734] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe861c0) on tqpair=0xe1b5f0 00:23:53.507 [2024-10-11 12:00:56.029741] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.507 [2024-10-11 12:00:56.029746] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.507 [2024-10-11 12:00:56.029750] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.507 [2024-10-11 12:00:56.029754] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe86340) on tqpair=0xe1b5f0 00:23:53.507 ===================================================== 00:23:53.507 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:53.507 ===================================================== 00:23:53.507 Controller Capabilities/Features 00:23:53.507 ================================ 00:23:53.507 Vendor ID: 8086 00:23:53.507 Subsystem Vendor ID: 8086 00:23:53.507 Serial Number: SPDK00000000000001 00:23:53.507 Model Number: SPDK bdev Controller 00:23:53.507 Firmware Version: 25.01 00:23:53.507 Recommended Arb Burst: 6 00:23:53.507 IEEE OUI Identifier: e4 d2 5c 00:23:53.507 Multi-path I/O 00:23:53.507 May have multiple subsystem ports: Yes 00:23:53.507 May have multiple controllers: Yes 00:23:53.507 Associated with SR-IOV VF: No 00:23:53.507 Max Data Transfer Size: 131072 00:23:53.507 Max Number of Namespaces: 32 00:23:53.507 Max Number of I/O Queues: 127 00:23:53.507 NVMe Specification Version (VS): 1.3 00:23:53.507 NVMe Specification Version (Identify): 1.3 00:23:53.507 Maximum Queue Entries: 128 00:23:53.507 Contiguous Queues Required: Yes 00:23:53.507 Arbitration Mechanisms Supported 00:23:53.507 Weighted Round Robin: Not Supported 00:23:53.507 Vendor Specific: Not Supported 00:23:53.507 Reset Timeout: 15000 ms 00:23:53.507 
Doorbell Stride: 4 bytes 00:23:53.507 NVM Subsystem Reset: Not Supported 00:23:53.507 Command Sets Supported 00:23:53.507 NVM Command Set: Supported 00:23:53.508 Boot Partition: Not Supported 00:23:53.508 Memory Page Size Minimum: 4096 bytes 00:23:53.508 Memory Page Size Maximum: 4096 bytes 00:23:53.508 Persistent Memory Region: Not Supported 00:23:53.508 Optional Asynchronous Events Supported 00:23:53.508 Namespace Attribute Notices: Supported 00:23:53.508 Firmware Activation Notices: Not Supported 00:23:53.508 ANA Change Notices: Not Supported 00:23:53.508 PLE Aggregate Log Change Notices: Not Supported 00:23:53.508 LBA Status Info Alert Notices: Not Supported 00:23:53.508 EGE Aggregate Log Change Notices: Not Supported 00:23:53.508 Normal NVM Subsystem Shutdown event: Not Supported 00:23:53.508 Zone Descriptor Change Notices: Not Supported 00:23:53.508 Discovery Log Change Notices: Not Supported 00:23:53.508 Controller Attributes 00:23:53.508 128-bit Host Identifier: Supported 00:23:53.508 Non-Operational Permissive Mode: Not Supported 00:23:53.508 NVM Sets: Not Supported 00:23:53.508 Read Recovery Levels: Not Supported 00:23:53.508 Endurance Groups: Not Supported 00:23:53.508 Predictable Latency Mode: Not Supported 00:23:53.508 Traffic Based Keep ALive: Not Supported 00:23:53.508 Namespace Granularity: Not Supported 00:23:53.508 SQ Associations: Not Supported 00:23:53.508 UUID List: Not Supported 00:23:53.508 Multi-Domain Subsystem: Not Supported 00:23:53.508 Fixed Capacity Management: Not Supported 00:23:53.508 Variable Capacity Management: Not Supported 00:23:53.508 Delete Endurance Group: Not Supported 00:23:53.508 Delete NVM Set: Not Supported 00:23:53.508 Extended LBA Formats Supported: Not Supported 00:23:53.508 Flexible Data Placement Supported: Not Supported 00:23:53.508 00:23:53.508 Controller Memory Buffer Support 00:23:53.508 ================================ 00:23:53.508 Supported: No 00:23:53.508 00:23:53.508 Persistent Memory Region Support 00:23:53.508 ================================ 00:23:53.508 Supported: No 00:23:53.508 00:23:53.508 Admin Command Set Attributes 00:23:53.508 ============================ 00:23:53.508 Security Send/Receive: Not Supported 00:23:53.508 Format NVM: Not Supported 00:23:53.508 Firmware Activate/Download: Not Supported 00:23:53.508 Namespace Management: Not Supported 00:23:53.508 Device Self-Test: Not Supported 00:23:53.508 Directives: Not Supported 00:23:53.508 NVMe-MI: Not Supported 00:23:53.508 Virtualization Management: Not Supported 00:23:53.508 Doorbell Buffer Config: Not Supported 00:23:53.508 Get LBA Status Capability: Not Supported 00:23:53.508 Command & Feature Lockdown Capability: Not Supported 00:23:53.508 Abort Command Limit: 4 00:23:53.508 Async Event Request Limit: 4 00:23:53.508 Number of Firmware Slots: N/A 00:23:53.508 Firmware Slot 1 Read-Only: N/A 00:23:53.508 Firmware Activation Without Reset: N/A 00:23:53.508 Multiple Update Detection Support: N/A 00:23:53.508 Firmware Update Granularity: No Information Provided 00:23:53.508 Per-Namespace SMART Log: No 00:23:53.508 Asymmetric Namespace Access Log Page: Not Supported 00:23:53.508 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:23:53.508 Command Effects Log Page: Supported 00:23:53.508 Get Log Page Extended Data: Supported 00:23:53.508 Telemetry Log Pages: Not Supported 00:23:53.508 Persistent Event Log Pages: Not Supported 00:23:53.508 Supported Log Pages Log Page: May Support 00:23:53.508 Commands Supported & Effects Log Page: Not Supported 00:23:53.508 Feature Identifiers & 
Effects Log Page:May Support 00:23:53.508 NVMe-MI Commands & Effects Log Page: May Support 00:23:53.508 Data Area 4 for Telemetry Log: Not Supported 00:23:53.508 Error Log Page Entries Supported: 128 00:23:53.508 Keep Alive: Supported 00:23:53.508 Keep Alive Granularity: 10000 ms 00:23:53.508 00:23:53.508 NVM Command Set Attributes 00:23:53.508 ========================== 00:23:53.508 Submission Queue Entry Size 00:23:53.508 Max: 64 00:23:53.508 Min: 64 00:23:53.508 Completion Queue Entry Size 00:23:53.508 Max: 16 00:23:53.508 Min: 16 00:23:53.508 Number of Namespaces: 32 00:23:53.508 Compare Command: Supported 00:23:53.508 Write Uncorrectable Command: Not Supported 00:23:53.508 Dataset Management Command: Supported 00:23:53.508 Write Zeroes Command: Supported 00:23:53.508 Set Features Save Field: Not Supported 00:23:53.508 Reservations: Supported 00:23:53.508 Timestamp: Not Supported 00:23:53.508 Copy: Supported 00:23:53.508 Volatile Write Cache: Present 00:23:53.508 Atomic Write Unit (Normal): 1 00:23:53.508 Atomic Write Unit (PFail): 1 00:23:53.508 Atomic Compare & Write Unit: 1 00:23:53.508 Fused Compare & Write: Supported 00:23:53.508 Scatter-Gather List 00:23:53.508 SGL Command Set: Supported 00:23:53.508 SGL Keyed: Supported 00:23:53.508 SGL Bit Bucket Descriptor: Not Supported 00:23:53.508 SGL Metadata Pointer: Not Supported 00:23:53.508 Oversized SGL: Not Supported 00:23:53.508 SGL Metadata Address: Not Supported 00:23:53.508 SGL Offset: Supported 00:23:53.508 Transport SGL Data Block: Not Supported 00:23:53.508 Replay Protected Memory Block: Not Supported 00:23:53.508 00:23:53.508 Firmware Slot Information 00:23:53.508 ========================= 00:23:53.508 Active slot: 1 00:23:53.508 Slot 1 Firmware Revision: 25.01 00:23:53.508 00:23:53.508 00:23:53.508 Commands Supported and Effects 00:23:53.508 ============================== 00:23:53.508 Admin Commands 00:23:53.508 -------------- 00:23:53.508 Get Log Page (02h): Supported 00:23:53.508 Identify (06h): Supported 00:23:53.508 Abort (08h): Supported 00:23:53.508 Set Features (09h): Supported 00:23:53.508 Get Features (0Ah): Supported 00:23:53.508 Asynchronous Event Request (0Ch): Supported 00:23:53.508 Keep Alive (18h): Supported 00:23:53.508 I/O Commands 00:23:53.508 ------------ 00:23:53.508 Flush (00h): Supported LBA-Change 00:23:53.508 Write (01h): Supported LBA-Change 00:23:53.508 Read (02h): Supported 00:23:53.508 Compare (05h): Supported 00:23:53.508 Write Zeroes (08h): Supported LBA-Change 00:23:53.508 Dataset Management (09h): Supported LBA-Change 00:23:53.508 Copy (19h): Supported LBA-Change 00:23:53.508 00:23:53.508 Error Log 00:23:53.508 ========= 00:23:53.508 00:23:53.508 Arbitration 00:23:53.508 =========== 00:23:53.508 Arbitration Burst: 1 00:23:53.508 00:23:53.508 Power Management 00:23:53.508 ================ 00:23:53.508 Number of Power States: 1 00:23:53.508 Current Power State: Power State #0 00:23:53.508 Power State #0: 00:23:53.508 Max Power: 0.00 W 00:23:53.508 Non-Operational State: Operational 00:23:53.508 Entry Latency: Not Reported 00:23:53.508 Exit Latency: Not Reported 00:23:53.508 Relative Read Throughput: 0 00:23:53.508 Relative Read Latency: 0 00:23:53.508 Relative Write Throughput: 0 00:23:53.508 Relative Write Latency: 0 00:23:53.508 Idle Power: Not Reported 00:23:53.508 Active Power: Not Reported 00:23:53.508 Non-Operational Permissive Mode: Not Supported 00:23:53.508 00:23:53.508 Health Information 00:23:53.508 ================== 00:23:53.508 Critical Warnings: 00:23:53.508 Available Spare Space: 
OK 00:23:53.508 Temperature: OK 00:23:53.508 Device Reliability: OK 00:23:53.508 Read Only: No 00:23:53.508 Volatile Memory Backup: OK 00:23:53.508 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:53.508 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:23:53.508 Available Spare: 0% 00:23:53.508 Available Spare Threshold: 0% 00:23:53.508 Life Percentage Used:[2024-10-11 12:00:56.029864] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.508 [2024-10-11 12:00:56.029870] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xe1b5f0) 00:23:53.508 [2024-10-11 12:00:56.029878] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.508 [2024-10-11 12:00:56.029890] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe86340, cid 7, qid 0 00:23:53.508 [2024-10-11 12:00:56.034077] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.508 [2024-10-11 12:00:56.034086] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.508 [2024-10-11 12:00:56.034090] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.508 [2024-10-11 12:00:56.034094] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe86340) on tqpair=0xe1b5f0 00:23:53.508 [2024-10-11 12:00:56.034137] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:23:53.508 [2024-10-11 12:00:56.034147] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe858c0) on tqpair=0xe1b5f0 00:23:53.508 [2024-10-11 12:00:56.034154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.508 [2024-10-11 12:00:56.034161] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe85a40) on tqpair=0xe1b5f0 00:23:53.508 [2024-10-11 12:00:56.034165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.508 [2024-10-11 12:00:56.034171] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe85bc0) on tqpair=0xe1b5f0 00:23:53.508 [2024-10-11 12:00:56.034175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.508 [2024-10-11 12:00:56.034180] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe85d40) on tqpair=0xe1b5f0 00:23:53.508 [2024-10-11 12:00:56.034185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.508 [2024-10-11 12:00:56.034194] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.509 [2024-10-11 12:00:56.034198] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.509 [2024-10-11 12:00:56.034202] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe1b5f0) 00:23:53.509 [2024-10-11 12:00:56.034209] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.509 [2024-10-11 12:00:56.034223] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe85d40, cid 3, qid 0 00:23:53.509 [2024-10-11 12:00:56.034427] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.509 [2024-10-11 12:00:56.034434] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.509 [2024-10-11 12:00:56.034437] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.509 [2024-10-11 12:00:56.034441] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe85d40) on tqpair=0xe1b5f0 00:23:53.509 [2024-10-11 12:00:56.034448] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.509 [2024-10-11 12:00:56.034452] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.509 [2024-10-11 12:00:56.034456] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe1b5f0) 00:23:53.509 [2024-10-11 12:00:56.034462] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.509 [2024-10-11 12:00:56.034476] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe85d40, cid 3, qid 0 00:23:53.509 [2024-10-11 12:00:56.034694] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.509 [2024-10-11 12:00:56.034701] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.509 [2024-10-11 12:00:56.034704] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.509 [2024-10-11 12:00:56.034708] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe85d40) on tqpair=0xe1b5f0 00:23:53.509 [2024-10-11 12:00:56.034713] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:23:53.509 [2024-10-11 12:00:56.034721] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:23:53.509 [2024-10-11 12:00:56.034731] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.509 [2024-10-11 12:00:56.034735] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.509 [2024-10-11 12:00:56.034739] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe1b5f0) 00:23:53.509 [2024-10-11 12:00:56.034745] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.509 [2024-10-11 12:00:56.034756] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe85d40, cid 3, qid 0 00:23:53.509 [2024-10-11 12:00:56.034967] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.509 [2024-10-11 12:00:56.034973] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.509 [2024-10-11 12:00:56.034977] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.509 [2024-10-11 12:00:56.034981] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe85d40) on tqpair=0xe1b5f0 00:23:53.509 [2024-10-11 12:00:56.034991] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.509 [2024-10-11 12:00:56.034995] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.509 [2024-10-11 12:00:56.034998] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe1b5f0) 00:23:53.509 [2024-10-11 12:00:56.035005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.509 [2024-10-11 12:00:56.035015] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe85d40, cid 3, qid 0 00:23:53.509 [2024-10-11 12:00:56.035235] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.509 [2024-10-11 12:00:56.035242] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.509 [2024-10-11 12:00:56.035245] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.509 [2024-10-11 12:00:56.035249] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe85d40) on tqpair=0xe1b5f0 00:23:53.509 [2024-10-11 12:00:56.035260] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.509 [2024-10-11 12:00:56.035264] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.509 [2024-10-11 12:00:56.035267] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe1b5f0) 00:23:53.509 [2024-10-11 12:00:56.035274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.509 [2024-10-11 12:00:56.035285] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe85d40, cid 3, qid 0 00:23:53.509 [2024-10-11 12:00:56.035488] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.509 [2024-10-11 12:00:56.035495] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.509 [2024-10-11 12:00:56.035498] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.509 [2024-10-11 12:00:56.035502] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe85d40) on tqpair=0xe1b5f0 00:23:53.509 [2024-10-11 12:00:56.035513] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.509 [2024-10-11 12:00:56.035517] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.509 [2024-10-11 12:00:56.035520] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe1b5f0) 00:23:53.509 [2024-10-11 12:00:56.035527] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.509 [2024-10-11 12:00:56.035537] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe85d40, cid 3, qid 0 00:23:53.509 [2024-10-11 12:00:56.035720] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.509 [2024-10-11 12:00:56.035728] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.509 [2024-10-11 12:00:56.035731] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.509 [2024-10-11 12:00:56.035735] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe85d40) on tqpair=0xe1b5f0 00:23:53.509 [2024-10-11 12:00:56.035748] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.509 [2024-10-11 12:00:56.035752] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.509 [2024-10-11 12:00:56.035756] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe1b5f0) 00:23:53.509 [2024-10-11 12:00:56.035763] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.509 [2024-10-11 12:00:56.035773] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe85d40, cid 3, qid 0 00:23:53.509 [2024-10-11 12:00:56.035966] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.509 [2024-10-11 12:00:56.035973] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.509 [2024-10-11 12:00:56.035976] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.509 [2024-10-11 12:00:56.035980] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe85d40) on tqpair=0xe1b5f0 00:23:53.509 [2024-10-11 12:00:56.035991] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.509 [2024-10-11 12:00:56.035994] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.509 [2024-10-11 12:00:56.035998] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe1b5f0) 00:23:53.509 [2024-10-11 12:00:56.036005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.509 [2024-10-11 12:00:56.036015] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe85d40, cid 3, qid 0 00:23:53.509 [2024-10-11 12:00:56.036204] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.509 [2024-10-11 12:00:56.036211] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.509 [2024-10-11 12:00:56.036214] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.509 [2024-10-11 12:00:56.036218] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe85d40) on tqpair=0xe1b5f0 00:23:53.509 [2024-10-11 12:00:56.036228] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.509 [2024-10-11 12:00:56.036232] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.509 [2024-10-11 12:00:56.036236] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe1b5f0) 00:23:53.509 [2024-10-11 12:00:56.036243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.509 [2024-10-11 12:00:56.036253] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe85d40, cid 3, qid 0 00:23:53.509 [2024-10-11 12:00:56.036470] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.509 [2024-10-11 12:00:56.036476] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.509 [2024-10-11 12:00:56.036480] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.509 [2024-10-11 12:00:56.036484] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe85d40) on tqpair=0xe1b5f0 00:23:53.509 [2024-10-11 12:00:56.036494] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.509 [2024-10-11 12:00:56.036497] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.509 [2024-10-11 12:00:56.036501] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe1b5f0) 00:23:53.509 [2024-10-11 12:00:56.036508] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.509 [2024-10-11 12:00:56.036518] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe85d40, cid 3, qid 0 00:23:53.509 [2024-10-11 12:00:56.036701] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.509 [2024-10-11 12:00:56.036707] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.509 [2024-10-11 12:00:56.036711] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.509 [2024-10-11 12:00:56.036715] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe85d40) on tqpair=0xe1b5f0 00:23:53.509 
[2024-10-11 12:00:56.036725] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.509 [2024-10-11 12:00:56.036732] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.509 [2024-10-11 12:00:56.036736] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe1b5f0) 00:23:53.509 [2024-10-11 12:00:56.036743] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.509 [2024-10-11 12:00:56.036753] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe85d40, cid 3, qid 0 00:23:53.509 [2024-10-11 12:00:56.036960] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.509 [2024-10-11 12:00:56.036967] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.509 [2024-10-11 12:00:56.036970] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.509 [2024-10-11 12:00:56.036974] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe85d40) on tqpair=0xe1b5f0 00:23:53.509 [2024-10-11 12:00:56.036984] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.509 [2024-10-11 12:00:56.036988] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.509 [2024-10-11 12:00:56.036992] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe1b5f0) 00:23:53.510 [2024-10-11 12:00:56.036999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.510 [2024-10-11 12:00:56.037009] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe85d40, cid 3, qid 0 00:23:53.510 [2024-10-11 12:00:56.037202] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.510 [2024-10-11 12:00:56.037209] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.510 [2024-10-11 12:00:56.037212] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.510 [2024-10-11 12:00:56.037216] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe85d40) on tqpair=0xe1b5f0 00:23:53.510 [2024-10-11 12:00:56.037226] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.510 [2024-10-11 12:00:56.037230] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.510 [2024-10-11 12:00:56.037234] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe1b5f0) 00:23:53.510 [2024-10-11 12:00:56.037241] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.510 [2024-10-11 12:00:56.037251] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe85d40, cid 3, qid 0 00:23:53.510 [2024-10-11 12:00:56.037472] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.510 [2024-10-11 12:00:56.037478] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.510 [2024-10-11 12:00:56.037482] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.510 [2024-10-11 12:00:56.037485] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe85d40) on tqpair=0xe1b5f0 00:23:53.510 [2024-10-11 12:00:56.037495] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.510 [2024-10-11 12:00:56.037499] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.510 [2024-10-11 
12:00:56.037503] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe1b5f0) 00:23:53.510 [2024-10-11 12:00:56.037510] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.510 [2024-10-11 12:00:56.037520] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe85d40, cid 3, qid 0 00:23:53.510 [2024-10-11 12:00:56.037696] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.510 [2024-10-11 12:00:56.037703] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.510 [2024-10-11 12:00:56.037708] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.510 [2024-10-11 12:00:56.037712] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe85d40) on tqpair=0xe1b5f0 00:23:53.510 [2024-10-11 12:00:56.037722] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.510 [2024-10-11 12:00:56.037726] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.510 [2024-10-11 12:00:56.037732] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe1b5f0) 00:23:53.510 [2024-10-11 12:00:56.037739] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.510 [2024-10-11 12:00:56.037750] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe85d40, cid 3, qid 0 00:23:53.510 [2024-10-11 12:00:56.037953] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.510 [2024-10-11 12:00:56.037960] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.510 [2024-10-11 12:00:56.037963] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.510 [2024-10-11 12:00:56.037967] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe85d40) on tqpair=0xe1b5f0 00:23:53.510 [2024-10-11 12:00:56.037977] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:53.510 [2024-10-11 12:00:56.037981] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:53.510 [2024-10-11 12:00:56.037984] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe1b5f0) 00:23:53.510 [2024-10-11 12:00:56.037991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.510 [2024-10-11 12:00:56.038001] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe85d40, cid 3, qid 0 00:23:53.510 [2024-10-11 12:00:56.042072] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:53.510 [2024-10-11 12:00:56.042082] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:53.510 [2024-10-11 12:00:56.042086] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:53.510 [2024-10-11 12:00:56.042090] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe85d40) on tqpair=0xe1b5f0 00:23:53.510 [2024-10-11 12:00:56.042098] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:23:53.510 0% 00:23:53.510 Data Units Read: 0 00:23:53.510 Data Units Written: 0 00:23:53.510 Host Read Commands: 0 00:23:53.510 Host Write Commands: 0 00:23:53.510 Controller Busy Time: 0 minutes 00:23:53.510 Power Cycles: 0 00:23:53.510 Power On Hours: 0 hours 00:23:53.510 Unsafe 
Shutdowns: 0 00:23:53.510 Unrecoverable Media Errors: 0 00:23:53.510 Lifetime Error Log Entries: 0 00:23:53.510 Warning Temperature Time: 0 minutes 00:23:53.510 Critical Temperature Time: 0 minutes 00:23:53.510 00:23:53.510 Number of Queues 00:23:53.510 ================ 00:23:53.510 Number of I/O Submission Queues: 127 00:23:53.510 Number of I/O Completion Queues: 127 00:23:53.510 00:23:53.510 Active Namespaces 00:23:53.510 ================= 00:23:53.510 Namespace ID:1 00:23:53.510 Error Recovery Timeout: Unlimited 00:23:53.510 Command Set Identifier: NVM (00h) 00:23:53.510 Deallocate: Supported 00:23:53.510 Deallocated/Unwritten Error: Not Supported 00:23:53.510 Deallocated Read Value: Unknown 00:23:53.510 Deallocate in Write Zeroes: Not Supported 00:23:53.510 Deallocated Guard Field: 0xFFFF 00:23:53.510 Flush: Supported 00:23:53.510 Reservation: Supported 00:23:53.510 Namespace Sharing Capabilities: Multiple Controllers 00:23:53.510 Size (in LBAs): 131072 (0GiB) 00:23:53.510 Capacity (in LBAs): 131072 (0GiB) 00:23:53.510 Utilization (in LBAs): 131072 (0GiB) 00:23:53.510 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:53.510 EUI64: ABCDEF0123456789 00:23:53.510 UUID: 535bc999-bc07-410c-894a-fff79d793bb7 00:23:53.510 Thin Provisioning: Not Supported 00:23:53.510 Per-NS Atomic Units: Yes 00:23:53.510 Atomic Boundary Size (Normal): 0 00:23:53.510 Atomic Boundary Size (PFail): 0 00:23:53.510 Atomic Boundary Offset: 0 00:23:53.510 Maximum Single Source Range Length: 65535 00:23:53.510 Maximum Copy Length: 65535 00:23:53.510 Maximum Source Range Count: 1 00:23:53.510 NGUID/EUI64 Never Reused: No 00:23:53.510 Namespace Write Protected: No 00:23:53.510 Number of LBA Formats: 1 00:23:53.510 Current LBA Format: LBA Format #00 00:23:53.510 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:53.510 00:23:53.510 12:00:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:53.510 12:00:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:53.510 12:00:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.510 12:00:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:53.510 12:00:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.510 12:00:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:53.510 12:00:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:53.510 12:00:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:53.510 12:00:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:23:53.510 12:00:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:53.510 12:00:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:23:53.510 12:00:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:53.510 12:00:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:53.510 rmmod nvme_tcp 00:23:53.510 rmmod nvme_fabrics 00:23:53.510 rmmod nvme_keyring 00:23:53.510 12:00:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:53.510 12:00:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:23:53.510 12:00:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:23:53.510 12:00:56 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@515 -- # '[' -n 2019874 ']' 00:23:53.510 12:00:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # killprocess 2019874 00:23:53.510 12:00:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 2019874 ']' 00:23:53.510 12:00:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 2019874 00:23:53.510 12:00:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:23:53.510 12:00:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:53.510 12:00:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2019874 00:23:53.771 12:00:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:53.771 12:00:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:53.771 12:00:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2019874' 00:23:53.771 killing process with pid 2019874 00:23:53.771 12:00:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 2019874 00:23:53.771 12:00:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 2019874 00:23:53.771 12:00:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:53.771 12:00:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:53.771 12:00:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:53.771 12:00:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:23:53.771 12:00:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-save 00:23:53.771 12:00:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:53.771 12:00:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-restore 00:23:53.771 12:00:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:53.771 12:00:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:53.771 12:00:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:53.771 12:00:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:53.771 12:00:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:56.318 12:00:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:56.318 00:23:56.318 real 0m11.899s 00:23:56.318 user 0m8.788s 00:23:56.318 sys 0m6.374s 00:23:56.318 12:00:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:56.318 12:00:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:56.318 ************************************ 00:23:56.318 END TEST nvmf_identify 00:23:56.318 ************************************ 00:23:56.318 12:00:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:56.318 12:00:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:56.318 12:00:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:23:56.318 12:00:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.318 ************************************ 00:23:56.318 START TEST nvmf_perf 00:23:56.318 ************************************ 00:23:56.318 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:56.318 * Looking for test storage... 00:23:56.318 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:56.318 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:56.318 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:23:56.318 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:56.318 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:56.318 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:56.318 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:56.318 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:56.318 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:23:56.318 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:23:56.318 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:23:56.318 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:23:56.318 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:23:56.318 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:23:56.318 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:23:56.318 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:56.318 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:23:56.318 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:23:56.318 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:56.318 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:56.318 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:23:56.318 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:23:56.318 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:56.318 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:23:56.318 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:56.318 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:23:56.318 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:23:56.318 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:56.318 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:23:56.318 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:56.318 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:56.318 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:56.318 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:23:56.318 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:56.318 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:56.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.318 --rc genhtml_branch_coverage=1 00:23:56.318 --rc genhtml_function_coverage=1 00:23:56.318 --rc genhtml_legend=1 00:23:56.318 --rc geninfo_all_blocks=1 00:23:56.318 --rc geninfo_unexecuted_blocks=1 00:23:56.318 00:23:56.318 ' 00:23:56.318 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:56.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.318 --rc genhtml_branch_coverage=1 00:23:56.318 --rc genhtml_function_coverage=1 00:23:56.318 --rc genhtml_legend=1 00:23:56.318 --rc geninfo_all_blocks=1 00:23:56.318 --rc geninfo_unexecuted_blocks=1 00:23:56.318 00:23:56.318 ' 00:23:56.318 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:56.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.318 --rc genhtml_branch_coverage=1 00:23:56.318 --rc genhtml_function_coverage=1 00:23:56.318 --rc genhtml_legend=1 00:23:56.319 --rc geninfo_all_blocks=1 00:23:56.319 --rc geninfo_unexecuted_blocks=1 00:23:56.319 00:23:56.319 ' 00:23:56.319 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:56.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.319 --rc genhtml_branch_coverage=1 00:23:56.319 --rc genhtml_function_coverage=1 00:23:56.319 --rc genhtml_legend=1 00:23:56.319 --rc geninfo_all_blocks=1 00:23:56.319 --rc geninfo_unexecuted_blocks=1 00:23:56.319 00:23:56.319 ' 00:23:56.319 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:56.319 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:56.319 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:56.319 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:56.319 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:56.319 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:56.319 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:56.319 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:56.319 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:56.319 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:56.319 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:56.319 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:56.319 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:56.319 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:56.319 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:56.319 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:56.319 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:56.319 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:56.319 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:56.319 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:56.319 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:56.319 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:56.319 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:56.319 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.319 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.319 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.319 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:23:56.319 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.319 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:23:56.319 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:56.319 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:56.319 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:56.319 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:56.319 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:56.319 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:56.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:56.319 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:56.319 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:56.319 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:56.319 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:56.319 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:56.319 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:56.319 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:56.319 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:56.319 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:56.319 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:56.319 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:56.319 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:56.319 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:56.319 12:00:58 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:56.319 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:56.319 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:56.319 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:56.319 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:23:56.319 12:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:04.460 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:04.460 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:04.460 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:04.460 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:04.460 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:04.460 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:04.460 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:04.460 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:24:04.460 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:04.460 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:24:04.460 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:24:04.460 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:24:04.460 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:24:04.460 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:04.461 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:04.461 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:04.461 Found net devices under 0000:31:00.0: cvl_0_0 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:04.461 12:01:06 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:04.461 Found net devices under 0000:31:00.1: cvl_0_1 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # is_hw=yes 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:04.461 12:01:06 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:04.461 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:04.461 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.671 ms 00:24:04.461 00:24:04.461 --- 10.0.0.2 ping statistics --- 00:24:04.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:04.461 rtt min/avg/max/mdev = 0.671/0.671/0.671/0.000 ms 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:04.461 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:04.461 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:24:04.461 00:24:04.461 --- 10.0.0.1 ping statistics --- 00:24:04.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:04.461 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # return 0 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # nvmfpid=2024439 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # waitforlisten 2024439 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 2024439 ']' 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:24:04.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:04.461 12:01:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:04.461 [2024-10-11 12:01:06.563110] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:24:04.461 [2024-10-11 12:01:06.563173] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:04.462 [2024-10-11 12:01:06.652048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:04.462 [2024-10-11 12:01:06.705766] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:04.462 [2024-10-11 12:01:06.705817] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:04.462 [2024-10-11 12:01:06.705826] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:04.462 [2024-10-11 12:01:06.705833] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:04.462 [2024-10-11 12:01:06.705839] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:04.462 [2024-10-11 12:01:06.708237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:04.462 [2024-10-11 12:01:06.708395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:04.462 [2024-10-11 12:01:06.708559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:04.462 [2024-10-11 12:01:06.708559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:04.723 12:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:04.723 12:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:24:04.723 12:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:04.723 12:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:04.723 12:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:04.984 12:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:04.984 12:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:04.984 12:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:05.557 12:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:05.557 12:01:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:05.557 12:01:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:24:05.557 12:01:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:05.817 12:01:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
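[Editor's note] The trace above has just queried the auto-attached local NVMe controller and created the Malloc0 bdev; the entries that follow assemble the TCP target that the perf runs below connect to. For readers skimming the raw xtrace, here is a condensed bash sketch of that RPC sequence, with the long workspace paths shortened (the rpc_py value here is illustrative, not the literal path used by the job):

    rpc_py=./scripts/rpc.py   # the log uses the full .../spdk/scripts/rpc.py path

    # Discover the PCI address of the local NVMe bdev (Nvme0) that gen_nvme.sh attached.
    local_nvme_trid=$($rpc_py framework_get_config bdev \
      | jq -r '.[].params | select(.name=="Nvme0").traddr')    # -> 0000:65:00.0 in this run

    # One 64 MiB malloc bdev (512 B blocks) plus the local drive back the subsystem.
    bdevs="$($rpc_py bdev_malloc_create 64 512)"               # -> Malloc0
    [ -n "$local_nvme_trid" ] && bdevs="$bdevs Nvme0n1"

    # Transport, subsystem, namespaces, then the data and discovery listeners on 10.0.0.2:4420.
    $rpc_py nvmf_create_transport -t tcp -o
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    for bdev in $bdevs; do
      $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
    done
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Everything after this point in the log is spdk_nvme_perf being pointed either at the local PCIe controller (trtype:PCIe traddr:0000:65:00.0) or at that listener (trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420).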
00:24:05.817 12:01:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:24:05.817 12:01:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:05.817 12:01:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:05.817 12:01:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:06.079 [2024-10-11 12:01:08.550090] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:06.079 12:01:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:06.340 12:01:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:06.340 12:01:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:06.340 12:01:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:06.340 12:01:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:06.601 12:01:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:06.862 [2024-10-11 12:01:09.357681] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:06.862 12:01:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:07.122 12:01:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:24:07.122 12:01:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:07.122 12:01:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:07.122 12:01:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:08.505 Initializing NVMe Controllers 00:24:08.505 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:24:08.505 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:24:08.505 Initialization complete. Launching workers. 
00:24:08.505 ======================================================== 00:24:08.505 Latency(us) 00:24:08.505 Device Information : IOPS MiB/s Average min max 00:24:08.505 PCIE (0000:65:00.0) NSID 1 from core 0: 77630.01 303.24 411.35 13.38 4963.14 00:24:08.505 ======================================================== 00:24:08.505 Total : 77630.01 303.24 411.35 13.38 4963.14 00:24:08.505 00:24:08.505 12:01:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:09.546 Initializing NVMe Controllers 00:24:09.546 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:09.546 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:09.546 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:09.546 Initialization complete. Launching workers. 00:24:09.546 ======================================================== 00:24:09.546 Latency(us) 00:24:09.546 Device Information : IOPS MiB/s Average min max 00:24:09.546 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 120.00 0.47 8515.58 249.42 46220.33 00:24:09.546 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 41.00 0.16 24502.57 7954.52 47904.31 00:24:09.546 ======================================================== 00:24:09.546 Total : 161.00 0.63 12586.80 249.42 47904.31 00:24:09.546 00:24:09.546 12:01:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:10.931 Initializing NVMe Controllers 00:24:10.931 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:10.931 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:10.931 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:10.931 Initialization complete. Launching workers. 00:24:10.931 ======================================================== 00:24:10.931 Latency(us) 00:24:10.931 Device Information : IOPS MiB/s Average min max 00:24:10.931 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11587.66 45.26 2764.04 504.59 7392.81 00:24:10.931 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3777.65 14.76 8514.96 5594.52 15944.03 00:24:10.931 ======================================================== 00:24:10.931 Total : 15365.31 60.02 4177.94 504.59 15944.03 00:24:10.931 00:24:10.931 12:01:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:10.931 12:01:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:10.931 12:01:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:14.233 Initializing NVMe Controllers 00:24:14.233 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:14.233 Controller IO queue size 128, less than required. 00:24:14.233 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:24:14.233 Controller IO queue size 128, less than required. 00:24:14.233 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:14.233 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:14.233 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:14.233 Initialization complete. Launching workers. 00:24:14.233 ======================================================== 00:24:14.233 Latency(us) 00:24:14.233 Device Information : IOPS MiB/s Average min max 00:24:14.233 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2374.49 593.62 54399.53 31442.62 95694.05 00:24:14.233 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 605.00 151.25 224446.19 47185.91 353479.88 00:24:14.233 ======================================================== 00:24:14.233 Total : 2979.49 744.87 88928.22 31442.62 353479.88 00:24:14.233 00:24:14.233 12:01:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:14.233 No valid NVMe controllers or AIO or URING devices found 00:24:14.233 Initializing NVMe Controllers 00:24:14.233 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:14.233 Controller IO queue size 128, less than required. 00:24:14.233 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:14.233 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:14.233 Controller IO queue size 128, less than required. 00:24:14.233 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:14.233 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:24:14.233 WARNING: Some requested NVMe devices were skipped 00:24:14.233 12:01:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:16.144 Initializing NVMe Controllers 00:24:16.144 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:16.144 Controller IO queue size 128, less than required. 00:24:16.144 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:16.144 Controller IO queue size 128, less than required. 00:24:16.144 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:16.144 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:16.144 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:16.144 Initialization complete. Launching workers. 
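[Editor's note] On the -o 36964 pass above: both namespaces were removed from that run ("No valid NVMe controllers or AIO or URING devices found") because, as the warnings state, the perf IO size must be a multiple of the namespace sector size, which is 512 B for these namespaces. A quick check of the two sizes used here:

    echo $(( 36964 % 512 ))    # 100 -> not sector-aligned, so nsid 1 and 2 are skipped
    echo $(( 262144 % 512 ))   # 0   -> the 256 KiB runs keep both namespaces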
00:24:16.144 00:24:16.144 ==================== 00:24:16.144 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:16.144 TCP transport: 00:24:16.144 polls: 43060 00:24:16.144 idle_polls: 26625 00:24:16.145 sock_completions: 16435 00:24:16.145 nvme_completions: 7255 00:24:16.145 submitted_requests: 10780 00:24:16.145 queued_requests: 1 00:24:16.145 00:24:16.145 ==================== 00:24:16.145 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:16.145 TCP transport: 00:24:16.145 polls: 39247 00:24:16.145 idle_polls: 23867 00:24:16.145 sock_completions: 15380 00:24:16.145 nvme_completions: 7475 00:24:16.145 submitted_requests: 11180 00:24:16.145 queued_requests: 1 00:24:16.145 ======================================================== 00:24:16.145 Latency(us) 00:24:16.145 Device Information : IOPS MiB/s Average min max 00:24:16.145 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1813.45 453.36 71830.96 38469.14 132819.74 00:24:16.145 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1868.45 467.11 69147.82 23760.79 123106.69 00:24:16.145 ======================================================== 00:24:16.145 Total : 3681.90 920.47 70469.35 23760.79 132819.74 00:24:16.145 00:24:16.145 12:01:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:16.145 12:01:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:16.405 12:01:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:16.405 12:01:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:16.405 12:01:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:16.405 12:01:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:16.405 12:01:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:24:16.405 12:01:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:16.405 12:01:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:24:16.405 12:01:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:16.405 12:01:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:16.405 rmmod nvme_tcp 00:24:16.405 rmmod nvme_fabrics 00:24:16.405 rmmod nvme_keyring 00:24:16.405 12:01:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:16.405 12:01:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:24:16.405 12:01:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:24:16.405 12:01:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@515 -- # '[' -n 2024439 ']' 00:24:16.405 12:01:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # killprocess 2024439 00:24:16.405 12:01:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 2024439 ']' 00:24:16.405 12:01:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 2024439 00:24:16.405 12:01:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:24:16.405 12:01:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:16.405 12:01:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2024439 00:24:16.665 12:01:19 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:16.665 12:01:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:16.665 12:01:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2024439' 00:24:16.665 killing process with pid 2024439 00:24:16.665 12:01:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 2024439 00:24:16.665 12:01:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 2024439 00:24:18.577 12:01:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:18.577 12:01:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:18.577 12:01:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:18.577 12:01:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:24:18.577 12:01:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-save 00:24:18.577 12:01:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:18.577 12:01:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-restore 00:24:18.577 12:01:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:18.577 12:01:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:18.577 12:01:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:18.577 12:01:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:18.577 12:01:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:20.488 12:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:20.488 00:24:20.488 real 0m24.621s 00:24:20.488 user 0m58.976s 00:24:20.488 sys 0m8.800s 00:24:20.488 12:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:20.488 12:01:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:20.488 ************************************ 00:24:20.488 END TEST nvmf_perf 00:24:20.488 ************************************ 00:24:20.748 12:01:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:20.749 12:01:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:20.749 12:01:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:20.749 12:01:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.749 ************************************ 00:24:20.749 START TEST nvmf_fio_host 00:24:20.749 ************************************ 00:24:20.749 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:20.749 * Looking for test storage... 
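[Editor's note] The cleanup that has just run (end of the nvmf_perf test) follows the same nvmftestfini pattern seen earlier for nvmf_identify. A condensed sketch of that teardown, again with paths shortened and the pid taken from this run; remove_spdk_ns executes with xtrace disabled, so only its visible tail (the address flush) appears here:

    rpc_py=./scripts/rpc.py
    nvmfpid=2024439                                            # nvmf_tgt pid for this run

    $rpc_py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the test subsystem first

    modprobe -v -r nvme-tcp       # rmmod nvme_tcp / nvme_fabrics / nvme_keyring, as logged
    modprobe -v -r nvme-fabrics

    # killprocess(): only kill the pid if it is still the SPDK reactor process.
    if kill -0 "$nvmfpid" 2>/dev/null && [ "$(ps --no-headers -o comm= "$nvmfpid")" = reactor_0 ]; then
      echo "killing process with pid $nvmfpid"
      kill "$nvmfpid"
      wait "$nvmfpid"
    fi

    # iptr(): strip only the SPDK_NVMF-tagged iptables rules added by nvmftestinit, keep the rest.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    ip -4 addr flush cvl_0_1      # last visible step of the network-namespace cleanup

The nvmf_fio_host test that starts below re-runs nvmftestinit, so the same interface and namespace setup is rebuilt before its workload begins.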
00:24:20.749 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:20.749 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:20.749 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:24:20.749 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:20.749 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:20.749 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:20.749 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:20.749 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:20.749 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:20.749 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:20.749 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:20.749 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:20.749 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:20.749 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:20.749 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:20.749 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:21.010 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:24:21.010 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:24:21.010 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:21.010 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:21.010 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:24:21.010 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:24:21.010 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:21.010 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:24:21.010 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:21.010 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:24:21.010 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:24:21.010 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:21.010 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:24:21.010 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:21.010 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:21.010 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:21.010 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:24:21.010 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:21.010 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:21.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:21.010 --rc genhtml_branch_coverage=1 00:24:21.010 --rc genhtml_function_coverage=1 00:24:21.010 --rc genhtml_legend=1 00:24:21.010 --rc geninfo_all_blocks=1 00:24:21.010 --rc geninfo_unexecuted_blocks=1 00:24:21.010 00:24:21.010 ' 00:24:21.010 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:21.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:21.010 --rc genhtml_branch_coverage=1 00:24:21.010 --rc genhtml_function_coverage=1 00:24:21.010 --rc genhtml_legend=1 00:24:21.010 --rc geninfo_all_blocks=1 00:24:21.010 --rc geninfo_unexecuted_blocks=1 00:24:21.010 00:24:21.010 ' 00:24:21.010 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:21.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:21.010 --rc genhtml_branch_coverage=1 00:24:21.010 --rc genhtml_function_coverage=1 00:24:21.011 --rc genhtml_legend=1 00:24:21.011 --rc geninfo_all_blocks=1 00:24:21.011 --rc geninfo_unexecuted_blocks=1 00:24:21.011 00:24:21.011 ' 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:21.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:21.011 --rc genhtml_branch_coverage=1 00:24:21.011 --rc genhtml_function_coverage=1 00:24:21.011 --rc genhtml_legend=1 00:24:21.011 --rc geninfo_all_blocks=1 00:24:21.011 --rc geninfo_unexecuted_blocks=1 00:24:21.011 00:24:21.011 ' 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:21.011 12:01:23 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:21.011 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:21.011 
12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:21.011 12:01:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.156 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:29.156 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:29.156 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:29.156 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:29.156 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:29.156 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:29.156 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:29.156 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:29.156 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:29.156 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:24:29.156 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:29.156 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:24:29.156 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:29.156 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:24:29.156 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:29.156 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:29.156 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:29.156 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:29.156 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:29.156 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:29.156 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:29.156 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:29.156 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:29.156 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:29.156 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:29.156 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:29.156 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:29.156 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:29.156 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:29.156 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:29.156 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:29.156 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:29.156 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:29.156 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:29.156 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:29.156 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:29.156 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:29.156 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:29.156 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:29.156 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:29.156 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:29.156 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:29.157 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:29.157 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:29.157 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:29.157 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:29.157 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:29.157 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:29.157 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:29.157 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:29.157 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:29.157 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:29.157 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:29.157 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:29.157 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:29.157 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:29.157 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:29.157 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:29.157 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:29.157 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:29.157 Found net devices under 0000:31:00.0: cvl_0_0 00:24:29.157 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:29.157 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:29.157 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:29.157 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:29.157 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:29.157 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:29.157 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:29.157 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:29.157 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:29.157 Found net devices under 0000:31:00.1: cvl_0_1 00:24:29.157 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:29.157 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:29.157 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # is_hw=yes 00:24:29.157 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:29.157 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:29.157 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:29.157 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:29.157 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:29.157 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:29.157 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:29.157 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:29.157 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:29.157 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:29.157 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:29.157 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:29.157 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:29.157 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:29.157 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:29.157 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:29.157 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:29.157 12:01:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:29.157 12:01:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:29.157 12:01:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:29.157 12:01:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:29.157 12:01:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:29.157 12:01:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:29.157 12:01:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:29.157 12:01:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:29.157 12:01:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:29.157 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:29.157 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.685 ms 00:24:29.157 00:24:29.157 --- 10.0.0.2 ping statistics --- 00:24:29.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:29.157 rtt min/avg/max/mdev = 0.685/0.685/0.685/0.000 ms 00:24:29.157 12:01:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:29.157 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:29.157 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:24:29.157 00:24:29.157 --- 10.0.0.1 ping statistics --- 00:24:29.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:29.157 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:24:29.157 12:01:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:29.157 12:01:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # return 0 00:24:29.157 12:01:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:29.157 12:01:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:29.157 12:01:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:29.157 12:01:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:29.157 12:01:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:29.157 12:01:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:29.157 12:01:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:29.157 12:01:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:29.157 12:01:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:29.157 12:01:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:29.157 12:01:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.157 12:01:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2031573 00:24:29.157 12:01:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:29.157 12:01:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:29.157 12:01:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2031573 00:24:29.157 12:01:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 2031573 ']' 00:24:29.157 12:01:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:29.157 12:01:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:29.157 12:01:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:29.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:29.157 12:01:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:29.157 12:01:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.157 [2024-10-11 12:01:31.294028] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:24:29.157 [2024-10-11 12:01:31.294098] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:29.157 [2024-10-11 12:01:31.382196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:29.157 [2024-10-11 12:01:31.436907] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:29.157 [2024-10-11 12:01:31.436957] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:29.157 [2024-10-11 12:01:31.436966] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:29.157 [2024-10-11 12:01:31.436973] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:29.157 [2024-10-11 12:01:31.436979] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:29.157 [2024-10-11 12:01:31.439140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:29.157 [2024-10-11 12:01:31.439379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:29.157 [2024-10-11 12:01:31.439500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:29.157 [2024-10-11 12:01:31.439502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:29.419 12:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:29.419 12:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:24:29.419 12:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:29.681 [2024-10-11 12:01:32.288485] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:29.681 12:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:29.681 12:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:29.681 12:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.681 12:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:29.942 Malloc1 00:24:29.942 12:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:30.202 12:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:30.463 12:01:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:30.463 [2024-10-11 12:01:33.154180] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:30.723 12:01:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:30.723 12:01:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:30.723 12:01:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:30.723 12:01:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:30.723 12:01:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:30.723 12:01:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:30.723 12:01:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:30.723 12:01:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:30.723 12:01:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:30.723 12:01:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:30.723 12:01:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:30.723 12:01:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:30.723 12:01:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:30.723 12:01:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:30.723 12:01:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:30.723 12:01:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:30.723 12:01:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:30.723 12:01:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:30.723 12:01:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:30.723 12:01:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:30.996 12:01:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:30.996 12:01:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:30.996 12:01:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:30.996 12:01:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:31.257 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:31.257 fio-3.35 00:24:31.257 Starting 1 thread 00:24:33.800 [2024-10-11 12:01:36.043147] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24538c0 is same with the state(6) to be set 00:24:33.800 [2024-10-11 12:01:36.043195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24538c0 is same with the state(6) to be set 00:24:33.800 [2024-10-11 12:01:36.043201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24538c0 is same with the state(6) to be set 00:24:33.800 [2024-10-11 12:01:36.043206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24538c0 is same with the state(6) to be set 00:24:33.800 [2024-10-11 12:01:36.043211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24538c0 is same with the state(6) to be set 00:24:33.800 [2024-10-11 12:01:36.044159] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24533f0 is same with the state(6) to be set 00:24:33.800 00:24:33.800 test: (groupid=0, jobs=1): err= 0: pid=2032416: Fri Oct 11 12:01:36 2024 00:24:33.800 read: IOPS=13.8k, BW=54.0MiB/s (56.6MB/s)(108MiB/2004msec) 00:24:33.800 slat (usec): min=2, max=293, avg= 2.15, stdev= 2.48 00:24:33.800 clat (usec): min=3179, max=9095, avg=5081.19, stdev=361.87 00:24:33.800 lat (usec): min=3181, max=9097, avg=5083.34, stdev=361.93 00:24:33.800 clat percentiles (usec): 00:24:33.800 | 1.00th=[ 4293], 5.00th=[ 4555], 10.00th=[ 4686], 20.00th=[ 4817], 00:24:33.800 | 30.00th=[ 4883], 40.00th=[ 5014], 50.00th=[ 5080], 60.00th=[ 5145], 00:24:33.800 | 70.00th=[ 5276], 80.00th=[ 5342], 90.00th=[ 5473], 95.00th=[ 5604], 00:24:33.800 | 99.00th=[ 5932], 99.50th=[ 6128], 99.90th=[ 7242], 99.95th=[ 8455], 00:24:33.800 | 99.99th=[ 8979] 00:24:33.800 bw ( KiB/s): min=53920, max=55824, per=99.97%, avg=55268.00, stdev=902.98, samples=4 00:24:33.800 iops : min=13480, max=13956, avg=13817.00, stdev=225.75, samples=4 00:24:33.800 write: IOPS=13.8k, BW=54.0MiB/s (56.6MB/s)(108MiB/2004msec); 0 zone resets 00:24:33.800 slat (usec): min=2, max=273, avg= 2.22, stdev= 1.80 00:24:33.800 clat (usec): min=2600, max=7492, avg=4118.76, stdev=299.94 00:24:33.800 lat (usec): min=2603, max=7494, avg=4120.98, stdev=300.06 00:24:33.800 clat percentiles (usec): 00:24:33.800 | 1.00th=[ 3425], 5.00th=[ 3654], 10.00th=[ 3785], 20.00th=[ 3884], 00:24:33.800 | 30.00th=[ 3982], 40.00th=[ 4047], 50.00th=[ 4113], 60.00th=[ 4178], 00:24:33.800 | 70.00th=[ 4228], 80.00th=[ 4359], 90.00th=[ 4424], 95.00th=[ 4555], 00:24:33.800 | 99.00th=[ 4817], 99.50th=[ 5014], 99.90th=[ 6063], 99.95th=[ 6652], 00:24:33.800 | 99.99th=[ 7373] 00:24:33.800 bw ( KiB/s): min=54256, max=55744, per=99.97%, avg=55244.00, stdev=677.74, samples=4 00:24:33.800 iops : min=13564, max=13936, avg=13811.00, stdev=169.43, samples=4 00:24:33.800 lat (msec) : 4=16.45%, 10=83.55% 00:24:33.800 cpu : usr=74.99%, sys=23.86%, ctx=23, majf=0, minf=17 00:24:33.800 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:33.800 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:33.800 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:33.800 issued rwts: total=27697,27686,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:33.800 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:33.800 00:24:33.800 Run status group 0 (all jobs): 00:24:33.800 READ: bw=54.0MiB/s (56.6MB/s), 54.0MiB/s-54.0MiB/s (56.6MB/s-56.6MB/s), io=108MiB (113MB), run=2004-2004msec 00:24:33.800 WRITE: bw=54.0MiB/s (56.6MB/s), 54.0MiB/s-54.0MiB/s 
(56.6MB/s-56.6MB/s), io=108MiB (113MB), run=2004-2004msec 00:24:33.800 12:01:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:33.800 12:01:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:33.800 12:01:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:33.800 12:01:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:33.800 12:01:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:33.800 12:01:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:33.800 12:01:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:33.800 12:01:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:33.800 12:01:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:33.800 12:01:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:33.800 12:01:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:33.800 12:01:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:33.800 12:01:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:33.800 12:01:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:33.800 12:01:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:33.800 12:01:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:33.800 12:01:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:33.800 12:01:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:33.800 12:01:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:33.800 12:01:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:33.800 12:01:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:33.800 12:01:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:33.800 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:33.800 fio-3.35 00:24:33.800 Starting 1 thread 00:24:36.343 00:24:36.343 test: (groupid=0, jobs=1): err= 0: pid=2032930: Fri Oct 11 
12:01:38 2024 00:24:36.343 read: IOPS=9637, BW=151MiB/s (158MB/s)(302MiB/2004msec) 00:24:36.343 slat (usec): min=3, max=110, avg= 3.61, stdev= 1.59 00:24:36.343 clat (usec): min=1580, max=14958, avg=8065.57, stdev=1948.59 00:24:36.343 lat (usec): min=1584, max=14961, avg=8069.18, stdev=1948.71 00:24:36.343 clat percentiles (usec): 00:24:36.343 | 1.00th=[ 3982], 5.00th=[ 5080], 10.00th=[ 5604], 20.00th=[ 6325], 00:24:36.343 | 30.00th=[ 6849], 40.00th=[ 7373], 50.00th=[ 7963], 60.00th=[ 8586], 00:24:36.343 | 70.00th=[ 9241], 80.00th=[10028], 90.00th=[10552], 95.00th=[11076], 00:24:36.343 | 99.00th=[12387], 99.50th=[12911], 99.90th=[13960], 99.95th=[14484], 00:24:36.343 | 99.99th=[14877] 00:24:36.343 bw ( KiB/s): min=66208, max=84768, per=49.37%, avg=76120.00, stdev=7612.25, samples=4 00:24:36.343 iops : min= 4138, max= 5298, avg=4757.50, stdev=475.77, samples=4 00:24:36.343 write: IOPS=5873, BW=91.8MiB/s (96.2MB/s)(156MiB/1702msec); 0 zone resets 00:24:36.343 slat (usec): min=39, max=360, avg=40.84, stdev= 6.83 00:24:36.343 clat (usec): min=1491, max=14024, avg=8896.10, stdev=1271.95 00:24:36.343 lat (usec): min=1531, max=14129, avg=8936.94, stdev=1273.40 00:24:36.343 clat percentiles (usec): 00:24:36.343 | 1.00th=[ 6325], 5.00th=[ 7046], 10.00th=[ 7373], 20.00th=[ 7767], 00:24:36.343 | 30.00th=[ 8160], 40.00th=[ 8455], 50.00th=[ 8848], 60.00th=[ 9241], 00:24:36.343 | 70.00th=[ 9503], 80.00th=[ 9896], 90.00th=[10552], 95.00th=[10945], 00:24:36.343 | 99.00th=[12125], 99.50th=[12780], 99.90th=[13829], 99.95th=[13960], 00:24:36.343 | 99.99th=[13960] 00:24:36.343 bw ( KiB/s): min=69120, max=88256, per=84.44%, avg=79360.00, stdev=7869.83, samples=4 00:24:36.343 iops : min= 4320, max= 5516, avg=4960.00, stdev=491.86, samples=4 00:24:36.343 lat (msec) : 2=0.03%, 4=0.70%, 10=79.50%, 20=19.77% 00:24:36.343 cpu : usr=86.27%, sys=12.53%, ctx=15, majf=0, minf=39 00:24:36.343 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:24:36.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:36.343 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:36.343 issued rwts: total=19313,9997,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:36.343 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:36.343 00:24:36.343 Run status group 0 (all jobs): 00:24:36.343 READ: bw=151MiB/s (158MB/s), 151MiB/s-151MiB/s (158MB/s-158MB/s), io=302MiB (316MB), run=2004-2004msec 00:24:36.343 WRITE: bw=91.8MiB/s (96.2MB/s), 91.8MiB/s-91.8MiB/s (96.2MB/s-96.2MB/s), io=156MiB (164MB), run=1702-1702msec 00:24:36.343 12:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:36.343 12:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:36.343 12:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:36.343 12:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:36.343 12:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:36.343 12:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:36.343 12:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:24:36.343 12:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:36.343 12:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 
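Editor's note: the two fio workloads above (example_config.fio and mock_sgl_config.fio) are not run against a kernel block device; fio is launched with the SPDK NVMe plugin preloaded and the NVMe/TCP target address encoded in the --filename string, which is why the jobs report ioengine=spdk. Below is a minimal sketch of an equivalent standalone invocation, assuming an SPDK tree built with fio support at the workspace path used in this run and fio installed under /usr/src/fio; the job-file name and its option values are illustrative and simply mirror the run above (randrw, bs=4096, iodepth=128).

# Sketch only: drive the NVMe/TCP listener at 10.0.0.2:4420 through the SPDK fio plugin.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumed build location
# Illustrative job file; ioengine=spdk hands I/O to the preloaded plugin and
# thread=1 is needed because the plugin runs jobs as threads, not forked processes.
cat > nvme-tcp.fio <<'EOF'
[global]
ioengine=spdk
thread=1
rw=randrw
bs=4096
iodepth=128
[test]
numjobs=1
EOF
LD_PRELOAD=$SPDK_DIR/build/fio/spdk_nvme /usr/src/fio/fio nvme-tcp.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'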
00:24:36.343 12:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:36.343 12:01:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:36.343 rmmod nvme_tcp 00:24:36.343 rmmod nvme_fabrics 00:24:36.343 rmmod nvme_keyring 00:24:36.343 12:01:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:36.604 12:01:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:24:36.604 12:01:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:24:36.604 12:01:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@515 -- # '[' -n 2031573 ']' 00:24:36.604 12:01:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # killprocess 2031573 00:24:36.604 12:01:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 2031573 ']' 00:24:36.604 12:01:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 2031573 00:24:36.604 12:01:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:24:36.604 12:01:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:36.604 12:01:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2031573 00:24:36.604 12:01:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:36.604 12:01:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:36.604 12:01:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2031573' 00:24:36.604 killing process with pid 2031573 00:24:36.604 12:01:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 2031573 00:24:36.604 12:01:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 2031573 00:24:36.604 12:01:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:36.604 12:01:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:36.604 12:01:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:36.604 12:01:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:24:36.604 12:01:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-save 00:24:36.604 12:01:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:36.604 12:01:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-restore 00:24:36.604 12:01:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:36.604 12:01:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:36.604 12:01:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:36.604 12:01:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:36.604 12:01:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:39.146 12:01:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:39.146 00:24:39.146 real 0m18.049s 00:24:39.146 user 1m2.497s 00:24:39.146 sys 0m8.049s 00:24:39.146 12:01:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:24:39.146 12:01:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.146 ************************************ 00:24:39.146 END TEST nvmf_fio_host 00:24:39.146 ************************************ 00:24:39.146 12:01:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:39.146 12:01:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:39.146 12:01:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:39.146 12:01:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.146 ************************************ 00:24:39.146 START TEST nvmf_failover 00:24:39.146 ************************************ 00:24:39.146 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:39.146 * Looking for test storage... 00:24:39.146 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:39.146 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:39.146 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:39.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.147 --rc genhtml_branch_coverage=1 00:24:39.147 --rc genhtml_function_coverage=1 00:24:39.147 --rc genhtml_legend=1 00:24:39.147 --rc geninfo_all_blocks=1 00:24:39.147 --rc geninfo_unexecuted_blocks=1 00:24:39.147 00:24:39.147 ' 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:39.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.147 --rc genhtml_branch_coverage=1 00:24:39.147 --rc genhtml_function_coverage=1 00:24:39.147 --rc genhtml_legend=1 00:24:39.147 --rc geninfo_all_blocks=1 00:24:39.147 --rc geninfo_unexecuted_blocks=1 00:24:39.147 00:24:39.147 ' 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:39.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.147 --rc genhtml_branch_coverage=1 00:24:39.147 --rc genhtml_function_coverage=1 00:24:39.147 --rc genhtml_legend=1 00:24:39.147 --rc geninfo_all_blocks=1 00:24:39.147 --rc geninfo_unexecuted_blocks=1 00:24:39.147 00:24:39.147 ' 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:39.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.147 --rc genhtml_branch_coverage=1 00:24:39.147 --rc genhtml_function_coverage=1 00:24:39.147 --rc genhtml_legend=1 00:24:39.147 --rc geninfo_all_blocks=1 00:24:39.147 --rc geninfo_unexecuted_blocks=1 00:24:39.147 00:24:39.147 ' 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:39.147 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
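Editor's note: the failover test below configures its target the same way the nvmf_fio_host run above did: the SPDK nvmf_tgt application is launched (inside the cvl_0_0_ns_spdk namespace) and then driven entirely over its JSON-RPC socket with scripts/rpc.py. As a reference, here is a minimal sketch of that RPC sequence, assuming a target is already running and listening on the default /var/tmp/spdk.sock; the sizes, NQN, and listen address are copied from the run above.

# Sketch of the RPC calls used to stand up the test target (values mirror the nvmf_fio_host run).
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192                    # TCP transport, options as used above
$RPC bdev_malloc_create 64 512 -b Malloc1                       # 64 MiB RAM-backed bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1   # first namespace (ns=1 in the fio filename above)
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420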
00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:24:39.147 12:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:47.295 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:47.295 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:24:47.295 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:47.295 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:47.295 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:47.295 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:47.295 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:47.295 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:24:47.295 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:47.295 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:24:47.295 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:24:47.295 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:24:47.295 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:47.296 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:47.296 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci 
in "${pci_devs[@]}" 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:47.296 Found net devices under 0000:31:00.0: cvl_0_0 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:47.296 Found net devices under 0000:31:00.1: cvl_0_1 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # is_hw=yes 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:47.296 12:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:47.296 12:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:47.296 12:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:47.296 12:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:47.296 12:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:47.296 12:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:47.296 12:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:47.296 12:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:47.296 12:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:47.296 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:47.296 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.651 ms 00:24:47.296 00:24:47.296 --- 10.0.0.2 ping statistics --- 00:24:47.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:47.296 rtt min/avg/max/mdev = 0.651/0.651/0.651/0.000 ms 00:24:47.296 12:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:47.296 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:47.296 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:24:47.296 00:24:47.296 --- 10.0.0.1 ping statistics --- 00:24:47.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:47.296 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:24:47.296 12:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:47.296 12:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # return 0 00:24:47.296 12:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:47.296 12:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:47.296 12:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:47.296 12:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:47.296 12:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:47.296 12:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:47.296 12:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:47.296 12:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:47.296 12:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:47.296 12:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:47.296 12:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:47.296 12:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # nvmfpid=2037699 00:24:47.296 12:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # waitforlisten 2037699 00:24:47.296 12:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:47.296 12:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2037699 ']' 00:24:47.296 12:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:47.296 12:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:47.296 12:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:47.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:47.296 12:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:47.297 12:01:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:47.297 [2024-10-11 12:01:49.366295] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:24:47.297 [2024-10-11 12:01:49.366358] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:47.297 [2024-10-11 12:01:49.456160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:47.297 [2024-10-11 12:01:49.508713] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
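At this point the trace has built the test topology: the target port is moved into its own network namespace, both ends get /24 addresses, TCP port 4420 is opened in the firewall, reachability is confirmed with a ping in each direction, and nvmf_tgt is started inside the namespace with core mask 0xE. A condensed, hand-written sketch of that sequence follows; it uses the same interface, namespace, and binary paths that appear in this run, but it omits the harness's iptables comment tagging, and the polling loop at the end is only a rough stand-in for the waitforlisten helper.

# Condensed sketch of the namespace plumbing and target start-up traced above.
NS=cvl_0_0_ns_spdk
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                      # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # root namespace -> target namespace
ip netns exec "$NS" ping -c 1 10.0.0.1               # target namespace -> root namespace
modprobe nvme-tcp

# start the target application inside the namespace, as nvmfappstart does
ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &

# rough stand-in for waitforlisten: poll until the RPC socket answers
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done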
00:24:47.297 [2024-10-11 12:01:49.508764] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:47.297 [2024-10-11 12:01:49.508772] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:47.297 [2024-10-11 12:01:49.508779] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:47.297 [2024-10-11 12:01:49.508786] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:47.297 [2024-10-11 12:01:49.510809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:47.297 [2024-10-11 12:01:49.510967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:47.297 [2024-10-11 12:01:49.510967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:47.557 12:01:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:47.557 12:01:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:24:47.557 12:01:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:47.557 12:01:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:47.557 12:01:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:47.557 12:01:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:47.557 12:01:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:47.818 [2024-10-11 12:01:50.390366] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:47.818 12:01:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:48.078 Malloc0 00:24:48.078 12:01:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:48.339 12:01:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:48.339 12:01:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:48.600 [2024-10-11 12:01:51.197208] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:48.600 12:01:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:48.861 [2024-10-11 12:01:51.397810] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:48.861 12:01:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:49.123 [2024-10-11 12:01:51.602503] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:24:49.123 12:01:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2038281 00:24:49.123 12:01:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:49.123 12:01:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:49.123 12:01:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2038281 /var/tmp/bdevperf.sock 00:24:49.123 12:01:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2038281 ']' 00:24:49.123 12:01:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:49.123 12:01:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:49.123 12:01:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:49.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:49.123 12:01:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:49.123 12:01:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:50.064 12:01:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:50.064 12:01:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:24:50.064 12:01:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:50.325 NVMe0n1 00:24:50.325 12:01:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:50.585 00:24:50.585 12:01:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2038549 00:24:50.585 12:01:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:50.585 12:01:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:24:51.969 12:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:51.969 [2024-10-11 12:01:54.416224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d850 is same with the state(6) to be set 00:24:51.969 [2024-10-11 12:01:54.416263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d850 is same with the state(6) to be set 00:24:51.969 [2024-10-11 12:01:54.416269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d850 is same with the state(6) to be set 00:24:51.969 
[... the same tcp.c:1773:nvmf_tcp_qpair_set_recv_state *ERROR* line for tqpair=0x187d850 repeats verbatim here, with only the microsecond timestamp advancing; the duplicate lines have been collapsed ...] 00:24:51.970 [2024-10-11 12:01:54.416579] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d850 is same with the
state(6) to be set 00:24:51.970 [2024-10-11 12:01:54.416584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d850 is same with the state(6) to be set 00:24:51.970 [2024-10-11 12:01:54.416589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d850 is same with the state(6) to be set 00:24:51.970 [2024-10-11 12:01:54.416593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d850 is same with the state(6) to be set 00:24:51.970 [2024-10-11 12:01:54.416598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d850 is same with the state(6) to be set 00:24:51.970 [2024-10-11 12:01:54.416602] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d850 is same with the state(6) to be set 00:24:51.970 [2024-10-11 12:01:54.416607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d850 is same with the state(6) to be set 00:24:51.970 [2024-10-11 12:01:54.416611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d850 is same with the state(6) to be set 00:24:51.970 [2024-10-11 12:01:54.416616] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d850 is same with the state(6) to be set 00:24:51.970 [2024-10-11 12:01:54.416620] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d850 is same with the state(6) to be set 00:24:51.970 12:01:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:24:55.273 12:01:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:55.273 00:24:55.273 12:01:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:55.273 [2024-10-11 12:01:57.874452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187e600 is same with the state(6) to be set 00:24:55.273 [2024-10-11 12:01:57.874481] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187e600 is same with the state(6) to be set 00:24:55.273 [2024-10-11 12:01:57.874486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187e600 is same with the state(6) to be set 00:24:55.273 [2024-10-11 12:01:57.874491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187e600 is same with the state(6) to be set 00:24:55.273 [2024-10-11 12:01:57.874496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187e600 is same with the state(6) to be set 00:24:55.273 [2024-10-11 12:01:57.874501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187e600 is same with the state(6) to be set 00:24:55.273 [2024-10-11 12:01:57.874505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187e600 is same with the state(6) to be set 00:24:55.273 [2024-10-11 12:01:57.874510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187e600 is same with the state(6) to be set 00:24:55.273 [2024-10-11 12:01:57.874515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x187e600 is same with the state(6) to be set [... the same tcp.c:1773:nvmf_tcp_qpair_set_recv_state *ERROR* line for tqpair=0x187e600 repeats verbatim here, with only the microsecond timestamp advancing; the duplicate lines have been collapsed ...] 00:24:55.274 [2024-10-11
12:01:57.874936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187e600 is same with the state(6) to be set 00:24:55.274 [2024-10-11 12:01:57.874941] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187e600 is same with the state(6) to be set 00:24:55.274 12:01:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:24:58.574 12:02:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:58.574 [2024-10-11 12:02:01.066105] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:58.574 12:02:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:24:59.516 12:02:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:59.777 [2024-10-11 12:02:02.258695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187f570 is same with the state(6) to be set 00:24:59.777 [2024-10-11 12:02:02.258726] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187f570 is same with the state(6) to be set 00:24:59.777 [2024-10-11 12:02:02.258732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187f570 is same with the state(6) to be set 00:24:59.777 [2024-10-11 12:02:02.258737] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187f570 is same with the state(6) to be set 00:24:59.777 [2024-10-11 12:02:02.258742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187f570 is same with the state(6) to be set 00:24:59.777 [2024-10-11 12:02:02.258747] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187f570 is same with the state(6) to be set 00:24:59.777 [2024-10-11 12:02:02.258751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187f570 is same with the state(6) to be set 00:24:59.777 [2024-10-11 12:02:02.258761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187f570 is same with the state(6) to be set 00:24:59.777 [2024-10-11 12:02:02.258766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187f570 is same with the state(6) to be set 00:24:59.777 [2024-10-11 12:02:02.258771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187f570 is same with the state(6) to be set 00:24:59.777 [2024-10-11 12:02:02.258775] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187f570 is same with the state(6) to be set 00:24:59.777 [2024-10-11 12:02:02.258780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187f570 is same with the state(6) to be set 00:24:59.777 [2024-10-11 12:02:02.258784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187f570 is same with the state(6) to be set 00:24:59.777 [2024-10-11 12:02:02.258789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187f570 is same with the state(6) to be set 00:24:59.777 [2024-10-11 12:02:02.258794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187f570 is same with the state(6) to be set 
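Putting the failover.sh steps together: the target exports one malloc-backed subsystem on ports 4420-4422, bdevperf attaches the subsystem twice (ports 4420 and 4421) with -x failover so both connections belong to the same NVMe0 controller, and the script then alternately removes and re-adds listeners while the verify workload runs, forcing I/O to move between paths; the bursts of tcp.c:1773 recv-state messages above appear to accompany the target-side qpair teardown that each listener removal triggers. The sketch below restates that RPC choreography with the same rpc.py calls seen in the trace; it assumes bdevperf is already waiting on /var/tmp/bdevperf.sock and omits the harness's error handling and timing checks.

# Compressed restatement of the failover choreography traced above.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"
BPERF_SOCK=/var/tmp/bdevperf.sock
NQN=nqn.2016-06.io.spdk:cnode1

# target side: one subsystem backed by a malloc bdev, reachable on three ports
"$RPC" nvmf_create_transport -t tcp -o -u 8192
"$RPC" bdev_malloc_create 64 512 -b Malloc0
"$RPC" nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
"$RPC" nvmf_subsystem_add_ns "$NQN" Malloc0
for port in 4420 4421 4422; do
    "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s "$port"
done

# host side: attach two paths of the same controller in failover mode
"$RPC" -s "$BPERF_SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN" -x failover
"$RPC" -s "$BPERF_SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n "$NQN" -x failover

# while the verify workload runs, juggle listeners to force path switches
"$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420; sleep 3
"$RPC" -s "$BPERF_SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n "$NQN" -x failover
"$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421; sleep 3
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420; sleep 1
"$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4422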
00:24:59.777 [... the same tcp.c:1773:nvmf_tcp_qpair_set_recv_state *ERROR* line for tqpair=0x187f570 repeats verbatim here, with only the microsecond timestamp advancing; the duplicate lines have been collapsed ...] 00:24:59.777 [2024-10-11 12:02:02.259104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187f570 is same with the
state(6) to be set 00:24:59.777 [2024-10-11 12:02:02.259108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187f570 is same with the state(6) to be set 00:24:59.777 [2024-10-11 12:02:02.259113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187f570 is same with the state(6) to be set 00:24:59.777 [2024-10-11 12:02:02.259118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187f570 is same with the state(6) to be set 00:24:59.777 [2024-10-11 12:02:02.259123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187f570 is same with the state(6) to be set 00:24:59.777 [2024-10-11 12:02:02.259128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187f570 is same with the state(6) to be set 00:24:59.777 12:02:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2038549 00:25:06.365 { 00:25:06.365 "results": [ 00:25:06.365 { 00:25:06.365 "job": "NVMe0n1", 00:25:06.365 "core_mask": "0x1", 00:25:06.365 "workload": "verify", 00:25:06.365 "status": "finished", 00:25:06.365 "verify_range": { 00:25:06.365 "start": 0, 00:25:06.365 "length": 16384 00:25:06.366 }, 00:25:06.366 "queue_depth": 128, 00:25:06.366 "io_size": 4096, 00:25:06.366 "runtime": 15.008523, 00:25:06.366 "iops": 12375.83471738025, 00:25:06.366 "mibps": 48.3431043647666, 00:25:06.366 "io_failed": 8181, 00:25:06.366 "io_timeout": 0, 00:25:06.366 "avg_latency_us": 9885.333071512552, 00:25:06.366 "min_latency_us": 549.5466666666666, 00:25:06.366 "max_latency_us": 20097.706666666665 00:25:06.366 } 00:25:06.366 ], 00:25:06.366 "core_count": 1 00:25:06.366 } 00:25:06.366 12:02:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2038281 00:25:06.366 12:02:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2038281 ']' 00:25:06.366 12:02:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2038281 00:25:06.366 12:02:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:25:06.366 12:02:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:06.366 12:02:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2038281 00:25:06.366 12:02:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:06.366 12:02:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:06.366 12:02:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2038281' 00:25:06.366 killing process with pid 2038281 00:25:06.366 12:02:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2038281 00:25:06.366 12:02:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2038281 00:25:06.366 12:02:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:06.366 [2024-10-11 12:01:51.690618] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:25:06.366 [2024-10-11 12:01:51.690703] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2038281 ] 00:25:06.366 [2024-10-11 12:01:51.776458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.366 [2024-10-11 12:01:51.825850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:06.366 Running I/O for 15 seconds... 00:25:06.366 11560.00 IOPS, 45.16 MiB/s [2024-10-11T10:02:09.069Z] [2024-10-11 12:01:54.417442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:99128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.366 [2024-10-11 12:01:54.417475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.366 [2024-10-11 12:01:54.417492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:99136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.366 [2024-10-11 12:01:54.417500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.366 [2024-10-11 12:01:54.417511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:99144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.366 [2024-10-11 12:01:54.417519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.366 [2024-10-11 12:01:54.417528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.366 [2024-10-11 12:01:54.417536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.366 [2024-10-11 12:01:54.417546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.366 [2024-10-11 12:01:54.417553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.366 [2024-10-11 12:01:54.417563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:99168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.366 [2024-10-11 12:01:54.417570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.366 [2024-10-11 12:01:54.417580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:99176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.366 [2024-10-11 12:01:54.417587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.366 [2024-10-11 12:01:54.417597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:99184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.366 [2024-10-11 12:01:54.417604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.366 [2024-10-11 12:01:54.417614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:99192 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.366 [2024-10-11 12:01:54.417622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.366 [2024-10-11 12:01:54.417631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:99200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.366 [2024-10-11 12:01:54.417639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.366 [2024-10-11 12:01:54.417649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:99208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.366 [2024-10-11 12:01:54.417656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.366 [2024-10-11 12:01:54.417671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:99216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.366 [2024-10-11 12:01:54.417679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.366 [2024-10-11 12:01:54.417688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:99224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.366 [2024-10-11 12:01:54.417696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.366 [2024-10-11 12:01:54.417706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:99232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.366 [2024-10-11 12:01:54.417713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.366 [2024-10-11 12:01:54.417722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:99240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.366 [2024-10-11 12:01:54.417730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.366 [2024-10-11 12:01:54.417739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:99248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.366 [2024-10-11 12:01:54.417747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.366 [2024-10-11 12:01:54.417756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:99256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.366 [2024-10-11 12:01:54.417763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.366 [2024-10-11 12:01:54.417773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:99264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.366 [2024-10-11 12:01:54.417780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.366 [2024-10-11 12:01:54.417790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:99272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:06.366 [2024-10-11 12:01:54.417797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.366 [2024-10-11 12:01:54.417807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:99280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.366 [2024-10-11 12:01:54.417814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.366 [2024-10-11 12:01:54.417824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:99288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.366 [2024-10-11 12:01:54.417831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.366 [2024-10-11 12:01:54.417841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:99296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.366 [2024-10-11 12:01:54.417848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.366 [2024-10-11 12:01:54.417857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:99304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.366 [2024-10-11 12:01:54.417864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.366 [2024-10-11 12:01:54.417874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:99312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.366 [2024-10-11 12:01:54.417887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.366 [2024-10-11 12:01:54.417896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:99320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.366 [2024-10-11 12:01:54.417904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.366 [2024-10-11 12:01:54.417913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:99328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.366 [2024-10-11 12:01:54.417920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.366 [2024-10-11 12:01:54.417930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:99336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.366 [2024-10-11 12:01:54.417938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.366 [2024-10-11 12:01:54.417947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:99344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.366 [2024-10-11 12:01:54.417954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.366 [2024-10-11 12:01:54.417964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.366 [2024-10-11 12:01:54.417971] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.366 [2024-10-11 12:01:54.417981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:99360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.366 [2024-10-11 12:01:54.417988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.366 [2024-10-11 12:01:54.417998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:99368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.366 [2024-10-11 12:01:54.418005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.366 [2024-10-11 12:01:54.418014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:99376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.366 [2024-10-11 12:01:54.418022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.366 [2024-10-11 12:01:54.418031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:99384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.366 [2024-10-11 12:01:54.418039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.366 [2024-10-11 12:01:54.418049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:99392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.366 [2024-10-11 12:01:54.418056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.366 [2024-10-11 12:01:54.418070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:99400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.366 [2024-10-11 12:01:54.418078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.366 [2024-10-11 12:01:54.418088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:99408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.366 [2024-10-11 12:01:54.418095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.366 [2024-10-11 12:01:54.418106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:99416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.366 [2024-10-11 12:01:54.418114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.366 [2024-10-11 12:01:54.418124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.366 [2024-10-11 12:01:54.418131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.366 [2024-10-11 12:01:54.418141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:99432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.366 [2024-10-11 12:01:54.418149] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.366 [2024-10-11 12:01:54.418158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:99440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.366 [2024-10-11 12:01:54.418165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.366 [2024-10-11 12:01:54.418175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:99448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.366 [2024-10-11 12:01:54.418182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.366 [2024-10-11 12:01:54.418192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:99456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.366 [2024-10-11 12:01:54.418199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.366 [2024-10-11 12:01:54.418209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:99464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.366 [2024-10-11 12:01:54.418216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.366 [2024-10-11 12:01:54.418226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:99472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.366 [2024-10-11 12:01:54.418233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.366 [2024-10-11 12:01:54.418243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:99480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.366 [2024-10-11 12:01:54.418250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.366 [2024-10-11 12:01:54.418260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:99488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.366 [2024-10-11 12:01:54.418267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.366 [2024-10-11 12:01:54.418277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:99496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.366 [2024-10-11 12:01:54.418284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.366 [2024-10-11 12:01:54.418294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:99504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.366 [2024-10-11 12:01:54.418301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.366 [2024-10-11 12:01:54.418310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:99512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.366 [2024-10-11 12:01:54.418320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.366 [2024-10-11 12:01:54.418330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:99520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.366 [2024-10-11 12:01:54.418337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.366 [2024-10-11 12:01:54.418347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:99528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.367 [2024-10-11 12:01:54.418354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.367 [2024-10-11 12:01:54.418364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:99536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.367 [2024-10-11 12:01:54.418371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.367 [2024-10-11 12:01:54.418381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:99544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.367 [2024-10-11 12:01:54.418388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.367 [2024-10-11 12:01:54.418397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:99552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.367 [2024-10-11 12:01:54.418405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.367 [2024-10-11 12:01:54.418414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:99560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.367 [2024-10-11 12:01:54.418421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.367 [2024-10-11 12:01:54.418431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:99568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.367 [2024-10-11 12:01:54.418438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.367 [2024-10-11 12:01:54.418448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:99576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.367 [2024-10-11 12:01:54.418455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.367 [2024-10-11 12:01:54.418465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:99584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.367 [2024-10-11 12:01:54.418472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.367 [2024-10-11 12:01:54.418481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:99592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.367 [2024-10-11 12:01:54.418489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.367 [2024-10-11 12:01:54.418498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:99600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.367 [2024-10-11 12:01:54.418506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.367 [2024-10-11 12:01:54.418515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:99608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.367 [2024-10-11 12:01:54.418522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.367 [2024-10-11 12:01:54.418532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:99616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.367 [2024-10-11 12:01:54.418541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.367 [2024-10-11 12:01:54.418550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:99624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.367 [2024-10-11 12:01:54.418557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.367 [2024-10-11 12:01:54.418567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:99632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.367 [2024-10-11 12:01:54.418574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.367 [2024-10-11 12:01:54.418584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:99640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.367 [2024-10-11 12:01:54.418592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.367 [2024-10-11 12:01:54.418601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:99648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.367 [2024-10-11 12:01:54.418608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.367 [2024-10-11 12:01:54.418618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.367 [2024-10-11 12:01:54.418626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.367 [2024-10-11 12:01:54.418635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.367 [2024-10-11 12:01:54.418643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.367 [2024-10-11 12:01:54.418652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:99672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.367 [2024-10-11 12:01:54.418660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:06.367 [2024-10-11 12:01:54.418669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:99680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.367 [2024-10-11 12:01:54.418676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.367 [2024-10-11 12:01:54.418686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:99688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.367 [2024-10-11 12:01:54.418693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.367 [2024-10-11 12:01:54.418703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:99696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.367 [2024-10-11 12:01:54.418710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.367 [2024-10-11 12:01:54.418720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:99704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.367 [2024-10-11 12:01:54.418727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.367 [2024-10-11 12:01:54.418737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:99712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.367 [2024-10-11 12:01:54.418744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.367 [2024-10-11 12:01:54.418755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:99720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.367 [2024-10-11 12:01:54.418762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.367 [2024-10-11 12:01:54.418772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:99728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.367 [2024-10-11 12:01:54.418779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.367 [2024-10-11 12:01:54.418789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:99736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.367 [2024-10-11 12:01:54.418796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.367 [2024-10-11 12:01:54.418806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:99744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.367 [2024-10-11 12:01:54.418813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.367 [2024-10-11 12:01:54.418823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:99752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.367 [2024-10-11 12:01:54.418830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.367 [2024-10-11 12:01:54.418839] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:99760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.367 [2024-10-11 12:01:54.418846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.367 [2024-10-11 12:01:54.418856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:99768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.367 [2024-10-11 12:01:54.418863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.367 [2024-10-11 12:01:54.418873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:99792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.367 [2024-10-11 12:01:54.418880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.367 [2024-10-11 12:01:54.418890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:99800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.367 [2024-10-11 12:01:54.418897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.367 [2024-10-11 12:01:54.418907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:99808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.367 [2024-10-11 12:01:54.418914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.367 [2024-10-11 12:01:54.418923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:99816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.367 [2024-10-11 12:01:54.418931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.367 [2024-10-11 12:01:54.418940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:99824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.367 [2024-10-11 12:01:54.418947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.367 [2024-10-11 12:01:54.418957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:99832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.367 [2024-10-11 12:01:54.418966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.367 [2024-10-11 12:01:54.418976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:99840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.367 [2024-10-11 12:01:54.418983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.367 [2024-10-11 12:01:54.418992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:99848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.367 [2024-10-11 12:01:54.419000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.367 [2024-10-11 12:01:54.419009] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:99856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.367 [2024-10-11 12:01:54.419016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.367 [2024-10-11 12:01:54.419026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:99864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.367 [2024-10-11 12:01:54.419033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.368 [2024-10-11 12:01:54.419043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:99872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.368 [2024-10-11 12:01:54.419050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.368 [2024-10-11 12:01:54.419060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:99880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.368 [2024-10-11 12:01:54.419070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.368 [2024-10-11 12:01:54.419079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:99888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.368 [2024-10-11 12:01:54.419087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.368 [2024-10-11 12:01:54.419096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:99896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.368 [2024-10-11 12:01:54.419104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.368 [2024-10-11 12:01:54.419113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:99904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.368 [2024-10-11 12:01:54.419120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.368 [2024-10-11 12:01:54.419130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:99912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.368 [2024-10-11 12:01:54.419139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.368 [2024-10-11 12:01:54.419148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:99920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.368 [2024-10-11 12:01:54.419156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.368 [2024-10-11 12:01:54.419165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:99928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.368 [2024-10-11 12:01:54.419172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.368 [2024-10-11 12:01:54.419183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:99936 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.368 [2024-10-11 12:01:54.419191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.368 [2024-10-11 12:01:54.419200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.368 [2024-10-11 12:01:54.419207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.368 [2024-10-11 12:01:54.419217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:99952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.368 [2024-10-11 12:01:54.419224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.368 [2024-10-11 12:01:54.419233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:99960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.368 [2024-10-11 12:01:54.419240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.368 [2024-10-11 12:01:54.419250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:99968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.368 [2024-10-11 12:01:54.419257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.368 [2024-10-11 12:01:54.419266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:99976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.368 [2024-10-11 12:01:54.419273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.368 [2024-10-11 12:01:54.419282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:99984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.368 [2024-10-11 12:01:54.419290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.368 [2024-10-11 12:01:54.419299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:99992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.368 [2024-10-11 12:01:54.419307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.368 [2024-10-11 12:01:54.419316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:100000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.368 [2024-10-11 12:01:54.419323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.368 [2024-10-11 12:01:54.419333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:100008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.368 [2024-10-11 12:01:54.419340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.368 [2024-10-11 12:01:54.419350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:100016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.368 
[2024-10-11 12:01:54.419357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.368 [2024-10-11 12:01:54.419366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:100024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.368 [2024-10-11 12:01:54.419373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.368 [2024-10-11 12:01:54.419382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:100032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.368 [2024-10-11 12:01:54.419391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.368 [2024-10-11 12:01:54.419401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:100040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.368 [2024-10-11 12:01:54.419408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.368 [2024-10-11 12:01:54.419417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:100048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.368 [2024-10-11 12:01:54.419425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.368 [2024-10-11 12:01:54.419435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:100056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.368 [2024-10-11 12:01:54.419442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.368 [2024-10-11 12:01:54.419452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:100064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.368 [2024-10-11 12:01:54.419459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.368 [2024-10-11 12:01:54.419468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:100072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.368 [2024-10-11 12:01:54.419475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.368 [2024-10-11 12:01:54.419485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:100080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.368 [2024-10-11 12:01:54.419492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.368 [2024-10-11 12:01:54.419502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:100088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.368 [2024-10-11 12:01:54.419509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.368 [2024-10-11 12:01:54.419518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:100096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.368 [2024-10-11 12:01:54.419525] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.368 [2024-10-11 12:01:54.419535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.368 [2024-10-11 12:01:54.419542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.368 [2024-10-11 12:01:54.419551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:100112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.368 [2024-10-11 12:01:54.419558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.368 [2024-10-11 12:01:54.419568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:100120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.368 [2024-10-11 12:01:54.419575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.368 [2024-10-11 12:01:54.419584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:100128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.368 [2024-10-11 12:01:54.419592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.368 [2024-10-11 12:01:54.419601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:100136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.368 [2024-10-11 12:01:54.419610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.368 [2024-10-11 12:01:54.419619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.368 [2024-10-11 12:01:54.419626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.368 [2024-10-11 12:01:54.419636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:99776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.368 [2024-10-11 12:01:54.419643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.368 [2024-10-11 12:01:54.419666] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.368 [2024-10-11 12:01:54.419674] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.368 [2024-10-11 12:01:54.419681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99784 len:8 PRP1 0x0 PRP2 0x0 00:25:06.368 [2024-10-11 12:01:54.419689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.368 [2024-10-11 12:01:54.419726] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x98d2a0 was disconnected and freed. reset controller. 
00:25:06.368 [2024-10-11 12:01:54.419736] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:06.368 [2024-10-11 12:01:54.419757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.368 [2024-10-11 12:01:54.419765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.368 [2024-10-11 12:01:54.419774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.368 [2024-10-11 12:01:54.419781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.368 [2024-10-11 12:01:54.419789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.368 [2024-10-11 12:01:54.419797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.369 [2024-10-11 12:01:54.419804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.369 [2024-10-11 12:01:54.419812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.369 [2024-10-11 12:01:54.419819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:06.369 [2024-10-11 12:01:54.419847] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x96c280 (9): Bad file descriptor 00:25:06.369 [2024-10-11 12:01:54.423374] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:06.369 [2024-10-11 12:01:54.461843] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:06.369 11262.50 IOPS, 43.99 MiB/s [2024-10-11T10:02:09.072Z] 11197.00 IOPS, 43.74 MiB/s [2024-10-11T10:02:09.072Z] 11598.00 IOPS, 45.30 MiB/s [2024-10-11T10:02:09.072Z] [2024-10-11 12:01:57.876280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:42360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.369 [2024-10-11 12:01:57.876311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.369 [2024-10-11 12:01:57.876323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.369 [2024-10-11 12:01:57.876333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.369 [2024-10-11 12:01:57.876340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:42376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.369 [2024-10-11 12:01:57.876345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.369 [2024-10-11 12:01:57.876352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:42384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.369 [2024-10-11 12:01:57.876358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.369 [2024-10-11 12:01:57.876364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:42392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.369 [2024-10-11 12:01:57.876369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.369 [2024-10-11 12:01:57.876376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:42400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.369 [2024-10-11 12:01:57.876381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.369 [2024-10-11 12:01:57.876388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:42408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.369 [2024-10-11 12:01:57.876393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.369 [2024-10-11 12:01:57.876399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:42416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.369 [2024-10-11 12:01:57.876404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.369 [2024-10-11 12:01:57.876411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:42424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.369 [2024-10-11 12:01:57.876416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.369 [2024-10-11 12:01:57.876423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:42432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.369 [2024-10-11 12:01:57.876428] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.369 [2024-10-11 12:01:57.876434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:42440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.369 [2024-10-11 12:01:57.876439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.369 [2024-10-11 12:01:57.876446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:42448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.369 [2024-10-11 12:01:57.876451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.369 [2024-10-11 12:01:57.876457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:42456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.369 [2024-10-11 12:01:57.876462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.369 [2024-10-11 12:01:57.876469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:42464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.369 [2024-10-11 12:01:57.876474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.369 [2024-10-11 12:01:57.876482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:42472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.369 [2024-10-11 12:01:57.876487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.369 [2024-10-11 12:01:57.876493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:42480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.369 [2024-10-11 12:01:57.876498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.369 [2024-10-11 12:01:57.876505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:42488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.369 [2024-10-11 12:01:57.876510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.369 [2024-10-11 12:01:57.876516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:42496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.369 [2024-10-11 12:01:57.876521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.369 [2024-10-11 12:01:57.876528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:42504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.369 [2024-10-11 12:01:57.876533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.369 [2024-10-11 12:01:57.876539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:42512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.369 [2024-10-11 12:01:57.876544] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.369 [2024-10-11 12:01:57.876550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:42520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.369 [2024-10-11 12:01:57.876555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.369 [2024-10-11 12:01:57.876562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:42528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.369 [2024-10-11 12:01:57.876567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.369 [2024-10-11 12:01:57.876573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:42536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.369 [2024-10-11 12:01:57.876578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.369 [2024-10-11 12:01:57.876585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:42544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.369 [2024-10-11 12:01:57.876590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.369 [2024-10-11 12:01:57.876596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:42552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.369 [2024-10-11 12:01:57.876601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.369 [2024-10-11 12:01:57.876608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:42560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.369 [2024-10-11 12:01:57.876613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.369 [2024-10-11 12:01:57.876619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:42568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.369 [2024-10-11 12:01:57.876624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.369 [2024-10-11 12:01:57.876632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:42576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.369 [2024-10-11 12:01:57.876637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.369 [2024-10-11 12:01:57.876643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:42584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.369 [2024-10-11 12:01:57.876648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.369 [2024-10-11 12:01:57.876654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:42592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.369 [2024-10-11 12:01:57.876660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.369 [2024-10-11 12:01:57.876666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:42600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.369 [2024-10-11 12:01:57.876671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.369 [2024-10-11 12:01:57.876677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:42608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.369 [2024-10-11 12:01:57.876682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.370 [2024-10-11 12:01:57.876688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:42616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.370 [2024-10-11 12:01:57.876694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.370 [2024-10-11 12:01:57.876701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:42624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.370 [2024-10-11 12:01:57.876706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.370 [2024-10-11 12:01:57.876713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:42632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.370 [2024-10-11 12:01:57.876718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.370 [2024-10-11 12:01:57.876725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:42640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.370 [2024-10-11 12:01:57.876730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.370 [2024-10-11 12:01:57.876737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:42648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.370 [2024-10-11 12:01:57.876742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.370 [2024-10-11 12:01:57.876748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:42656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.370 [2024-10-11 12:01:57.876753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.370 [2024-10-11 12:01:57.876760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:42664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.370 [2024-10-11 12:01:57.876765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.370 [2024-10-11 12:01:57.876772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:42672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.370 [2024-10-11 12:01:57.876778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:06.370 [2024-10-11 12:01:57.876784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:42680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.370 [2024-10-11 12:01:57.876789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.370 [2024-10-11 12:01:57.876796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:42688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.370 [2024-10-11 12:01:57.876801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.370 [2024-10-11 12:01:57.876808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:42696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.370 [2024-10-11 12:01:57.876812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.370 [2024-10-11 12:01:57.876819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:42704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.370 [2024-10-11 12:01:57.876824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.370 [2024-10-11 12:01:57.876830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:42712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.370 [2024-10-11 12:01:57.876835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.370 [2024-10-11 12:01:57.876841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:42720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.370 [2024-10-11 12:01:57.876846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.370 [2024-10-11 12:01:57.876853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:42728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.370 [2024-10-11 12:01:57.876858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.370 [2024-10-11 12:01:57.876864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:42736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.370 [2024-10-11 12:01:57.876869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.370 [2024-10-11 12:01:57.876875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:42744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.370 [2024-10-11 12:01:57.876880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.370 [2024-10-11 12:01:57.876886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:42752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.370 [2024-10-11 12:01:57.876891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.370 [2024-10-11 
12:01:57.876898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:42760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.370 [2024-10-11 12:01:57.876903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.370 [2024-10-11 12:01:57.876909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:42768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.370 [2024-10-11 12:01:57.876914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.370 [2024-10-11 12:01:57.876922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:42776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.370 [2024-10-11 12:01:57.876927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.370 [2024-10-11 12:01:57.876933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:42784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.370 [2024-10-11 12:01:57.876938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.370 [2024-10-11 12:01:57.876944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:42792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.370 [2024-10-11 12:01:57.876950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.370 [2024-10-11 12:01:57.876956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:42800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.370 [2024-10-11 12:01:57.876961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.370 [2024-10-11 12:01:57.876967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:42808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.370 [2024-10-11 12:01:57.876972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.370 [2024-10-11 12:01:57.876979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:42816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.370 [2024-10-11 12:01:57.876984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.370 [2024-10-11 12:01:57.876990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:42824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.370 [2024-10-11 12:01:57.876995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.370 [2024-10-11 12:01:57.877002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:42832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.370 [2024-10-11 12:01:57.877007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.370 [2024-10-11 12:01:57.877013] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:42840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.370 [2024-10-11 12:01:57.877018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.370 [2024-10-11 12:01:57.877024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:42848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.370 [2024-10-11 12:01:57.877029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.370 [2024-10-11 12:01:57.877036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:42856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.370 [2024-10-11 12:01:57.877041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.370 [2024-10-11 12:01:57.877047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:42864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.370 [2024-10-11 12:01:57.877052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.370 [2024-10-11 12:01:57.877058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:42872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.370 [2024-10-11 12:01:57.877070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.370 [2024-10-11 12:01:57.877077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:42880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.370 [2024-10-11 12:01:57.877082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.370 [2024-10-11 12:01:57.877089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:42888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.370 [2024-10-11 12:01:57.877094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.370 [2024-10-11 12:01:57.877100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:42896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.370 [2024-10-11 12:01:57.877105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.370 [2024-10-11 12:01:57.877111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:42904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.370 [2024-10-11 12:01:57.877117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.370 [2024-10-11 12:01:57.877123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:42912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.370 [2024-10-11 12:01:57.877128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.370 [2024-10-11 12:01:57.877135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:38 nsid:1 lba:42920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.370 [2024-10-11 12:01:57.877140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.370 [2024-10-11 12:01:57.877146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:42928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.370 [2024-10-11 12:01:57.877151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.370 [2024-10-11 12:01:57.877158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:42936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.370 [2024-10-11 12:01:57.877162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.370 [2024-10-11 12:01:57.877169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:42944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.370 [2024-10-11 12:01:57.877174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.371 [2024-10-11 12:01:57.877180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:42952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.371 [2024-10-11 12:01:57.877185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.371 [2024-10-11 12:01:57.877192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:42960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.371 [2024-10-11 12:01:57.877196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.371 [2024-10-11 12:01:57.877203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:42968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.371 [2024-10-11 12:01:57.877208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.371 [2024-10-11 12:01:57.877215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:42976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.371 [2024-10-11 12:01:57.877222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.371 [2024-10-11 12:01:57.877228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:42984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.371 [2024-10-11 12:01:57.877233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.371 [2024-10-11 12:01:57.877239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:42992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.371 [2024-10-11 12:01:57.877244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.371 [2024-10-11 12:01:57.877251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:43000 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:25:06.371 [2024-10-11 12:01:57.877256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.371 [2024-10-11 12:01:57.877262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:43008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.371 [2024-10-11 12:01:57.877267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.371 [2024-10-11 12:01:57.877274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:43016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.371 [2024-10-11 12:01:57.877279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.371 [2024-10-11 12:01:57.877285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:43024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.371 [2024-10-11 12:01:57.877290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.371 [2024-10-11 12:01:57.877296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:43032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.371 [2024-10-11 12:01:57.877301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.371 [2024-10-11 12:01:57.877307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:43040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.371 [2024-10-11 12:01:57.877312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.371 [2024-10-11 12:01:57.877319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:43048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.371 [2024-10-11 12:01:57.877324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.371 [2024-10-11 12:01:57.877330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:43056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.371 [2024-10-11 12:01:57.877335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.371 [2024-10-11 12:01:57.877341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:43064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.371 [2024-10-11 12:01:57.877346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.371 [2024-10-11 12:01:57.877353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:43072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.371 [2024-10-11 12:01:57.877357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.371 [2024-10-11 12:01:57.877365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:43080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.371 [2024-10-11 
12:01:57.877370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.371 [2024-10-11 12:01:57.877377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:43088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.371 [2024-10-11 12:01:57.877382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.371 [2024-10-11 12:01:57.877388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:43096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.371 [2024-10-11 12:01:57.877393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.371 [2024-10-11 12:01:57.877399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:43104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.371 [2024-10-11 12:01:57.877404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.371 [2024-10-11 12:01:57.877411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:43112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.371 [2024-10-11 12:01:57.877416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.371 [2024-10-11 12:01:57.877422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:43120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.371 [2024-10-11 12:01:57.877427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.371 [2024-10-11 12:01:57.877433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:43128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.371 [2024-10-11 12:01:57.877439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.371 [2024-10-11 12:01:57.877445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:43136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.371 [2024-10-11 12:01:57.877450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.371 [2024-10-11 12:01:57.877456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:43144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.371 [2024-10-11 12:01:57.877462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.371 [2024-10-11 12:01:57.877468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:43152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.371 [2024-10-11 12:01:57.877473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.371 [2024-10-11 12:01:57.877479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:43160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.371 [2024-10-11 12:01:57.877484] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.371 [2024-10-11 12:01:57.877490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:43168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.371 [2024-10-11 12:01:57.877495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.371 [2024-10-11 12:01:57.877502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:43176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.371 [2024-10-11 12:01:57.877508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.371 [2024-10-11 12:01:57.877514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:43184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.371 [2024-10-11 12:01:57.877519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.371 [2024-10-11 12:01:57.877526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:43192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.371 [2024-10-11 12:01:57.877531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.371 [2024-10-11 12:01:57.877537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:43200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.371 [2024-10-11 12:01:57.877542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.371 [2024-10-11 12:01:57.877548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:43208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.371 [2024-10-11 12:01:57.877553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.371 [2024-10-11 12:01:57.877560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:43216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.371 [2024-10-11 12:01:57.877565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.371 [2024-10-11 12:01:57.877571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:43224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.371 [2024-10-11 12:01:57.877576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.371 [2024-10-11 12:01:57.877582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:43232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.371 [2024-10-11 12:01:57.877587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.371 [2024-10-11 12:01:57.877593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:43240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.371 [2024-10-11 12:01:57.877601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.371 [2024-10-11 12:01:57.877609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:43248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.371 [2024-10-11 12:01:57.877614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.371 [2024-10-11 12:01:57.877629] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.371 [2024-10-11 12:01:57.877634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43256 len:8 PRP1 0x0 PRP2 0x0 00:25:06.371 [2024-10-11 12:01:57.877639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.371 [2024-10-11 12:01:57.877647] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.371 [2024-10-11 12:01:57.877651] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.371 [2024-10-11 12:01:57.877655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43264 len:8 PRP1 0x0 PRP2 0x0 00:25:06.371 [2024-10-11 12:01:57.877660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.371 [2024-10-11 12:01:57.877665] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.371 [2024-10-11 12:01:57.877671] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.371 [2024-10-11 12:01:57.877675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43272 len:8 PRP1 0x0 PRP2 0x0 00:25:06.372 [2024-10-11 12:01:57.877680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.372 [2024-10-11 12:01:57.877685] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.372 [2024-10-11 12:01:57.877689] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.372 [2024-10-11 12:01:57.877693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43280 len:8 PRP1 0x0 PRP2 0x0 00:25:06.372 [2024-10-11 12:01:57.877697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.372 [2024-10-11 12:01:57.877703] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.372 [2024-10-11 12:01:57.877707] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.372 [2024-10-11 12:01:57.877711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43288 len:8 PRP1 0x0 PRP2 0x0 00:25:06.372 [2024-10-11 12:01:57.877716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.372 [2024-10-11 12:01:57.877721] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.372 [2024-10-11 12:01:57.877724] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.372 [2024-10-11 12:01:57.877729] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43296 len:8 PRP1 0x0 PRP2 0x0 00:25:06.372 [2024-10-11 12:01:57.877734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.372 [2024-10-11 12:01:57.877739] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.372 [2024-10-11 12:01:57.877742] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.372 [2024-10-11 12:01:57.877746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43304 len:8 PRP1 0x0 PRP2 0x0 00:25:06.372 [2024-10-11 12:01:57.877751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.372 [2024-10-11 12:01:57.877757] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.372 [2024-10-11 12:01:57.877761] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.372 [2024-10-11 12:01:57.877765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43312 len:8 PRP1 0x0 PRP2 0x0 00:25:06.372 [2024-10-11 12:01:57.877770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.372 [2024-10-11 12:01:57.877775] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.372 [2024-10-11 12:01:57.877779] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.372 [2024-10-11 12:01:57.877783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43320 len:8 PRP1 0x0 PRP2 0x0 00:25:06.372 [2024-10-11 12:01:57.877789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.372 [2024-10-11 12:01:57.877794] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.372 [2024-10-11 12:01:57.877798] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.372 [2024-10-11 12:01:57.877803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43328 len:8 PRP1 0x0 PRP2 0x0 00:25:06.372 [2024-10-11 12:01:57.877808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.372 [2024-10-11 12:01:57.877814] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.372 [2024-10-11 12:01:57.877818] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.372 [2024-10-11 12:01:57.877822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43336 len:8 PRP1 0x0 PRP2 0x0 00:25:06.372 [2024-10-11 12:01:57.877827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.372 [2024-10-11 12:01:57.877833] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.372 [2024-10-11 12:01:57.877837] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.372 [2024-10-11 12:01:57.877841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43344 len:8 PRP1 
0x0 PRP2 0x0 00:25:06.372 [2024-10-11 12:01:57.877846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.372 [2024-10-11 12:01:57.877851] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.372 [2024-10-11 12:01:57.877855] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.372 [2024-10-11 12:01:57.877859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43352 len:8 PRP1 0x0 PRP2 0x0 00:25:06.372 [2024-10-11 12:01:57.877864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.372 [2024-10-11 12:01:57.877869] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.372 [2024-10-11 12:01:57.877873] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.372 [2024-10-11 12:01:57.877877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43360 len:8 PRP1 0x0 PRP2 0x0 00:25:06.372 [2024-10-11 12:01:57.877882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.372 [2024-10-11 12:01:57.890174] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.372 [2024-10-11 12:01:57.890201] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.372 [2024-10-11 12:01:57.890212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43368 len:8 PRP1 0x0 PRP2 0x0 00:25:06.372 [2024-10-11 12:01:57.890221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.372 [2024-10-11 12:01:57.890228] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.372 [2024-10-11 12:01:57.890233] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.372 [2024-10-11 12:01:57.890241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43376 len:8 PRP1 0x0 PRP2 0x0 00:25:06.372 [2024-10-11 12:01:57.890248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.372 [2024-10-11 12:01:57.890287] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x98f230 was disconnected and freed. reset controller. 
00:25:06.372 [2024-10-11 12:01:57.890297] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:25:06.372 [2024-10-11 12:01:57.890323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.372 [2024-10-11 12:01:57.890332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.372 [2024-10-11 12:01:57.890341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.372 [2024-10-11 12:01:57.890353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.372 [2024-10-11 12:01:57.890361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.372 [2024-10-11 12:01:57.890368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.372 [2024-10-11 12:01:57.890375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.372 [2024-10-11 12:01:57.890382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.372 [2024-10-11 12:01:57.890389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:06.372 [2024-10-11 12:01:57.890429] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x96c280 (9): Bad file descriptor 00:25:06.372 [2024-10-11 12:01:57.893670] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:06.372 [2024-10-11 12:01:57.968868] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:06.372 11650.20 IOPS, 45.51 MiB/s [2024-10-11T10:02:09.075Z] 11851.17 IOPS, 46.29 MiB/s [2024-10-11T10:02:09.075Z] 11974.29 IOPS, 46.77 MiB/s [2024-10-11T10:02:09.075Z] 12078.25 IOPS, 47.18 MiB/s [2024-10-11T10:02:09.075Z] [2024-10-11 12:02:02.259584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:118944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.372 [2024-10-11 12:02:02.259615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.372 [2024-10-11 12:02:02.259628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:118952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.372 [2024-10-11 12:02:02.259634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.372 [2024-10-11 12:02:02.259641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:118960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.372 [2024-10-11 12:02:02.259646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.372 [2024-10-11 12:02:02.259653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:118968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.372 [2024-10-11 12:02:02.259659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.372 [2024-10-11 12:02:02.259665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:118976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.372 [2024-10-11 12:02:02.259671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.372 [2024-10-11 12:02:02.259677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:118984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.372 [2024-10-11 12:02:02.259682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.372 [2024-10-11 12:02:02.259689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:118992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.372 [2024-10-11 12:02:02.259694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.372 [2024-10-11 12:02:02.259701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:119000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.372 [2024-10-11 12:02:02.259706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.372 [2024-10-11 12:02:02.259713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:119008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.372 [2024-10-11 12:02:02.259724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.372 [2024-10-11 12:02:02.259732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:119016 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:06.372 [2024-10-11 12:02:02.259738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.372 [2024-10-11 12:02:02.259744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:119024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.372 [2024-10-11 12:02:02.259750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.372 [2024-10-11 12:02:02.259756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:119032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.372 [2024-10-11 12:02:02.259761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.373 [2024-10-11 12:02:02.259768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:119040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.373 [2024-10-11 12:02:02.259773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.373 [2024-10-11 12:02:02.259779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:119048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.373 [2024-10-11 12:02:02.259785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.373 [2024-10-11 12:02:02.259791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:119056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.373 [2024-10-11 12:02:02.259796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.373 [2024-10-11 12:02:02.259803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:119064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.373 [2024-10-11 12:02:02.259808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.373 [2024-10-11 12:02:02.259815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:119072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.373 [2024-10-11 12:02:02.259820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.373 [2024-10-11 12:02:02.259827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:119080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.373 [2024-10-11 12:02:02.259832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.373 [2024-10-11 12:02:02.259839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:119088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.373 [2024-10-11 12:02:02.259844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.373 [2024-10-11 12:02:02.259850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:119096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.373 
[2024-10-11 12:02:02.259856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.373 [2024-10-11 12:02:02.259862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:119104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.373 [2024-10-11 12:02:02.259867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.373 [2024-10-11 12:02:02.259875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:119112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.373 [2024-10-11 12:02:02.259881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.373 [2024-10-11 12:02:02.259887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:119120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.373 [2024-10-11 12:02:02.259892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.373 [2024-10-11 12:02:02.259899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:119128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.373 [2024-10-11 12:02:02.259904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.373 [2024-10-11 12:02:02.259910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:119136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.373 [2024-10-11 12:02:02.259915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.373 [2024-10-11 12:02:02.259922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:119144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.373 [2024-10-11 12:02:02.259927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.373 [2024-10-11 12:02:02.259934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:119152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.373 [2024-10-11 12:02:02.259939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.373 [2024-10-11 12:02:02.259945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:119160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.373 [2024-10-11 12:02:02.259950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.373 [2024-10-11 12:02:02.259957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:119168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.373 [2024-10-11 12:02:02.259962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.373 [2024-10-11 12:02:02.259968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:119176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.373 [2024-10-11 12:02:02.259974] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.373 [2024-10-11 12:02:02.259981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:119184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.373 [2024-10-11 12:02:02.259986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.373 [2024-10-11 12:02:02.259992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:119192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.373 [2024-10-11 12:02:02.259997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.373 [2024-10-11 12:02:02.260004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:119200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.373 [2024-10-11 12:02:02.260010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.373 [2024-10-11 12:02:02.260017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:119208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.373 [2024-10-11 12:02:02.260024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.373 [2024-10-11 12:02:02.260030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:119216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.373 [2024-10-11 12:02:02.260035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.373 [2024-10-11 12:02:02.260042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:119224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.373 [2024-10-11 12:02:02.260047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.373 [2024-10-11 12:02:02.260054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:119232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.373 [2024-10-11 12:02:02.260058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.373 [2024-10-11 12:02:02.260070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:119240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.373 [2024-10-11 12:02:02.260075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.373 [2024-10-11 12:02:02.260081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:119248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.373 [2024-10-11 12:02:02.260086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.373 [2024-10-11 12:02:02.260093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:119256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.373 [2024-10-11 12:02:02.260098] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.373 [2024-10-11 12:02:02.260104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:119264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.373 [2024-10-11 12:02:02.260109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.373 [2024-10-11 12:02:02.260116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:119272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.373 [2024-10-11 12:02:02.260121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.373 [2024-10-11 12:02:02.260128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:119336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.373 [2024-10-11 12:02:02.260133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.373 [2024-10-11 12:02:02.260139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:119344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.373 [2024-10-11 12:02:02.260144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.373 [2024-10-11 12:02:02.260150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:119352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.373 [2024-10-11 12:02:02.260156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.373 [2024-10-11 12:02:02.260163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:119360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.373 [2024-10-11 12:02:02.260168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.373 [2024-10-11 12:02:02.260176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:119368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.373 [2024-10-11 12:02:02.260181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.373 [2024-10-11 12:02:02.260188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:119376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.373 [2024-10-11 12:02:02.260193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.373 [2024-10-11 12:02:02.260199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.373 [2024-10-11 12:02:02.260204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.373 [2024-10-11 12:02:02.260211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:119392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.373 [2024-10-11 12:02:02.260216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.373 [2024-10-11 12:02:02.260222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:119400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.373 [2024-10-11 12:02:02.260227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.373 [2024-10-11 12:02:02.260233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:119408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.373 [2024-10-11 12:02:02.260239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.373 [2024-10-11 12:02:02.260245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:119416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.373 [2024-10-11 12:02:02.260250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.374 [2024-10-11 12:02:02.260257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:119424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.374 [2024-10-11 12:02:02.260261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.374 [2024-10-11 12:02:02.260268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:119432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.374 [2024-10-11 12:02:02.260273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.374 [2024-10-11 12:02:02.260279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:119440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.374 [2024-10-11 12:02:02.260285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.374 [2024-10-11 12:02:02.260291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:119448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.374 [2024-10-11 12:02:02.260297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.374 [2024-10-11 12:02:02.260303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:119456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.374 [2024-10-11 12:02:02.260308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.374 [2024-10-11 12:02:02.260314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:119464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.374 [2024-10-11 12:02:02.260320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.374 [2024-10-11 12:02:02.260327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:119472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.374 [2024-10-11 12:02:02.260332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.374 [2024-10-11 12:02:02.260338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:119480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.374 [2024-10-11 12:02:02.260343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.374 [2024-10-11 12:02:02.260350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:119488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.374 [2024-10-11 12:02:02.260355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.374 [2024-10-11 12:02:02.260361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:119496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.374 [2024-10-11 12:02:02.260366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.374 [2024-10-11 12:02:02.260372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:119504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.374 [2024-10-11 12:02:02.260377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.374 [2024-10-11 12:02:02.260383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:119512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.374 [2024-10-11 12:02:02.260389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.374 [2024-10-11 12:02:02.260396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:119520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.374 [2024-10-11 12:02:02.260401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.374 [2024-10-11 12:02:02.260407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:119528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.374 [2024-10-11 12:02:02.260412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.374 [2024-10-11 12:02:02.260419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:119536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.374 [2024-10-11 12:02:02.260424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.374 [2024-10-11 12:02:02.260430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:119544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.374 [2024-10-11 12:02:02.260435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.374 [2024-10-11 12:02:02.260442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:119552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.374 [2024-10-11 12:02:02.260447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.374 
[2024-10-11 12:02:02.260457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:119560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.374 [2024-10-11 12:02:02.260462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.374 [2024-10-11 12:02:02.260470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:119568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.374 [2024-10-11 12:02:02.260475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.374 [2024-10-11 12:02:02.260481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:119576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.374 [2024-10-11 12:02:02.260486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.374 [2024-10-11 12:02:02.260493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:119584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.374 [2024-10-11 12:02:02.260498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.374 [2024-10-11 12:02:02.260504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:119592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.374 [2024-10-11 12:02:02.260509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.374 [2024-10-11 12:02:02.260515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:119600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.374 [2024-10-11 12:02:02.260520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.374 [2024-10-11 12:02:02.260527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:119608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.374 [2024-10-11 12:02:02.260532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.374 [2024-10-11 12:02:02.260538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:119616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.374 [2024-10-11 12:02:02.260544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.374 [2024-10-11 12:02:02.260550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:119624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.374 [2024-10-11 12:02:02.260555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.374 [2024-10-11 12:02:02.260561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:119632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.374 [2024-10-11 12:02:02.260566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.374 [2024-10-11 12:02:02.260572] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:119640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.374 [2024-10-11 12:02:02.260578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.374 [2024-10-11 12:02:02.260585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:119648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.374 [2024-10-11 12:02:02.260589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.374 [2024-10-11 12:02:02.260596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:119656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.374 [2024-10-11 12:02:02.260600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.374 [2024-10-11 12:02:02.260607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:119664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.374 [2024-10-11 12:02:02.260612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.374 [2024-10-11 12:02:02.260620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:119672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.374 [2024-10-11 12:02:02.260624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.374 [2024-10-11 12:02:02.260631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:119680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.374 [2024-10-11 12:02:02.260636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.374 [2024-10-11 12:02:02.260642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:119688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.374 [2024-10-11 12:02:02.260647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.374 [2024-10-11 12:02:02.260653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:119696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.374 [2024-10-11 12:02:02.260658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.374 [2024-10-11 12:02:02.260665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:119704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.374 [2024-10-11 12:02:02.260670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.374 [2024-10-11 12:02:02.260676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:119712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.374 [2024-10-11 12:02:02.260681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.374 [2024-10-11 12:02:02.260687] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:119720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.375 [2024-10-11 12:02:02.260692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.375 [2024-10-11 12:02:02.260699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:119728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.375 [2024-10-11 12:02:02.260703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.375 [2024-10-11 12:02:02.260710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:119736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.375 [2024-10-11 12:02:02.260715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.375 [2024-10-11 12:02:02.260721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:119744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.375 [2024-10-11 12:02:02.260726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.375 [2024-10-11 12:02:02.260732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:119752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.375 [2024-10-11 12:02:02.260737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.375 [2024-10-11 12:02:02.260744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:119760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.375 [2024-10-11 12:02:02.260748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.375 [2024-10-11 12:02:02.260755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:119768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.375 [2024-10-11 12:02:02.260761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.375 [2024-10-11 12:02:02.260768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:119776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.375 [2024-10-11 12:02:02.260773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.375 [2024-10-11 12:02:02.260779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:119784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.375 [2024-10-11 12:02:02.260784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.375 [2024-10-11 12:02:02.260791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:119792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.375 [2024-10-11 12:02:02.260796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.375 [2024-10-11 12:02:02.260802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:73 nsid:1 lba:119800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.375 [2024-10-11 12:02:02.260807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.375 [2024-10-11 12:02:02.260814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:119808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.375 [2024-10-11 12:02:02.260819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.375 [2024-10-11 12:02:02.260827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:119816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.375 [2024-10-11 12:02:02.260832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.375 [2024-10-11 12:02:02.260838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:119824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.375 [2024-10-11 12:02:02.260843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.375 [2024-10-11 12:02:02.260850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:119832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.375 [2024-10-11 12:02:02.260855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.375 [2024-10-11 12:02:02.260861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:119840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.375 [2024-10-11 12:02:02.260866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.375 [2024-10-11 12:02:02.260873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:119848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.375 [2024-10-11 12:02:02.260878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.375 [2024-10-11 12:02:02.260884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:119856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.375 [2024-10-11 12:02:02.260889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.375 [2024-10-11 12:02:02.260896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:119864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.375 [2024-10-11 12:02:02.260901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.375 [2024-10-11 12:02:02.260908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:119872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.375 [2024-10-11 12:02:02.260914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.375 [2024-10-11 12:02:02.260920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:119880 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.375 [2024-10-11 12:02:02.260925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.375 [2024-10-11 12:02:02.260931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:119888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.375 [2024-10-11 12:02:02.260937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.375 [2024-10-11 12:02:02.260943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:119896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.375 [2024-10-11 12:02:02.260948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.375 [2024-10-11 12:02:02.260954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:119904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.375 [2024-10-11 12:02:02.260959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.375 [2024-10-11 12:02:02.260966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:119912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.375 [2024-10-11 12:02:02.260971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.375 [2024-10-11 12:02:02.260989] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.375 [2024-10-11 12:02:02.260994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119920 len:8 PRP1 0x0 PRP2 0x0 00:25:06.375 [2024-10-11 12:02:02.260999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.375 [2024-10-11 12:02:02.261007] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.375 [2024-10-11 12:02:02.261011] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.375 [2024-10-11 12:02:02.261016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119928 len:8 PRP1 0x0 PRP2 0x0 00:25:06.375 [2024-10-11 12:02:02.261021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.375 [2024-10-11 12:02:02.261026] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.375 [2024-10-11 12:02:02.261030] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.375 [2024-10-11 12:02:02.261034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119936 len:8 PRP1 0x0 PRP2 0x0 00:25:06.375 [2024-10-11 12:02:02.261039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.375 [2024-10-11 12:02:02.261044] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.375 [2024-10-11 12:02:02.261048] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.375 [2024-10-11 12:02:02.261052] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119944 len:8 PRP1 0x0 PRP2 0x0 00:25:06.375 [2024-10-11 12:02:02.261057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.375 [2024-10-11 12:02:02.261065] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.375 [2024-10-11 12:02:02.261070] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.375 [2024-10-11 12:02:02.261075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119952 len:8 PRP1 0x0 PRP2 0x0 00:25:06.375 [2024-10-11 12:02:02.261080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.375 [2024-10-11 12:02:02.261085] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.375 [2024-10-11 12:02:02.261088] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.375 [2024-10-11 12:02:02.261093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119960 len:8 PRP1 0x0 PRP2 0x0 00:25:06.375 [2024-10-11 12:02:02.261097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.375 [2024-10-11 12:02:02.261103] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.375 [2024-10-11 12:02:02.261106] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.375 [2024-10-11 12:02:02.261110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119280 len:8 PRP1 0x0 PRP2 0x0 00:25:06.375 [2024-10-11 12:02:02.261115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.375 [2024-10-11 12:02:02.261122] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.375 [2024-10-11 12:02:02.261126] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.375 [2024-10-11 12:02:02.261130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119288 len:8 PRP1 0x0 PRP2 0x0 00:25:06.375 [2024-10-11 12:02:02.261135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.375 [2024-10-11 12:02:02.261140] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.375 [2024-10-11 12:02:02.261144] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.375 [2024-10-11 12:02:02.261148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119296 len:8 PRP1 0x0 PRP2 0x0 00:25:06.375 [2024-10-11 12:02:02.261153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.375 [2024-10-11 12:02:02.261158] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.375 [2024-10-11 12:02:02.261161] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.375 [2024-10-11 12:02:02.261166] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119304 len:8 PRP1 0x0 PRP2 0x0 00:25:06.375 [2024-10-11 12:02:02.261174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.375 [2024-10-11 12:02:02.261179] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.375 [2024-10-11 12:02:02.261183] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.376 [2024-10-11 12:02:02.261187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119312 len:8 PRP1 0x0 PRP2 0x0 00:25:06.376 [2024-10-11 12:02:02.261192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.376 [2024-10-11 12:02:02.261197] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.376 [2024-10-11 12:02:02.261201] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.376 [2024-10-11 12:02:02.261205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119320 len:8 PRP1 0x0 PRP2 0x0 00:25:06.376 [2024-10-11 12:02:02.261210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.376 [2024-10-11 12:02:02.261216] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.376 [2024-10-11 12:02:02.261220] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.376 [2024-10-11 12:02:02.274418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119328 len:8 PRP1 0x0 PRP2 0x0 00:25:06.376 [2024-10-11 12:02:02.274443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.376 [2024-10-11 12:02:02.274489] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x98f090 was disconnected and freed. reset controller. 
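The long dump above is the I/O qpair teardown path: every READ and WRITE still queued on qpair 1 is completed manually with ABORTED - SQ DELETION (00/08), i.e. status code type 0h / status code 08h ("Command Aborted due to SQ Deletion"), before the qpair is freed and the controller reset starts. As a rough, purely illustrative sanity check (not something the test itself does), those abort completions could be counted from the saved log with a one-liner such as:
  grep -c 'ABORTED - SQ DELETION' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt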
00:25:06.376 [2024-10-11 12:02:02.274497] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:25:06.376 [2024-10-11 12:02:02.274523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.376 [2024-10-11 12:02:02.274530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.376 [2024-10-11 12:02:02.274539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.376 [2024-10-11 12:02:02.274544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.376 [2024-10-11 12:02:02.274551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.376 [2024-10-11 12:02:02.274557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.376 [2024-10-11 12:02:02.274563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.376 [2024-10-11 12:02:02.274569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.376 [2024-10-11 12:02:02.274575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:06.376 [2024-10-11 12:02:02.274609] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x96c280 (9): Bad file descriptor 00:25:06.376 [2024-10-11 12:02:02.277340] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:06.376 [2024-10-11 12:02:02.343901] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
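The failover itself is host-driven: when the active path drops, bdev_nvme fails over to the next registered trid (here from 10.0.0.2:4422 back to 10.0.0.2:4420), aborts the queued I/O, disconnects the stale qpair and resets the controller. A minimal sketch of the RPC sequence behind this, reconstructed from the commands the failover.sh trace below actually runs ($rpc is only shorthand for the scripts/rpc.py path; the ports, NQN and -x failover flag are taken from the script, everything else is illustrative):
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # expose the subsystem on the alternate ports the host can fail over to
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  # attach the same subsystem over several paths, with failover between them
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  # dropping the active path forces bdev_nvme to fail over to the next one
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1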
00:25:06.376 12065.44 IOPS, 47.13 MiB/s [2024-10-11T10:02:09.079Z] 12132.80 IOPS, 47.39 MiB/s [2024-10-11T10:02:09.079Z] 12186.27 IOPS, 47.60 MiB/s [2024-10-11T10:02:09.079Z] 12245.67 IOPS, 47.83 MiB/s [2024-10-11T10:02:09.079Z] 12288.54 IOPS, 48.00 MiB/s [2024-10-11T10:02:09.079Z] 12335.21 IOPS, 48.18 MiB/s [2024-10-11T10:02:09.079Z] 12374.40 IOPS, 48.34 MiB/s 00:25:06.376 Latency(us) 00:25:06.376 [2024-10-11T10:02:09.079Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:06.376 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:06.376 Verification LBA range: start 0x0 length 0x4000 00:25:06.376 NVMe0n1 : 15.01 12375.83 48.34 545.09 0.00 9885.33 549.55 20097.71 00:25:06.376 [2024-10-11T10:02:09.079Z] =================================================================================================================== 00:25:06.376 [2024-10-11T10:02:09.079Z] Total : 12375.83 48.34 545.09 0.00 9885.33 549.55 20097.71 00:25:06.376 Received shutdown signal, test time was about 15.000000 seconds 00:25:06.376 00:25:06.376 Latency(us) 00:25:06.376 [2024-10-11T10:02:09.079Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:06.376 [2024-10-11T10:02:09.079Z] =================================================================================================================== 00:25:06.376 [2024-10-11T10:02:09.079Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:06.376 12:02:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:25:06.376 12:02:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:25:06.376 12:02:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:25:06.376 12:02:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2041419 00:25:06.376 12:02:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2041419 /var/tmp/bdevperf.sock 00:25:06.376 12:02:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:25:06.376 12:02:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2041419 ']' 00:25:06.376 12:02:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:06.376 12:02:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:06.376 12:02:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:06.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
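For reference, the IOPS and MiB/s columns in these bdevperf summaries are tied together by the fixed 4096-byte I/O size used throughout (-o 4096 / "IO size: 4096"): MiB/s = IOPS * 4096 / 2^20. Checking the 15 s run above with an illustrative one-liner:
  echo 'scale=2; 12375.83 * 4096 / 1048576' | bc   # 48.34, matching the reported 48.34 MiB/s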
00:25:06.376 12:02:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:06.376 12:02:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:06.947 12:02:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:06.947 12:02:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:25:06.947 12:02:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:06.947 [2024-10-11 12:02:09.592947] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:06.947 12:02:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:07.208 [2024-10-11 12:02:09.769357] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:07.208 12:02:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:07.468 NVMe0n1 00:25:07.468 12:02:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:07.729 00:25:07.729 12:02:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:07.990 00:25:07.990 12:02:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:07.990 12:02:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:25:08.250 12:02:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:08.511 12:02:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:25:11.811 12:02:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:11.811 12:02:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:25:11.811 12:02:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2042636 00:25:11.811 12:02:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:11.811 12:02:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2042636 00:25:12.755 { 00:25:12.755 "results": [ 00:25:12.755 { 00:25:12.755 "job": "NVMe0n1", 00:25:12.755 "core_mask": "0x1", 
00:25:12.755 "workload": "verify", 00:25:12.755 "status": "finished", 00:25:12.755 "verify_range": { 00:25:12.755 "start": 0, 00:25:12.755 "length": 16384 00:25:12.755 }, 00:25:12.755 "queue_depth": 128, 00:25:12.755 "io_size": 4096, 00:25:12.755 "runtime": 1.012181, 00:25:12.755 "iops": 12834.660994426886, 00:25:12.755 "mibps": 50.13539450948002, 00:25:12.755 "io_failed": 0, 00:25:12.755 "io_timeout": 0, 00:25:12.755 "avg_latency_us": 9937.444298360404, 00:25:12.755 "min_latency_us": 1788.5866666666666, 00:25:12.755 "max_latency_us": 8519.68 00:25:12.755 } 00:25:12.755 ], 00:25:12.755 "core_count": 1 00:25:12.755 } 00:25:12.755 12:02:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:12.755 [2024-10-11 12:02:08.643250] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:25:12.755 [2024-10-11 12:02:08.643311] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2041419 ] 00:25:12.755 [2024-10-11 12:02:08.722197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:12.755 [2024-10-11 12:02:08.751253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:12.755 [2024-10-11 12:02:10.985720] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:12.755 [2024-10-11 12:02:10.985758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.755 [2024-10-11 12:02:10.985767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.755 [2024-10-11 12:02:10.985774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.755 [2024-10-11 12:02:10.985780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.755 [2024-10-11 12:02:10.985786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.755 [2024-10-11 12:02:10.985791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.755 [2024-10-11 12:02:10.985797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.755 [2024-10-11 12:02:10.985802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.755 [2024-10-11 12:02:10.985807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:12.755 [2024-10-11 12:02:10.985830] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:12.755 [2024-10-11 12:02:10.985841] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1355280 (9): Bad file descriptor 00:25:12.755 [2024-10-11 12:02:10.995576] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:12.755 Running I/O for 1 seconds... 00:25:12.755 12759.00 IOPS, 49.84 MiB/s 00:25:12.755 Latency(us) 00:25:12.755 [2024-10-11T10:02:15.458Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:12.755 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:12.755 Verification LBA range: start 0x0 length 0x4000 00:25:12.755 NVMe0n1 : 1.01 12834.66 50.14 0.00 0.00 9937.44 1788.59 8519.68 00:25:12.755 [2024-10-11T10:02:15.458Z] =================================================================================================================== 00:25:12.755 [2024-10-11T10:02:15.458Z] Total : 12834.66 50.14 0.00 0.00 9937.44 1788.59 8519.68 00:25:12.755 12:02:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:12.755 12:02:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:25:13.016 12:02:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:13.016 12:02:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:13.016 12:02:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:25:13.276 12:02:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:13.536 12:02:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:25:16.834 12:02:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:16.834 12:02:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:25:16.834 12:02:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2041419 00:25:16.834 12:02:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2041419 ']' 00:25:16.834 12:02:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2041419 00:25:16.834 12:02:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:25:16.834 12:02:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:16.834 12:02:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2041419 00:25:16.834 12:02:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:16.834 12:02:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:16.835 12:02:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2041419' 00:25:16.835 killing process with pid 2041419 00:25:16.835 12:02:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2041419 00:25:16.835 12:02:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2041419 00:25:16.835 12:02:19 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:25:16.835 12:02:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:17.095 12:02:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:17.095 12:02:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:17.095 12:02:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:17.095 12:02:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:17.095 12:02:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:25:17.095 12:02:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:17.095 12:02:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:25:17.095 12:02:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:17.095 12:02:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:17.095 rmmod nvme_tcp 00:25:17.095 rmmod nvme_fabrics 00:25:17.095 rmmod nvme_keyring 00:25:17.095 12:02:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:17.095 12:02:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:25:17.095 12:02:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:25:17.095 12:02:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@515 -- # '[' -n 2037699 ']' 00:25:17.095 12:02:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # killprocess 2037699 00:25:17.095 12:02:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2037699 ']' 00:25:17.095 12:02:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2037699 00:25:17.095 12:02:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:25:17.095 12:02:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:17.095 12:02:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2037699 00:25:17.095 12:02:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:17.095 12:02:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:17.095 12:02:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2037699' 00:25:17.095 killing process with pid 2037699 00:25:17.095 12:02:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2037699 00:25:17.095 12:02:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2037699 00:25:17.357 12:02:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:17.357 12:02:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:17.357 12:02:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:17.357 12:02:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:25:17.357 12:02:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-save 00:25:17.357 12:02:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # grep -v 
SPDK_NVMF 00:25:17.357 12:02:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-restore 00:25:17.357 12:02:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:17.357 12:02:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:17.357 12:02:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:17.357 12:02:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:17.357 12:02:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:19.336 12:02:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:19.336 00:25:19.336 real 0m40.553s 00:25:19.336 user 2m3.890s 00:25:19.336 sys 0m9.071s 00:25:19.336 12:02:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:19.336 12:02:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:19.336 ************************************ 00:25:19.336 END TEST nvmf_failover 00:25:19.336 ************************************ 00:25:19.336 12:02:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:19.336 12:02:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:19.336 12:02:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:19.336 12:02:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.336 ************************************ 00:25:19.336 START TEST nvmf_host_discovery 00:25:19.336 ************************************ 00:25:19.336 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:19.603 * Looking for test storage... 
00:25:19.603 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:19.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.603 --rc genhtml_branch_coverage=1 00:25:19.603 --rc genhtml_function_coverage=1 00:25:19.603 --rc genhtml_legend=1 00:25:19.603 --rc geninfo_all_blocks=1 00:25:19.603 --rc geninfo_unexecuted_blocks=1 00:25:19.603 00:25:19.603 ' 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:19.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.603 --rc genhtml_branch_coverage=1 00:25:19.603 --rc genhtml_function_coverage=1 00:25:19.603 --rc genhtml_legend=1 00:25:19.603 --rc geninfo_all_blocks=1 00:25:19.603 --rc geninfo_unexecuted_blocks=1 00:25:19.603 00:25:19.603 ' 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:19.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.603 --rc genhtml_branch_coverage=1 00:25:19.603 --rc genhtml_function_coverage=1 00:25:19.603 --rc genhtml_legend=1 00:25:19.603 --rc geninfo_all_blocks=1 00:25:19.603 --rc geninfo_unexecuted_blocks=1 00:25:19.603 00:25:19.603 ' 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:19.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.603 --rc genhtml_branch_coverage=1 00:25:19.603 --rc genhtml_function_coverage=1 00:25:19.603 --rc genhtml_legend=1 00:25:19.603 --rc geninfo_all_blocks=1 00:25:19.603 --rc geninfo_unexecuted_blocks=1 00:25:19.603 00:25:19.603 ' 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:19.603 12:02:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.603 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.604 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:19.604 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.604 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:25:19.604 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:19.604 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:19.604 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:19.604 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:19.604 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:19.604 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:19.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:19.604 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:19.604 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:19.604 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:19.604 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:19.604 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:19.604 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:19.604 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:19.604 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:19.604 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:19.604 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:19.604 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:19.604 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:19.604 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:19.604 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:19.604 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:19.604 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:19.604 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:19.604 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:19.604 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:19.604 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:19.604 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:25:19.604 12:02:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:27.739 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:27.739 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:27.739 12:02:29 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:27.739 Found net devices under 0000:31:00.0: cvl_0_0 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:27.739 Found net devices under 0000:31:00.1: cvl_0_1 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:27.739 
12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:27.739 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:27.739 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:27.740 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.604 ms 00:25:27.740 00:25:27.740 --- 10.0.0.2 ping statistics --- 00:25:27.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:27.740 rtt min/avg/max/mdev = 0.604/0.604/0.604/0.000 ms 00:25:27.740 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:27.740 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:27.740 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:25:27.740 00:25:27.740 --- 10.0.0.1 ping statistics --- 00:25:27.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:27.740 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:25:27.740 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:27.740 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # return 0 00:25:27.740 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:27.740 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:27.740 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:27.740 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:27.740 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:27.740 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:27.740 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:27.740 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:27.740 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:27.740 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:27.740 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.740 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # nvmfpid=2047933 00:25:27.740 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # waitforlisten 2047933 00:25:27.740 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:27.740 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 2047933 ']' 00:25:27.740 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:27.740 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:27.740 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:27.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:27.740 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:27.740 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.740 [2024-10-11 12:02:29.793930] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
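The bring-up recorded above splits the dual-port NIC across network namespaces so the NVMe/TCP traffic really crosses between two interfaces: cvl_0_0 (10.0.0.2, target side) is moved into the cvl_0_0_ns_spdk namespace while cvl_0_1 (10.0.0.1, initiator side) stays in the default namespace, TCP port 4420 is opened in iptables, and a ping in each direction confirms connectivity before the target app is started inside the namespace. Condensed from the commands in the log (interface names are specific to this rig, the relative nvmf_tgt path assumes a local build, and the iptables comment tag used by the harness is dropped here):
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # verify both directions, as the log does
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &    # target app runs inside the namespace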
00:25:27.740 [2024-10-11 12:02:29.793984] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:27.740 [2024-10-11 12:02:29.855469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:27.740 [2024-10-11 12:02:29.884457] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:27.740 [2024-10-11 12:02:29.884487] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:27.740 [2024-10-11 12:02:29.884494] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:27.740 [2024-10-11 12:02:29.884498] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:27.740 [2024-10-11 12:02:29.884502] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:27.740 [2024-10-11 12:02:29.885001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:27.740 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:27.740 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:25:27.740 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:27.740 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:27.740 12:02:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.740 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:27.740 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:27.740 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.740 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.740 [2024-10-11 12:02:30.018761] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:27.740 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.740 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:27.740 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.740 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.740 [2024-10-11 12:02:30.030923] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:27.740 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.740 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:27.740 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.740 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.740 null0 00:25:27.740 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.740 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:27.740 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.740 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.740 null1 00:25:27.740 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.740 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:27.740 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.740 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.740 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.740 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2048109 00:25:27.740 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2048109 /tmp/host.sock 00:25:27.740 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:27.740 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 2048109 ']' 00:25:27.740 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:25:27.740 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:27.740 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:27.740 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:27.740 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:27.740 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.740 [2024-10-11 12:02:30.122257] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
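As recorded above, the host side of this discovery test is simply a second nvmf_tgt instance pinned to core 0 with its own RPC socket (/tmp/host.sock), left in the default namespace so it reaches the target across the namespace split. A hand-run equivalent could look roughly like the following; the relative paths assume a local SPDK build, and the readiness loop is an assumption standing in for the harness's waitforlisten helper rather than a copy of it.
build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
hostpid=$!
# crude readiness check: poll the RPC socket until it answers, then drive the bdev_nvme RPCs against it
until scripts/rpc.py -s /tmp/host.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done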
00:25:27.740 [2024-10-11 12:02:30.122310] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2048109 ] 00:25:27.740 [2024-10-11 12:02:30.200294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:27.740 [2024-10-11 12:02:30.236812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:28.311 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:28.311 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:25:28.311 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:28.311 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:28.311 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.311 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.311 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.311 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:28.311 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.311 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.311 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.311 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:28.311 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:28.311 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:28.311 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:28.311 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.311 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:28.311 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.311 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:28.311 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.311 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:28.311 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:28.311 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:28.311 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:28.311 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.311 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:25:28.311 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.311 12:02:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:28.311 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.571 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:28.571 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:28.571 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.571 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.571 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.571 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:28.571 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:28.571 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.571 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.571 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:28.571 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:28.571 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:28.571 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.571 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:28.571 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:28.571 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:28.571 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.571 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.572 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:28.572 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:28.572 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:28.572 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.572 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:28.572 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:28.572 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.572 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.572 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.572 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:28.572 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:28.572 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:28.572 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.572 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:28.572 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.572 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:28.572 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.572 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:28.572 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:28.572 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:28.572 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:28.572 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.572 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:28.572 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.572 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:28.572 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.572 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:28.833 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:28.833 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.833 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.833 [2024-10-11 12:02:31.282046] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:28.833 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.833 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:28.833 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:28.833 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:28.833 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.833 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:28.833 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.833 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:28.833 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.833 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:28.833 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:28.833 12:02:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:28.833 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:28.833 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.833 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:28.833 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.833 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:28.833 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.833 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:28.833 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:28.833 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:28.833 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:28.833 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:28.833 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:28.833 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:28.833 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:28.833 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:28.833 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:28.833 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:28.833 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.833 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.833 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.833 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:28.833 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:28.833 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:28.833 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:28.833 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:28.833 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.833 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.833 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.833 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:28.833 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:28.833 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:28.833 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:28.833 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:28.833 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:28.833 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:28.833 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:28.833 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.833 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:28.833 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.833 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:28.833 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.833 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:25:28.833 12:02:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:25:29.404 [2024-10-11 12:02:31.998239] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:29.404 [2024-10-11 12:02:31.998262] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:29.404 [2024-10-11 12:02:31.998277] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:29.404 
[2024-10-11 12:02:32.085552] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:29.664 [2024-10-11 12:02:32.311371] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:29.664 [2024-10-11 12:02:32.311400] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:29.924 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:29.924 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:29.924 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:29.924 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:29.924 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:29.924 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.924 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:29.924 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.924 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:29.924 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.924 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.924 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:29.924 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:29.924 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:29.924 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:29.924 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:29.924 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:29.924 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:29.924 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:29.925 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.925 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.925 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:29.925 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:29.925 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:29.925 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.925 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
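The waitforcondition loops above keep polling the host app until the discovery attach reported in the bdev_nvme messages shows up as a controller named nvme0 and a namespace bdev nvme0n1. Issued by hand against the /tmp/host.sock instance, that host-side sequence condenses to roughly the lines below; the relative scripts/rpc.py path is an assumption about a local checkout, while the flags and jq filters are the ones shown in the log.
rpc="scripts/rpc.py -s /tmp/host.sock"
$rpc log_set_flag bdev_nvme
$rpc bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
$rpc bdev_nvme_get_controllers | jq -r '.[].name'        # "nvme0" once the attach completes
$rpc bdev_get_bdevs | jq -r '.[].name' | sort | xargs    # "nvme0n1" at this point; "nvme0n1 nvme0n2" after null1 is added later in the test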
00:25:29.925 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:29.925 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:29.925 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:29.925 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:29.925 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:29.925 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:29.925 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:29.925 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:29.925 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.925 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.925 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:29.925 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:29.925 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.186 [2024-10-11 12:02:32.813958] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:30.186 [2024-10-11 12:02:32.814088] bdev_nvme.c:7135:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:30.186 [2024-10-11 12:02:32.814118] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 
-- # get_subsystem_names 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:30.186 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:30.187 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:30.187 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:30.187 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.187 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:30.187 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.187 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:30.447 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.447 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:30.447 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:30.447 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:30.447 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:30.447 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:30.447 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:30.447 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval 
'[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:30.447 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:30.447 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:30.447 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:30.447 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.447 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:30.447 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.447 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:30.447 [2024-10-11 12:02:32.942928] bdev_nvme.c:7077:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:30.447 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.447 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:30.447 12:02:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:25:30.447 [2024-10-11 12:02:33.001755] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:30.447 [2024-10-11 12:02:33.001777] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:30.447 [2024-10-11 12:02:33.001783] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:31.387 12:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:31.387 12:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:31.387 12:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:31.387 12:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:31.387 12:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:31.387 12:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.387 12:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:31.387 12:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.387 12:02:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:31.387 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.387 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:31.387 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:31.387 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:31.387 12:02:34 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:31.387 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:31.387 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:31.387 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:31.387 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:31.387 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:31.387 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:31.387 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:31.387 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:31.387 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.387 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.387 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.387 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:31.387 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:31.387 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:31.387 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:31.387 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:31.387 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.387 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.387 [2024-10-11 12:02:34.069830] bdev_nvme.c:7135:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:31.387 [2024-10-11 12:02:34.069854] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:31.387 [2024-10-11 12:02:34.073181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:31.388 [2024-10-11 12:02:34.073200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.388 [2024-10-11 12:02:34.073210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:31.388 [2024-10-11 12:02:34.073223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.388 [2024-10-11 12:02:34.073231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
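For readers following the xtrace, the waitforcondition / is_notification_count_eq pattern that recurs throughout this test (common/autotest_common.sh@914-920) is a bounded polling loop: the condition string is re-evaluated with eval up to 10 times, one second apart. A minimal sketch reconstructed from the trace above; the failure branch after the loop is an assumption, since every wait in this log succeeds before the budget runs out:

    # Poll a shell condition until it holds or the retry budget is exhausted.
    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            # Callers pass conditions such as
            #   '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
            #   'get_notification_count && ((notification_count == expected_count))'
            eval "$cond" && return 0
            sleep 1
        done
        return 1    # assumed failure path, not exercised in this trace
    }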
00:25:31.388 [2024-10-11 12:02:34.073238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.388 [2024-10-11 12:02:34.073246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:31.388 [2024-10-11 12:02:34.073254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.388 [2024-10-11 12:02:34.073261] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1905290 is same with the state(6) to be set 00:25:31.388 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.388 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:31.388 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:31.388 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:31.388 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:31.388 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:31.388 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:31.388 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:31.388 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:31.388 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.388 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.388 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:31.388 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:31.388 [2024-10-11 12:02:34.083194] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1905290 (9): Bad file descriptor 00:25:31.649 [2024-10-11 12:02:34.093234] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:31.649 [2024-10-11 12:02:34.093572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.649 [2024-10-11 12:02:34.093587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1905290 with addr=10.0.0.2, port=4420 00:25:31.649 [2024-10-11 12:02:34.093596] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1905290 is same with the state(6) to be set 00:25:31.649 [2024-10-11 12:02:34.093608] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1905290 (9): Bad file descriptor 00:25:31.649 [2024-10-11 12:02:34.093620] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:31.649 [2024-10-11 12:02:34.093627] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:31.649 [2024-10-11 12:02:34.093635] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:31.649 [2024-10-11 12:02:34.093647] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:31.649 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.649 [2024-10-11 12:02:34.103294] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:31.649 [2024-10-11 12:02:34.103592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.649 [2024-10-11 12:02:34.103608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1905290 with addr=10.0.0.2, port=4420 00:25:31.649 [2024-10-11 12:02:34.103615] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1905290 is same with the state(6) to be set 00:25:31.649 [2024-10-11 12:02:34.103627] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1905290 (9): Bad file descriptor 00:25:31.649 [2024-10-11 12:02:34.103637] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:31.649 [2024-10-11 12:02:34.103643] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:31.649 [2024-10-11 12:02:34.103651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:31.649 [2024-10-11 12:02:34.103661] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:31.649 [2024-10-11 12:02:34.113344] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:31.649 [2024-10-11 12:02:34.113672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.649 [2024-10-11 12:02:34.113683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1905290 with addr=10.0.0.2, port=4420 00:25:31.649 [2024-10-11 12:02:34.113690] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1905290 is same with the state(6) to be set 00:25:31.649 [2024-10-11 12:02:34.113701] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1905290 (9): Bad file descriptor 00:25:31.649 [2024-10-11 12:02:34.113711] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:31.649 [2024-10-11 12:02:34.113718] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:31.649 [2024-10-11 12:02:34.113725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:31.649 [2024-10-11 12:02:34.113736] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
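The repeated connect() errno=111 and "Resetting controller failed" messages in this stretch are expected: host/discovery.sh@127 has just removed the 4420 listener from the target, so the host-side controller keeps retrying the dead path until the discovery poller prunes it. The step corresponds to RPCs of roughly this shape, with the addresses, ports, NQN and jq filter copied from the trace; the test itself issues them through its rpc_cmd wrapper rather than calling scripts/rpc.py directly:

    # Target side: withdraw the 4420 portal so only 4421 keeps serving.
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
    # Host side: the remaining paths for controller nvme0 should shrink to "4421".
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs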
00:25:31.649 [2024-10-11 12:02:34.123395] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:31.649 [2024-10-11 12:02:34.123619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.649 [2024-10-11 12:02:34.123633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1905290 with addr=10.0.0.2, port=4420 00:25:31.649 [2024-10-11 12:02:34.123640] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1905290 is same with the state(6) to be set 00:25:31.649 [2024-10-11 12:02:34.123651] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1905290 (9): Bad file descriptor 00:25:31.649 [2024-10-11 12:02:34.123662] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:31.649 [2024-10-11 12:02:34.123668] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:31.649 [2024-10-11 12:02:34.123676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:31.649 [2024-10-11 12:02:34.123687] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:31.649 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.649 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:31.649 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:31.649 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:31.649 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:31.649 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:31.649 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:31.649 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:31.649 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:31.649 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:31.649 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:31.649 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.649 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:31.649 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.649 [2024-10-11 12:02:34.133452] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:31.649 [2024-10-11 12:02:34.133751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.649 [2024-10-11 12:02:34.133766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1905290 with addr=10.0.0.2, port=4420 00:25:31.649 [2024-10-11 12:02:34.133777] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1905290 is same with the state(6) to be set 00:25:31.649 [2024-10-11 12:02:34.133794] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1905290 (9): Bad file descriptor 00:25:31.649 [2024-10-11 12:02:34.133809] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:31.649 [2024-10-11 12:02:34.133820] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:31.649 [2024-10-11 12:02:34.133830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:31.649 [2024-10-11 12:02:34.133843] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:31.649 [2024-10-11 12:02:34.143506] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:31.649 [2024-10-11 12:02:34.143717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.649 [2024-10-11 12:02:34.143730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1905290 with addr=10.0.0.2, port=4420 00:25:31.649 [2024-10-11 12:02:34.143737] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1905290 is same with the state(6) to be set 00:25:31.649 [2024-10-11 12:02:34.143748] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1905290 (9): Bad file descriptor 00:25:31.649 [2024-10-11 12:02:34.143766] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:31.649 [2024-10-11 12:02:34.143774] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:31.649 [2024-10-11 12:02:34.143781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:31.649 [2024-10-11 12:02:34.143791] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:31.649 [2024-10-11 12:02:34.153560] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:31.649 [2024-10-11 12:02:34.153859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.649 [2024-10-11 12:02:34.153871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1905290 with addr=10.0.0.2, port=4420 00:25:31.649 [2024-10-11 12:02:34.153878] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1905290 is same with the state(6) to be set 00:25:31.649 [2024-10-11 12:02:34.153889] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1905290 (9): Bad file descriptor 00:25:31.649 [2024-10-11 12:02:34.153906] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:31.649 [2024-10-11 12:02:34.153917] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:31.649 [2024-10-11 12:02:34.153924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:31.649 [2024-10-11 12:02:34.153935] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
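The two helpers this stretch keeps polling are thin jq pipelines over RPCs sent to the host application's socket; reconstructed from the xtrace at host/discovery.sh@55 and @63 (the real functions in host/discovery.sh may differ in minor details):

    # Bdev names visible on the host as one sorted line, e.g. "nvme0n1 nvme0n2".
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # Service IDs of every path attached to one controller; "4420 4421"
    # collapses to "4421" once the 4420 listener removed above is pruned.
    get_subsystem_paths() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }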
00:25:31.649 [2024-10-11 12:02:34.158924] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:31.649 [2024-10-11 12:02:34.158942] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:31.649 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.649 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:31.649 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:31.649 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:31.649 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:31.649 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:31.649 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:31.649 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:31.649 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:31.650 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:31.650 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.650 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.650 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:31.650 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:31.650 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:31.650 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.650 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:25:31.650 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:31.650 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:31.650 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:31.650 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:31.650 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:31.650 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:31.650 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:31.650 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:25:31.650 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:31.650 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:31.650 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:31.650 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.650 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.650 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.650 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:31.650 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:31.650 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:31.650 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:31.650 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:31.650 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.650 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.650 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.650 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:31.650 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:31.650 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:31.650 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:31.650 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:31.650 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:31.650 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:31.650 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:31.650 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:31.650 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.650 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:31.650 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.650 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.650 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:25:31.650 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:31.650 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:31.650 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:31.650 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:31.650 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:31.650 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:31.650 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:31.910 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:31.910 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:31.910 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.910 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:31.910 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.910 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:31.910 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.910 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:25:31.910 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:31.910 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:31.910 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:31.910 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:31.910 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:31.910 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:31.910 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:31.910 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:31.910 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:31.910 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:31.910 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:31.910 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.910 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.910 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.910 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:31.910 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:31.910 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:31.910 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:31.910 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:31.910 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.910 12:02:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.851 [2024-10-11 12:02:35.466412] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:32.851 [2024-10-11 12:02:35.466426] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:32.851 [2024-10-11 12:02:35.466435] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:32.851 [2024-10-11 12:02:35.554693] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:33.111 [2024-10-11 12:02:35.661333] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:33.111 [2024-10-11 12:02:35.661357] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:33.111 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.111 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:33.111 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:33.111 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:33.111 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:33.111 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:33.111 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:33.111 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:33.111 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:25:33.111 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.111 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.111 request: 00:25:33.111 { 00:25:33.111 "name": "nvme", 00:25:33.111 "trtype": "tcp", 00:25:33.111 "traddr": "10.0.0.2", 00:25:33.111 "adrfam": "ipv4", 00:25:33.111 "trsvcid": "8009", 00:25:33.111 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:33.111 "wait_for_attach": true, 00:25:33.111 "method": "bdev_nvme_start_discovery", 00:25:33.111 "req_id": 1 00:25:33.111 } 00:25:33.111 Got JSON-RPC error response 00:25:33.111 response: 00:25:33.111 { 00:25:33.111 "code": -17, 00:25:33.111 "message": "File exists" 00:25:33.111 } 00:25:33.111 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:33.111 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:33.111 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:33.111 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:33.111 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:33.111 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:33.111 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:33.111 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:33.111 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.111 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:33.111 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.111 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:33.111 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.111 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:33.111 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:33.111 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:33.111 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:33.111 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.111 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:33.111 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.111 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:33.111 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.111 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:33.111 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:25:33.111 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:33.111 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:33.111 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:33.111 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:33.111 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:33.111 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:33.111 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:33.111 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.112 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.112 request: 00:25:33.112 { 00:25:33.112 "name": "nvme_second", 00:25:33.112 "trtype": "tcp", 00:25:33.112 "traddr": "10.0.0.2", 00:25:33.112 "adrfam": "ipv4", 00:25:33.112 "trsvcid": "8009", 00:25:33.112 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:33.112 "wait_for_attach": true, 00:25:33.112 "method": "bdev_nvme_start_discovery", 00:25:33.112 "req_id": 1 00:25:33.112 } 00:25:33.112 Got JSON-RPC error response 00:25:33.112 response: 00:25:33.112 { 00:25:33.112 "code": -17, 00:25:33.112 "message": "File exists" 00:25:33.112 } 00:25:33.112 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:33.112 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:33.112 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:33.112 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:33.112 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:33.112 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:33.112 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:33.112 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:33.112 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:33.112 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.112 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.112 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:33.376 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.376 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:33.376 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:33.376 12:02:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:33.376 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:33.376 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.376 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:33.376 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.376 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:33.376 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.376 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:33.376 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:33.376 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:33.376 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:33.376 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:33.376 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:33.376 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:33.376 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:33.376 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:33.376 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.376 12:02:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.318 [2024-10-11 12:02:36.921285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.318 [2024-10-11 12:02:36.921308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1904f90 with addr=10.0.0.2, port=8010 00:25:34.318 [2024-10-11 12:02:36.921317] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:34.318 [2024-10-11 12:02:36.921322] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:34.318 [2024-10-11 12:02:36.921328] bdev_nvme.c:7221:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:35.259 [2024-10-11 12:02:37.923634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.259 [2024-10-11 12:02:37.923653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1904f90 with addr=10.0.0.2, port=8010 00:25:35.259 [2024-10-11 12:02:37.923661] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:35.259 [2024-10-11 12:02:37.923666] nvme.c: 831:nvme_probe_internal: 
*ERROR*: NVMe ctrlr scan failed 00:25:35.259 [2024-10-11 12:02:37.923671] bdev_nvme.c:7221:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:36.645 [2024-10-11 12:02:38.925640] bdev_nvme.c:7196:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:36.645 request: 00:25:36.645 { 00:25:36.645 "name": "nvme_second", 00:25:36.645 "trtype": "tcp", 00:25:36.645 "traddr": "10.0.0.2", 00:25:36.645 "adrfam": "ipv4", 00:25:36.645 "trsvcid": "8010", 00:25:36.645 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:36.645 "wait_for_attach": false, 00:25:36.645 "attach_timeout_ms": 3000, 00:25:36.645 "method": "bdev_nvme_start_discovery", 00:25:36.645 "req_id": 1 00:25:36.645 } 00:25:36.645 Got JSON-RPC error response 00:25:36.645 response: 00:25:36.645 { 00:25:36.645 "code": -110, 00:25:36.645 "message": "Connection timed out" 00:25:36.645 } 00:25:36.645 12:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:36.645 12:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:36.645 12:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:36.645 12:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:36.645 12:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:36.645 12:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:36.645 12:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:36.645 12:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:36.645 12:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.645 12:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:36.645 12:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.645 12:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:36.645 12:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.645 12:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:36.646 12:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:36.646 12:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2048109 00:25:36.646 12:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:36.646 12:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:36.646 12:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:25:36.646 12:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:36.646 12:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:25:36.646 12:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:36.646 12:02:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:36.646 rmmod nvme_tcp 00:25:36.646 rmmod nvme_fabrics 00:25:36.646 rmmod nvme_keyring 00:25:36.646 12:02:39 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:36.646 12:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:25:36.646 12:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:25:36.646 12:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@515 -- # '[' -n 2047933 ']' 00:25:36.646 12:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # killprocess 2047933 00:25:36.646 12:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 2047933 ']' 00:25:36.646 12:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 2047933 00:25:36.646 12:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:25:36.646 12:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:36.646 12:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2047933 00:25:36.646 12:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:36.646 12:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:36.646 12:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2047933' 00:25:36.646 killing process with pid 2047933 00:25:36.646 12:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 2047933 00:25:36.646 12:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 2047933 00:25:36.646 12:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:36.646 12:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:36.646 12:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:36.646 12:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:25:36.646 12:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-save 00:25:36.646 12:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:36.646 12:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:25:36.646 12:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:36.646 12:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:36.646 12:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:36.646 12:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:36.646 12:02:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:39.192 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:39.192 00:25:39.192 real 0m19.266s 00:25:39.192 user 0m22.165s 00:25:39.192 sys 0m6.994s 00:25:39.192 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:39.192 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.192 
************************************ 00:25:39.192 END TEST nvmf_host_discovery 00:25:39.192 ************************************ 00:25:39.192 12:02:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:39.192 12:02:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:39.192 12:02:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:39.192 12:02:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.192 ************************************ 00:25:39.192 START TEST nvmf_host_multipath_status 00:25:39.192 ************************************ 00:25:39.192 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:39.192 * Looking for test storage... 00:25:39.192 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:39.192 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:39.192 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:25:39.192 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:39.192 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:39.192 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:39.192 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:39.192 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:39.192 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:25:39.192 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:25:39.192 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:25:39.192 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:25:39.192 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:25:39.192 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:25:39.192 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:25:39.192 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:39.192 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:25:39.192 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:25:39.192 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:39.192 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:39.192 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:25:39.192 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:25:39.192 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:39.192 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:25:39.192 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:25:39.192 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:25:39.192 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:25:39.192 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:39.192 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:25:39.192 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:25:39.192 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:39.192 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:39.192 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:25:39.192 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:39.192 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:39.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:39.192 --rc genhtml_branch_coverage=1 00:25:39.192 --rc genhtml_function_coverage=1 00:25:39.192 --rc genhtml_legend=1 00:25:39.192 --rc geninfo_all_blocks=1 00:25:39.192 --rc geninfo_unexecuted_blocks=1 00:25:39.192 00:25:39.192 ' 00:25:39.192 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:39.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:39.192 --rc genhtml_branch_coverage=1 00:25:39.192 --rc genhtml_function_coverage=1 00:25:39.192 --rc genhtml_legend=1 00:25:39.192 --rc geninfo_all_blocks=1 00:25:39.192 --rc geninfo_unexecuted_blocks=1 00:25:39.192 00:25:39.192 ' 00:25:39.192 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:39.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:39.192 --rc genhtml_branch_coverage=1 00:25:39.192 --rc genhtml_function_coverage=1 00:25:39.192 --rc genhtml_legend=1 00:25:39.192 --rc geninfo_all_blocks=1 00:25:39.192 --rc geninfo_unexecuted_blocks=1 00:25:39.192 00:25:39.192 ' 00:25:39.192 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:39.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:39.192 --rc genhtml_branch_coverage=1 00:25:39.192 --rc genhtml_function_coverage=1 00:25:39.192 --rc genhtml_legend=1 00:25:39.192 --rc geninfo_all_blocks=1 00:25:39.193 --rc geninfo_unexecuted_blocks=1 00:25:39.193 00:25:39.193 ' 00:25:39.193 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:25:39.193 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:39.193 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:39.193 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:39.193 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:39.193 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:39.193 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:39.193 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:39.193 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:39.193 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:39.193 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:39.193 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:39.193 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:39.193 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:39.193 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:39.193 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:39.193 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:39.193 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:39.193 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:39.193 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:25:39.193 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:39.193 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:39.193 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:39.193 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.193 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.193 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.193 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:39.193 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.193 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:25:39.193 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:39.193 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:39.193 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:39.193 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:39.193 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:39.193 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:39.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:39.193 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:39.193 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:39.193 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:39.193 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:39.193 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:39.193 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:39.193 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:39.193 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:39.193 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:39.193 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:39.193 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:39.193 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:39.193 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:39.193 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:39.193 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:39.193 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:39.193 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:39.193 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:39.193 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:39.193 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:39.193 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:25:39.193 12:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:47.337 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:47.337 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:25:47.337 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:47.337 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:47.337 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:47.337 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:47.337 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:47.337 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:25:47.337 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:47.337 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:25:47.337 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:25:47.337 12:02:48 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:25:47.337 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:25:47.337 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:25:47.337 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:25:47.337 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:47.337 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:47.337 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:47.337 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:47.337 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:47.338 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
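The scan running here walks the supported PCI functions (the E810/X722/Mellanox device-ID tables above) and resolves each one to its kernel net device by globbing /sys, which is how cvl_0_0 and cvl_0_1 are found below. A minimal standalone sketch of that lookup, assuming the 0000:31:00.0 address seen in this run; the loop itself is illustrative, not the harness code:

# Map a PCI function to the net device(s) the kernel bound to it, mirroring
# the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) step in the trace.
pci=0000:31:00.0                        # address taken from this log
for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
    [ -e "$netdev" ] || continue        # no net device bound to this function
    echo "Found net device under $pci: $(basename "$netdev")"
done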
00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:47.338 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:47.338 Found net devices under 0000:31:00.0: cvl_0_0 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: 
cvl_0_1' 00:25:47.338 Found net devices under 0000:31:00.1: cvl_0_1 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # is_hw=yes 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:47.338 12:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:47.338 12:02:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:47.338 12:02:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:47.338 12:02:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:47.338 12:02:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:47.338 12:02:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:47.338 12:02:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:47.338 12:02:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:47.338 12:02:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:47.338 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:47.338 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.508 ms 00:25:47.338 00:25:47.338 --- 10.0.0.2 ping statistics --- 00:25:47.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:47.338 rtt min/avg/max/mdev = 0.508/0.508/0.508/0.000 ms 00:25:47.338 12:02:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:47.338 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:47.338 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:25:47.338 00:25:47.338 --- 10.0.0.1 ping statistics --- 00:25:47.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:47.338 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:25:47.338 12:02:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:47.338 12:02:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # return 0 00:25:47.338 12:02:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:47.338 12:02:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:47.338 12:02:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:47.338 12:02:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:47.338 12:02:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:47.338 12:02:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:47.338 12:02:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:47.338 12:02:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:47.338 12:02:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:47.338 12:02:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:47.338 12:02:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:47.338 12:02:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # nvmfpid=2054232 00:25:47.338 12:02:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # waitforlisten 2054232 00:25:47.338 12:02:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:47.338 12:02:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 2054232 ']' 00:25:47.338 12:02:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:47.338 12:02:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:47.338 12:02:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:47.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:47.339 12:02:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:47.339 12:02:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:47.339 [2024-10-11 12:02:49.368494] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:25:47.339 [2024-10-11 12:02:49.368558] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:47.339 [2024-10-11 12:02:49.460680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:47.339 [2024-10-11 12:02:49.512541] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:47.339 [2024-10-11 12:02:49.512594] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:47.339 [2024-10-11 12:02:49.512602] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:47.339 [2024-10-11 12:02:49.512610] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:47.339 [2024-10-11 12:02:49.512617] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:47.339 [2024-10-11 12:02:49.514430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:47.339 [2024-10-11 12:02:49.514433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:47.601 12:02:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:47.601 12:02:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:25:47.601 12:02:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:47.601 12:02:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:47.601 12:02:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:47.601 12:02:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:47.601 12:02:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2054232 00:25:47.601 12:02:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:47.862 [2024-10-11 12:02:50.413926] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:47.862 12:02:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:48.123 Malloc0 00:25:48.123 12:02:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:25:48.385 12:02:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:48.385 12:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:48.646 [2024-10-11 12:02:51.230233] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:48.646 12:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:48.907 [2024-10-11 12:02:51.434750] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:48.907 12:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2054740 00:25:48.907 12:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:48.907 12:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:48.907 12:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2054740 /var/tmp/bdevperf.sock 00:25:48.907 12:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 2054740 ']' 00:25:48.907 12:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:48.907 12:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:48.907 12:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:48.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
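Up to this point multipath_status.sh has built the target side and launched the host-side bdevperf. Condensed, the sequence amounts to the sketch below; the commands and flags are taken from the trace, while the $rpc shorthand, line continuations, and comments are added for readability and some flag meanings are best-effort readings:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192                  # TCP transport, flags as captured above
$rpc bdev_malloc_create 64 512 -b Malloc0                     # 64 MB malloc bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2   # -a: any host, -r: ANA reporting
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

# Host side: bdevperf in wait-for-RPC mode on its own socket (the script backgrounds it).
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &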
00:25:48.907 12:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:48.907 12:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:49.849 12:02:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:49.849 12:02:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:25:49.849 12:02:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:49.849 12:02:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:50.420 Nvme0n1 00:25:50.420 12:02:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:50.680 Nvme0n1 00:25:50.680 12:02:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:50.680 12:02:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:53.221 12:02:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:53.221 12:02:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:53.221 12:02:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:53.221 12:02:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:54.160 12:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:54.160 12:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:54.160 12:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.160 12:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:54.160 12:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:54.160 12:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:54.160 12:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:54.160 12:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.421 12:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:54.421 12:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:54.421 12:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.421 12:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:54.681 12:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:54.681 12:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:54.681 12:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.681 12:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:54.941 12:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:54.941 12:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:54.941 12:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.941 12:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:54.941 12:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:54.941 12:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:54.941 12:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:54.941 12:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:55.202 12:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:55.202 12:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:25:55.202 12:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
00:25:55.462 12:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:55.462 12:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:56.844 12:02:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:56.844 12:02:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:56.844 12:02:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.844 12:02:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:56.844 12:02:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:56.844 12:02:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:56.844 12:02:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.845 12:02:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:56.845 12:02:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:56.845 12:02:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:56.845 12:02:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.845 12:02:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:57.105 12:02:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:57.105 12:02:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:57.105 12:02:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:57.105 12:02:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:57.365 12:02:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:57.365 12:02:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:57.366 12:02:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
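Each check_status round traced here follows the same pattern: flip the ANA state of one or both listeners on the target, wait a second for the initiator to observe the change, then read the per-path flags back through bdevperf's RPC socket. A minimal sketch of one round, using only calls that appear in this log; the jq filter shown selects the 'current' flag, and the test inspects 'connected' and 'accessible' the same way:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Target side: make the 4420 listener non-optimized, keep 4421 optimized.
$rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
$rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4421 -n optimized
sleep 1        # give the multipath initiator time to pick up the ANA change

# Host side: query bdevperf for its I/O paths and pull out one flag per port.
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
    | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'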
00:25:57.366 12:02:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:57.366 12:03:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:57.366 12:03:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:57.366 12:03:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:57.366 12:03:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:57.626 12:03:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:57.626 12:03:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:57.626 12:03:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:57.886 12:03:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:57.886 12:03:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:25:59.271 12:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:25:59.271 12:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:59.271 12:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.271 12:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:59.271 12:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:59.271 12:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:59.271 12:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.272 12:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:59.272 12:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:59.272 12:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:59.272 12:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4420").connected' 00:25:59.272 12:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.538 12:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:59.538 12:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:59.538 12:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.538 12:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:59.801 12:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:59.801 12:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:59.801 12:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.801 12:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:59.801 12:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:59.801 12:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:59.801 12:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.801 12:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:00.061 12:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:00.061 12:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:00.061 12:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:00.321 12:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:00.581 12:03:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:01.521 12:03:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:01.521 12:03:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:01.521 12:03:04 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:01.521 12:03:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:01.521 12:03:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:01.521 12:03:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:01.521 12:03:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:01.521 12:03:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:01.781 12:03:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:01.781 12:03:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:01.781 12:03:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:01.781 12:03:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:02.042 12:03:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:02.042 12:03:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:02.042 12:03:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.042 12:03:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:02.302 12:03:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:02.302 12:03:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:02.302 12:03:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.302 12:03:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:02.302 12:03:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:02.302 12:03:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:02.302 12:03:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.302 12:03:04 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:02.562 12:03:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:02.562 12:03:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:02.562 12:03:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:02.822 12:03:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:02.822 12:03:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:03.760 12:03:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:03.760 12:03:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:03.760 12:03:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.760 12:03:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:04.019 12:03:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:04.019 12:03:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:04.019 12:03:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.020 12:03:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:04.280 12:03:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:04.280 12:03:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:04.280 12:03:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.280 12:03:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:04.280 12:03:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:04.280 12:03:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:04.280 12:03:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.280 12:03:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:04.539 12:03:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:04.539 12:03:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:04.539 12:03:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.539 12:03:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:04.799 12:03:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:04.799 12:03:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:04.799 12:03:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.799 12:03:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:04.799 12:03:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:04.799 12:03:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:04.799 12:03:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:05.060 12:03:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:05.319 12:03:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:06.259 12:03:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:06.259 12:03:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:06.259 12:03:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:06.259 12:03:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.519 12:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:06.519 12:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:06.519 12:03:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.519 12:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:06.780 12:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:06.780 12:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:06.780 12:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.780 12:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:06.780 12:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:06.780 12:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:06.780 12:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.780 12:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:07.040 12:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.040 12:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:07.040 12:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.040 12:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:07.301 12:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:07.301 12:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:07.301 12:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.301 12:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:07.301 12:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.301 12:03:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:07.561 12:03:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:26:07.561 12:03:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:07.821 12:03:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:07.821 12:03:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:09.246 12:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:09.246 12:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:09.246 12:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.246 12:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:09.246 12:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:09.246 12:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:09.246 12:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.246 12:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:09.246 12:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:09.246 12:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:09.246 12:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.246 12:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:09.544 12:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:09.544 12:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:09.544 12:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.544 12:03:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:09.544 12:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:09.544 12:03:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:09.544 12:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.544 12:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:09.821 12:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:09.821 12:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:09.821 12:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.821 12:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:10.082 12:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.082 12:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:10.082 12:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:10.082 12:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:10.343 12:03:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:11.282 12:03:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:11.282 12:03:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:11.282 12:03:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.282 12:03:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:11.543 12:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:11.543 12:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:11.543 12:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.543 12:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:11.803 12:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.803 12:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:11.803 12:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.803 12:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:11.803 12:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.803 12:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:11.803 12:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.803 12:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:12.064 12:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:12.064 12:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:12.064 12:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.064 12:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:12.325 12:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:12.325 12:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:12.325 12:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.325 12:03:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:12.325 12:03:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:12.325 12:03:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:12.325 12:03:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:12.585 12:03:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:12.845 12:03:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
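The helpers exercised in the trace above reduce to roughly the following sketch, reconstructed from the xtrace output itself (host/multipath_status.sh markers @59-@73). The rpc.py path, the bdevperf socket, the subsystem NQN, and the jq filter are taken verbatim from the log; the function argument names and the exact variable handling are inferred from the calls shown (e.g. "port_status 4420 current false") and may differ from the real test script.

# Sketch of the multipath_status.sh helpers seen in the trace (inferred, not verbatim).
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bdevperf_rpc_sock=/var/tmp/bdevperf.sock

port_status() {            # port_status <trsvcid> <field> <expected>
    local port=$1 attr=$2 expected=$3
    # Query the io_paths seen by bdevperf and compare one field of the listener on $port.
    [[ "$($rpc_py -s $bdevperf_rpc_sock bdev_nvme_get_io_paths |
          jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$attr")" == "$expected" ]]
}

check_status() {           # args: 4420-current 4421-current 4420-connected 4421-connected 4420-accessible 4421-accessible
    port_status 4420 current "$1"
    port_status 4421 current "$2"
    port_status 4420 connected "$3"
    port_status 4421 connected "$4"
    port_status 4420 accessible "$5"
    port_status 4421 accessible "$6"
}

set_ANA_state() {          # set_ANA_state <state for 4420> <state for 4421>
    $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

Each set_ANA_state call in the trace is followed by "sleep 1" so the initiator has time to process the ANA change before check_status verifies the current/connected/accessible flags that bdev_nvme_get_io_paths reports for the two listeners.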
00:26:13.792 12:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:13.792 12:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:13.792 12:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.792 12:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:14.052 12:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.052 12:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:14.052 12:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.052 12:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:14.313 12:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.313 12:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:14.313 12:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.313 12:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:14.313 12:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.313 12:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:14.313 12:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.313 12:03:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:14.574 12:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.574 12:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:14.574 12:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.574 12:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:14.835 12:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.835 12:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:14.835 12:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.835 12:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:14.835 12:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.835 12:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:14.835 12:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:15.095 12:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:15.358 12:03:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:16.298 12:03:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:16.298 12:03:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:16.298 12:03:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.298 12:03:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:16.298 12:03:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:16.298 12:03:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:16.298 12:03:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.298 12:03:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:16.557 12:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:16.557 12:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:16.557 12:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.557 12:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:16.817 12:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:26:16.817 12:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:16.817 12:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.817 12:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:17.078 12:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:17.078 12:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:17.078 12:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.078 12:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:17.078 12:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:17.078 12:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:17.078 12:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.078 12:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:17.338 12:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:17.338 12:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2054740 00:26:17.338 12:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 2054740 ']' 00:26:17.338 12:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 2054740 00:26:17.338 12:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:26:17.338 12:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:17.338 12:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2054740 00:26:17.338 12:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:26:17.338 12:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:26:17.338 12:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2054740' 00:26:17.338 killing process with pid 2054740 00:26:17.338 12:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 2054740 00:26:17.338 12:03:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 2054740 00:26:17.338 { 00:26:17.338 "results": [ 00:26:17.338 { 00:26:17.338 "job": "Nvme0n1", 
00:26:17.338 "core_mask": "0x4", 00:26:17.338 "workload": "verify", 00:26:17.338 "status": "terminated", 00:26:17.338 "verify_range": { 00:26:17.338 "start": 0, 00:26:17.338 "length": 16384 00:26:17.338 }, 00:26:17.338 "queue_depth": 128, 00:26:17.338 "io_size": 4096, 00:26:17.338 "runtime": 26.542189, 00:26:17.338 "iops": 12053.188227994307, 00:26:17.338 "mibps": 47.08276651560276, 00:26:17.338 "io_failed": 0, 00:26:17.338 "io_timeout": 0, 00:26:17.338 "avg_latency_us": 10599.69261923368, 00:26:17.338 "min_latency_us": 873.8133333333334, 00:26:17.338 "max_latency_us": 3019898.88 00:26:17.338 } 00:26:17.338 ], 00:26:17.338 "core_count": 1 00:26:17.338 } 00:26:17.601 12:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2054740 00:26:17.601 12:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:17.601 [2024-10-11 12:02:51.524049] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:26:17.601 [2024-10-11 12:02:51.524152] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2054740 ] 00:26:17.601 [2024-10-11 12:02:51.607472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.601 [2024-10-11 12:02:51.657934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:17.601 Running I/O for 90 seconds... 00:26:17.601 10946.00 IOPS, 42.76 MiB/s [2024-10-11T10:03:20.304Z] 10987.00 IOPS, 42.92 MiB/s [2024-10-11T10:03:20.304Z] 11004.00 IOPS, 42.98 MiB/s [2024-10-11T10:03:20.304Z] 11336.50 IOPS, 44.28 MiB/s [2024-10-11T10:03:20.304Z] 11658.20 IOPS, 45.54 MiB/s [2024-10-11T10:03:20.304Z] 11879.50 IOPS, 46.40 MiB/s [2024-10-11T10:03:20.304Z] 12026.57 IOPS, 46.98 MiB/s [2024-10-11T10:03:20.304Z] 12148.00 IOPS, 47.45 MiB/s [2024-10-11T10:03:20.304Z] 12227.11 IOPS, 47.76 MiB/s [2024-10-11T10:03:20.304Z] 12286.80 IOPS, 48.00 MiB/s [2024-10-11T10:03:20.304Z] 12349.45 IOPS, 48.24 MiB/s [2024-10-11T10:03:20.304Z] [2024-10-11 12:03:05.281901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:128496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.601 [2024-10-11 12:03:05.281934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:17.601 [2024-10-11 12:03:05.281969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:128216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.601 [2024-10-11 12:03:05.281975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:17.601 [2024-10-11 12:03:05.281986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:128224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.601 [2024-10-11 12:03:05.281992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:17.601 [2024-10-11 12:03:05.282003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:128232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.601 [2024-10-11 12:03:05.282008] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.601 [2024-10-11 12:03:05.282019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:128240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.601 [2024-10-11 12:03:05.282024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.601 [2024-10-11 12:03:05.282034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:128248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.601 [2024-10-11 12:03:05.282040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:17.601 [2024-10-11 12:03:05.282050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:128504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.601 [2024-10-11 12:03:05.282056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:17.601 [2024-10-11 12:03:05.282071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:128512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.601 [2024-10-11 12:03:05.282076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:17.601 [2024-10-11 12:03:05.282087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:128520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.601 [2024-10-11 12:03:05.282092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:17.601 [2024-10-11 12:03:05.282102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:128528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.601 [2024-10-11 12:03:05.282113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:17.601 [2024-10-11 12:03:05.282123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.601 [2024-10-11 12:03:05.282128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:17.601 [2024-10-11 12:03:05.282139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:128544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.601 [2024-10-11 12:03:05.282144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:17.601 [2024-10-11 12:03:05.282155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:128552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.601 [2024-10-11 12:03:05.282160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:17.601 [2024-10-11 12:03:05.282170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:128560 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:17.601 [2024-10-11 12:03:05.282175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:17.601 [2024-10-11 12:03:05.282186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:128568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.601 [2024-10-11 12:03:05.282191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:17.601 [2024-10-11 12:03:05.282203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:128576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.601 [2024-10-11 12:03:05.282208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:17.601 [2024-10-11 12:03:05.282218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:128584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.601 [2024-10-11 12:03:05.282224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:17.601 [2024-10-11 12:03:05.282234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:128592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.601 [2024-10-11 12:03:05.282239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:17.601 [2024-10-11 12:03:05.282249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:128600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.601 [2024-10-11 12:03:05.282255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:17.601 [2024-10-11 12:03:05.282265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:128608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.601 [2024-10-11 12:03:05.282270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:17.601 [2024-10-11 12:03:05.282281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:128616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.601 [2024-10-11 12:03:05.282286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:17.601 [2024-10-11 12:03:05.282296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:128624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.601 [2024-10-11 12:03:05.282302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:17.601 [2024-10-11 12:03:05.282314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:128632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.601 [2024-10-11 12:03:05.282319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:17.601 [2024-10-11 12:03:05.282330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:128640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.601 [2024-10-11 12:03:05.282335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:17.601 [2024-10-11 12:03:05.282346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:128648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.601 [2024-10-11 12:03:05.282351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:17.601 [2024-10-11 12:03:05.282361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:128656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.601 [2024-10-11 12:03:05.282367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:17.601 [2024-10-11 12:03:05.282377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:128664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.602 [2024-10-11 12:03:05.282383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:17.602 [2024-10-11 12:03:05.282393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:128672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.602 [2024-10-11 12:03:05.282399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:17.602 [2024-10-11 12:03:05.282409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:128680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.602 [2024-10-11 12:03:05.282414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:17.602 [2024-10-11 12:03:05.282425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:128688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.602 [2024-10-11 12:03:05.282430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:17.602 [2024-10-11 12:03:05.282441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:128696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.602 [2024-10-11 12:03:05.282446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:17.602 [2024-10-11 12:03:05.282456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:128704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.602 [2024-10-11 12:03:05.282461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:17.602 [2024-10-11 12:03:05.282471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:128712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.602 [2024-10-11 12:03:05.282476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:17.602 [2024-10-11 12:03:05.282487] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:128720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.602 [2024-10-11 12:03:05.282492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:17.602 [2024-10-11 12:03:05.282504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:128728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.602 [2024-10-11 12:03:05.282509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:17.602 [2024-10-11 12:03:05.282519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:128736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.602 [2024-10-11 12:03:05.282525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:17.602 [2024-10-11 12:03:05.282535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:128744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.602 [2024-10-11 12:03:05.282540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.602 [2024-10-11 12:03:05.282551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:128256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.602 [2024-10-11 12:03:05.282556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:17.602 [2024-10-11 12:03:05.282566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:128264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.602 [2024-10-11 12:03:05.282571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:17.602 [2024-10-11 12:03:05.282582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:128272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.602 [2024-10-11 12:03:05.282587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:17.602 [2024-10-11 12:03:05.282597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:128280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.602 [2024-10-11 12:03:05.282602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:17.602 [2024-10-11 12:03:05.282613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:128288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.602 [2024-10-11 12:03:05.282618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:17.602 [2024-10-11 12:03:05.282629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:128296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.602 [2024-10-11 12:03:05.282634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 
cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:17.602 [2024-10-11 12:03:05.282762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:128304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.602 [2024-10-11 12:03:05.282770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:17.602 [2024-10-11 12:03:05.282783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:128752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.602 [2024-10-11 12:03:05.282788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:17.602 [2024-10-11 12:03:05.282800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:128760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.602 [2024-10-11 12:03:05.282806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:17.602 [2024-10-11 12:03:05.282820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:128768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.602 [2024-10-11 12:03:05.282826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:17.602 [2024-10-11 12:03:05.282838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:128776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.602 [2024-10-11 12:03:05.282843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:17.602 [2024-10-11 12:03:05.282855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:128784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.602 [2024-10-11 12:03:05.282861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:17.602 [2024-10-11 12:03:05.282873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:128792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.602 [2024-10-11 12:03:05.282878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:17.602 [2024-10-11 12:03:05.282891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:128800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.602 [2024-10-11 12:03:05.282896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:17.602 [2024-10-11 12:03:05.282908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:128808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.602 [2024-10-11 12:03:05.282913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:17.602 [2024-10-11 12:03:05.282926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:128816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.602 [2024-10-11 12:03:05.282931] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
[2024-10-11 12:03:05.282943 - 12:03:05.284556] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: repeated *NOTICE* pairs on qid:1 while the path is inaccessible: WRITE commands (nsid:1, lba 128824-129232, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ commands (nsid:1, lba 128312-128488, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) on cids between 1 and 126 all complete with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd 0x0032-0x007c, p:0 m:0 dnr:0
12258.25 IOPS, 47.88 MiB/s [2024-10-11T10:03:20.307Z]
11315.31 IOPS, 44.20 MiB/s [2024-10-11T10:03:20.307Z]
10507.07 IOPS, 41.04 MiB/s [2024-10-11T10:03:20.307Z]
9916.47 IOPS, 38.74 MiB/s [2024-10-11T10:03:20.307Z]
10106.62 IOPS, 39.48 MiB/s [2024-10-11T10:03:20.307Z]
10309.41 IOPS, 40.27 MiB/s [2024-10-11T10:03:20.307Z]
10709.67 IOPS, 41.83 MiB/s [2024-10-11T10:03:20.307Z]
11071.42 IOPS, 43.25 MiB/s [2024-10-11T10:03:20.307Z]
11269.25 IOPS, 44.02 MiB/s [2024-10-11T10:03:20.307Z]
11349.24 IOPS, 44.33 MiB/s [2024-10-11T10:03:20.307Z]
11426.73 IOPS, 44.64 MiB/s [2024-10-11T10:03:20.307Z]
11682.43 IOPS, 45.63 MiB/s [2024-10-11T10:03:20.307Z]
11921.96 IOPS, 46.57 MiB/s [2024-10-11T10:03:20.307Z]
[2024-10-11 12:03:17.778002 - 12:03:17.781893] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: a second batch of *NOTICE* pairs on qid:1: WRITE commands (nsid:1, lba 106520-106752, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ commands (nsid:1, lba 106456-106496, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) on cids between 1 and 124 again complete with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd 0x0021-0x0034, p:0 m:0 dnr:0
12014.84 IOPS, 46.93 MiB/s [2024-10-11T10:03:20.308Z]
12040.96 IOPS, 47.04 MiB/s [2024-10-11T10:03:20.308Z]
00:26:17.605 Received shutdown signal, test time was about 26.542800 seconds
00:26:17.605
00:26:17.605                                                                  Latency(us)
00:26:17.605 [2024-10-11T10:03:20.308Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:26:17.605 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:26:17.605      Verification LBA range: start 0x0 length 0x4000
00:26:17.605      Nvme0n1            :      26.54   12053.19      47.08       0.00     0.00   10599.69     873.81 3019898.88
00:26:17.605 [2024-10-11T10:03:20.308Z] ===================================================================================================================
00:26:17.605 [2024-10-11T10:03:20.308Z] Total              :              12053.19      47.08       0.00     0.00   10599.69     873.81 3019898.88
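The MiB/s column follows directly from the IOPS column and the 4096-byte IO size in the job line; a quick check of the Nvme0n1/Total row (plain awk, values copied from the table above, only for illustration):

  # 12053.19 IOPS x 4096 bytes per I/O, expressed in MiB/s (1 MiB = 1048576 bytes)
  awk 'BEGIN { printf "%.2f MiB/s\n", 12053.19 * 4096 / 1048576 }'
  # prints 47.08 MiB/s, matching the table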
00:26:17.605 12:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:17.605 12:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:26:17.605 12:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:26:17.605 12:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:26:17.605 12:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # nvmfcleanup
00:26:17.605 12:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:26:17.605 12:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:26:17.605 12:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:26:17.605 12:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:17.605 12:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:26:17.605 rmmod nvme_tcp
00:26:17.865 rmmod nvme_fabrics
00:26:17.865 rmmod nvme_keyring
00:26:17.865 12:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:17.865 12:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:26:17.865 12:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:26:17.865 12:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@515 -- # '[' -n 2054232 ']'
00:26:17.865 12:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # killprocess 2054232
00:26:17.865 12:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 2054232 ']'
00:26:17.865 12:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 2054232
00:26:17.865 12:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname
00:26:17.865 12:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:26:17.865 12:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2054232
00:26:17.865 12:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:26:17.865 12:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:26:17.865 12:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2054232'
00:26:17.865 killing process with pid 2054232
00:26:17.865 12:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 2054232
00:26:17.865 12:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 2054232
00:26:17.865 12:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:26:17.865 12:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:26:17.865 12:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:26:17.865 12:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:26:17.865 12:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-save
00:26:17.865 12:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:26:17.865 12:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-restore
00:26:17.865 12:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:26:17.865 12:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns
00:26:17.865 12:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:17.865 12:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:17.865 12:03:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:20.409 12:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:26:20.409
00:26:20.409 real    0m41.245s
00:26:20.409 user    1m45.885s
00:26:20.409 sys     0m11.669s
00:26:20.409 12:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable
00:26:20.409 12:03:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:26:20.409 ************************************
00:26:20.409 END TEST nvmf_host_multipath_status
00:26:20.409 ************************************
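Condensed for reference, the teardown traced above amounts to roughly the following (a minimal sketch only: the repo path and PID are the ones from this run, the harness waits on its own child process so the wait here is best effort, and the explicit ip netns delete is an assumption about what _remove_spdk_ns does behind its hidden redirect):

  #!/usr/bin/env bash
  # tear down the multipath_status target: drop the subsystem, unload the kernel NVMe
  # modules, stop the SPDK target process and undo the test's network plumbing
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  PID=2054232                                             # nvmf target pid from this run
  "$SPDK/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  sync
  modprobe -v -r nvme-tcp                                 # also removes nvme_fabrics / nvme_keyring deps
  modprobe -v -r nvme-fabrics
  kill "$PID" && wait "$PID" 2>/dev/null || true          # in the harness the target is a child of this shell
  iptables-save | grep -v SPDK_NVMF | iptables-restore    # strip only the SPDK_NVMF-tagged rules
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true     # assumption: what remove_spdk_ns boils down to
  ip -4 addr flush cvl_0_1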
00:26:20.409 12:03:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:26:20.409 12:03:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:26:20.409 12:03:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:26:20.409 12:03:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:26:20.409 ************************************
00:26:20.409 START TEST nvmf_discovery_remove_ifc
00:26:20.409 ************************************
00:26:20.409 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:26:20.409 * Looking for test storage...
00:26:20.409 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:26:20.409 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:26:20.409 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version
00:26:20.410 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:26:20.410 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:26:20.410 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:26:20.410 [scripts/common.sh@333-@368] cmp_versions trace: ver1 and ver2 are split with IFS=.-: (read -ra), op='<', ver1_l=2, ver2_l=1; the per-component loop runs decimal 1 and decimal 2 (echo 1 / echo 2), finds ver1[v] < ver2[v] and returns 0, so the installed lcov 1.15 is treated as older than 2
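The cmp_versions helper traced above compares dotted version strings component by component after splitting on '.', '-' and ':'. A standalone sketch of the same idea (illustrative only; the real helper lives in spdk/scripts/common.sh and handles more operators):

  # minimal dotted-version "less than" check in the spirit of cmp_versions above
  version_lt() {
      local IFS=.-:
      local -a a=($1) b=($2)      # split "1.15" into 1 15, "2" into 2
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          local x=${a[i]:-0} y=${b[i]:-0}   # missing components compare as 0
          ((x < y)) && return 0
          ((x > y)) && return 1
      done
      return 1                    # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo "lcov 1.15 < 2"   # prints: lcov 1.15 < 2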
00:26:20.410 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:26:20.410 [common/autotest_common.sh@1704-@1705] export LCOV_OPTS=' --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 ' and export LCOV='lcov' followed by the same --rc option block
00:26:20.410 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
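The LCOV_OPTS/LCOV exports above only configure coverage collection; nothing in this test invokes them directly. Purely as an illustration of how such settings are typically consumed later (the output path is made up):

  # capture coverage data using the --rc switches exported above (illustrative invocation)
  lcov $LCOV_OPTS --capture --directory . --output-file /tmp/nvmf_host_cov.info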
00:26:20.410 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
00:26:20.410 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:26:20.410 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:26:20.410 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:26:20.410 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:26:20.410 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:26:20.410 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:26:20.410 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:26:20.410 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:26:20.410 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:26:20.410 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:26:20.410 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:26:20.410 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
00:26:20.410 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:26:20.410 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:26:20.410 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:26:20.410 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:26:20.410 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:26:20.410 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob
00:26:20.410 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:26:20.410 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:26:20.410 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:26:20.410 [paths/export.sh@2-@6] PATH is prepended (repeatedly, once per nested re-source) with /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin ahead of /usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin, then exported and echoed
00:26:20.410 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0
00:26:20.410 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:26:20.410 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:26:20.410 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:26:20.410 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:26:20.410 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:26:20.410 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:26:20.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:26:20.410 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:26:20.410 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:26:20.410 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0
00:26:20.410 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']'
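The NVME_HOSTNQN/NVME_HOSTID pair generated above is what later nvme connect calls identify themselves with (the trace shows the host ID is simply the UUID portion of the host NQN). Outside the harness, the same handshake looks roughly like this against a live target (a sketch only: the 10.0.0.2 address and port 4420 mirror the values this test sets up further below, and the subsystem NQN is the generic NVME_SUBNQN default, not something this step creates):

  # generate a host NQN and reuse its UUID suffix as the host ID
  HOSTNQN=$(nvme gen-hostnqn)              # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  HOSTID=${HOSTNQN##*:uuid:}               # strip the prefix, keep just the UUID
  nvme connect -t tcp -a 10.0.0.2 -s 4420 \
       -n nqn.2016-06.io.spdk:testnqn \
       --hostnqn="$HOSTNQN" --hostid="$HOSTID"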
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:20.410 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:20.410 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:20.410 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:20.410 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:20.411 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:20.411 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:20.411 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:20.411 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:20.411 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:20.411 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:20.411 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:20.411 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:20.411 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:20.411 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:20.411 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:20.411 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:26:20.411 12:03:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:28.554 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:28.554 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:26:28.554 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:28.554 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:28.554 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:28.554 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:28.554 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:28.554 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:26:28.554 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:28.554 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:26:28.554 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:26:28.554 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:26:28.554 12:03:30 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:26:28.554 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:26:28.554 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:26:28.554 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:28.554 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:28.554 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:28.554 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:28.554 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:28.554 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:28.554 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:28.554 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:28.554 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:28.554 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:28.554 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:28.554 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:28.554 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:28.554 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:28.554 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:28.554 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:28.555 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:28.555 12:03:30 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:28.555 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:28.555 Found net devices under 0000:31:00.0: cvl_0_0 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:28.555 Found net devices under 0000:31:00.1: cvl_0_1 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # 
net_devs+=("${pci_net_devs[@]}") 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # is_hw=yes 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:28.555 
12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:28.555 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:28.555 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:26:28.555 00:26:28.555 --- 10.0.0.2 ping statistics --- 00:26:28.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.555 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:28.555 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:28.555 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:26:28.555 00:26:28.555 --- 10.0.0.1 ping statistics --- 00:26:28.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.555 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # return 0 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # nvmfpid=2065247 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # waitforlisten 2065247 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 2065247 ']' 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:28.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:28.555 12:03:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:28.555 [2024-10-11 12:03:30.708482] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:26:28.555 [2024-10-11 12:03:30.708545] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:28.555 [2024-10-11 12:03:30.797353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.556 [2024-10-11 12:03:30.847551] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:28.556 [2024-10-11 12:03:30.847601] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:28.556 [2024-10-11 12:03:30.847609] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:28.556 [2024-10-11 12:03:30.847617] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:28.556 [2024-10-11 12:03:30.847623] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:28.556 [2024-10-11 12:03:30.848408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:28.818 12:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:28.818 12:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:26:28.818 12:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:28.818 12:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:28.818 12:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:29.079 12:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:29.079 12:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:29.079 12:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.079 12:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:29.079 [2024-10-11 12:03:31.577354] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:29.079 [2024-10-11 12:03:31.585588] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:29.079 null0 00:26:29.079 [2024-10-11 12:03:31.617565] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:29.079 12:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.079 12:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2065497 00:26:29.079 12:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2065497 /tmp/host.sock 00:26:29.079 12:03:31 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:29.079 12:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 2065497 ']' 00:26:29.079 12:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:26:29.079 12:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:29.079 12:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:29.079 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:29.079 12:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:29.079 12:03:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:29.079 [2024-10-11 12:03:31.694042] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:26:29.079 [2024-10-11 12:03:31.694112] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2065497 ] 00:26:29.079 [2024-10-11 12:03:31.778485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.341 [2024-10-11 12:03:31.832356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:29.913 12:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:29.913 12:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:26:29.913 12:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:29.913 12:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:29.913 12:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.913 12:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:29.913 12:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.913 12:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:29.913 12:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.913 12:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:29.913 12:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.913 12:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:29.913 12:03:32 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.913 12:03:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:31.296 [2024-10-11 12:03:33.625454] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:31.296 [2024-10-11 12:03:33.625478] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:31.296 [2024-10-11 12:03:33.625493] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:31.296 [2024-10-11 12:03:33.751890] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:31.296 [2024-10-11 12:03:33.971983] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:31.296 [2024-10-11 12:03:33.972033] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:31.296 [2024-10-11 12:03:33.972055] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:31.296 [2024-10-11 12:03:33.972077] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:31.296 [2024-10-11 12:03:33.972102] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:31.296 12:03:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.296 12:03:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:31.296 [2024-10-11 12:03:33.975429] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x13fe710 was disconnected and freed. delete nvme_qpair. 
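The get_bdev_list/wait_for_bdev polling that repeats below is reconstructed here as a condensed sketch from the rpc_cmd/jq/sort/xargs trace; the actual helpers in host/discovery_remove_ifc.sh may differ in detail (e.g. they likely bound the number of retries):

get_bdev_list() {
    # names of all bdevs known to the host app on /tmp/host.sock, sorted and space-separated
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}
wait_for_bdev() {
    # poll once per second until the bdev list is exactly the expected value (e.g. nvme0n1, or '' for "gone")
    while [[ "$(get_bdev_list)" != "$1" ]]; do sleep 1; done
}

With discovery attached, the list settles on nvme0n1; the test then deletes 10.0.0.2 from cvl_0_0 and downs the link, and waits for the list to drain to empty.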
00:26:31.296 12:03:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:31.296 12:03:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:31.296 12:03:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:31.296 12:03:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.296 12:03:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:31.296 12:03:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:31.296 12:03:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:31.556 12:03:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.556 12:03:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:31.556 12:03:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:31.557 12:03:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:31.557 12:03:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:31.557 12:03:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:31.557 12:03:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:31.557 12:03:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:31.557 12:03:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.557 12:03:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:31.557 12:03:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:31.557 12:03:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:31.557 12:03:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.557 12:03:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:31.557 12:03:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:32.941 12:03:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:32.941 12:03:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:32.941 12:03:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:32.941 12:03:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.941 12:03:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:32.941 12:03:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:32.941 12:03:35 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:32.941 12:03:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.941 12:03:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:32.941 12:03:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:33.883 12:03:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:33.883 12:03:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:33.883 12:03:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:33.883 12:03:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.883 12:03:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:33.883 12:03:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:33.883 12:03:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:33.883 12:03:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.883 12:03:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:33.883 12:03:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:34.824 12:03:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:34.824 12:03:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:34.824 12:03:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:34.824 12:03:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.824 12:03:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:34.824 12:03:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:34.824 12:03:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:34.824 12:03:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.824 12:03:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:34.824 12:03:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:35.767 12:03:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:35.767 12:03:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:35.767 12:03:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:35.767 12:03:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.767 12:03:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:35.767 12:03:38 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:35.767 12:03:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:35.767 12:03:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.767 12:03:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:35.767 12:03:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:36.710 [2024-10-11 12:03:39.412640] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:36.710 [2024-10-11 12:03:39.412677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:36.710 [2024-10-11 12:03:39.412687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.710 [2024-10-11 12:03:39.412695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:36.710 [2024-10-11 12:03:39.412705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.710 [2024-10-11 12:03:39.412711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:36.710 [2024-10-11 12:03:39.412716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.710 [2024-10-11 12:03:39.412722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:36.710 [2024-10-11 12:03:39.412727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.710 [2024-10-11 12:03:39.412733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:36.710 [2024-10-11 12:03:39.412738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:36.710 [2024-10-11 12:03:39.412743] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13db180 is same with the state(6) to be set 00:26:36.970 [2024-10-11 12:03:39.422663] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13db180 (9): Bad file descriptor 00:26:36.970 [2024-10-11 12:03:39.432698] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:36.970 12:03:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:36.970 12:03:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:36.970 12:03:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:36.970 12:03:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.970 12:03:39 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:36.970 12:03:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:36.970 12:03:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:37.911 [2024-10-11 12:03:40.457211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:37.911 [2024-10-11 12:03:40.457314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13db180 with addr=10.0.0.2, port=4420 00:26:37.911 [2024-10-11 12:03:40.457348] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13db180 is same with the state(6) to be set 00:26:37.911 [2024-10-11 12:03:40.457416] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13db180 (9): Bad file descriptor 00:26:37.911 [2024-10-11 12:03:40.458554] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:37.911 [2024-10-11 12:03:40.458627] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:37.911 [2024-10-11 12:03:40.458650] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:37.911 [2024-10-11 12:03:40.458673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:37.912 [2024-10-11 12:03:40.458739] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:37.912 [2024-10-11 12:03:40.458764] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:37.912 12:03:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.912 12:03:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:37.912 12:03:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:38.852 [2024-10-11 12:03:41.461161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:38.852 [2024-10-11 12:03:41.461182] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:38.852 [2024-10-11 12:03:41.461187] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:38.852 [2024-10-11 12:03:41.461193] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:26:38.852 [2024-10-11 12:03:41.461202] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
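The failed-reconnect messages above are the expected effect of the timeouts passed to bdev_nvme_start_discovery earlier in this run; the command is repeated here for reference with the values as traced (the per-flag comments are interpretation, not log output):

# --reconnect-delay-sec 1       retry the lost TCP connection roughly once per second
# --ctrlr-loss-timeout-sec 2    give up and tear the controller down after ~2s of failed resets
# --fast-io-fail-timeout-sec 1  fail queued I/O after 1s instead of waiting for controller loss
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach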
00:26:38.852 [2024-10-11 12:03:41.461218] bdev_nvme.c:6904:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:38.852 [2024-10-11 12:03:41.461238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:38.852 [2024-10-11 12:03:41.461246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.852 [2024-10-11 12:03:41.461254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:38.852 [2024-10-11 12:03:41.461260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.852 [2024-10-11 12:03:41.461265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:38.852 [2024-10-11 12:03:41.461270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.852 [2024-10-11 12:03:41.461276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:38.852 [2024-10-11 12:03:41.461281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.853 [2024-10-11 12:03:41.461287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:38.853 [2024-10-11 12:03:41.461292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.853 [2024-10-11 12:03:41.461297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
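Once the controller and its discovery entry are torn down, the second half of the test restores the path and expects discovery to re-attach the subsystem as a fresh bdev; condensed (wait_for_bdev as sketched above):

ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # give the target its address back
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up                # and bring the port up again
wait_for_bdev nvme1n1   # the re-attached controller surfaces as a new bdev, not as nvme0n1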
00:26:38.853 [2024-10-11 12:03:41.461733] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ca8c0 (9): Bad file descriptor 00:26:38.853 [2024-10-11 12:03:41.462744] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:38.853 [2024-10-11 12:03:41.462753] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:26:38.853 12:03:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:38.853 12:03:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:38.853 12:03:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:38.853 12:03:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.853 12:03:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:38.853 12:03:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:38.853 12:03:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:38.853 12:03:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.853 12:03:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:38.853 12:03:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:38.853 12:03:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:39.113 12:03:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:39.113 12:03:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:39.113 12:03:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:39.113 12:03:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:39.113 12:03:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.113 12:03:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:39.113 12:03:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:39.113 12:03:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:39.113 12:03:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.113 12:03:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:39.113 12:03:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:40.055 12:03:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:40.055 12:03:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:40.055 12:03:42 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:40.055 12:03:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.055 12:03:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:40.055 12:03:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:40.055 12:03:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:40.055 12:03:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.055 12:03:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:40.055 12:03:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:40.995 [2024-10-11 12:03:43.520982] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:40.995 [2024-10-11 12:03:43.520997] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:40.995 [2024-10-11 12:03:43.521007] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:40.995 [2024-10-11 12:03:43.648379] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:41.256 12:03:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:41.256 12:03:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:41.256 12:03:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:41.256 12:03:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.256 12:03:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:41.256 12:03:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:41.256 12:03:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:41.256 12:03:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.256 12:03:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:41.256 12:03:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:41.256 [2024-10-11 12:03:43.832839] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:41.256 [2024-10-11 12:03:43.832870] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:41.256 [2024-10-11 12:03:43.832885] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:41.256 [2024-10-11 12:03:43.832895] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:41.256 [2024-10-11 12:03:43.832901] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:41.256 [2024-10-11 12:03:43.839609] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x13e55a0 was disconnected and freed. 
delete nvme_qpair. 00:26:42.197 12:03:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:42.197 12:03:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:42.197 12:03:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:42.197 12:03:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.197 12:03:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:42.197 12:03:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:42.197 12:03:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:42.197 12:03:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.197 12:03:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:42.197 12:03:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:42.197 12:03:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2065497 00:26:42.197 12:03:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 2065497 ']' 00:26:42.197 12:03:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 2065497 00:26:42.197 12:03:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:26:42.197 12:03:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:42.197 12:03:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2065497 00:26:42.458 12:03:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:42.458 12:03:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:42.458 12:03:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2065497' 00:26:42.458 killing process with pid 2065497 00:26:42.458 12:03:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 2065497 00:26:42.458 12:03:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 2065497 00:26:42.458 12:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:42.458 12:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:42.458 12:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:26:42.458 12:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:42.458 12:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:26:42.458 12:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:42.458 12:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:42.458 rmmod nvme_tcp 00:26:42.458 rmmod nvme_fabrics 00:26:42.458 rmmod nvme_keyring 
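The remainder of nvmftestfini below boils down to unloading the kernel initiator modules and undoing the nvmf_tcp_init plumbing; a condensed sketch (the namespace removal is hidden behind xtrace_disable_per_cmd, so that step is inferred rather than traced):

modprobe -v -r nvme-fabrics                            # nvme-tcp was already removed above
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the rules the harness tagged
ip netns delete cvl_0_0_ns_spdk                        # assumed: what _remove_spdk_ns does here
ip -4 addr flush cvl_0_1                               # clear the initiator-side address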
00:26:42.458 12:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:42.458 12:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:26:42.458 12:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:26:42.458 12:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@515 -- # '[' -n 2065247 ']' 00:26:42.458 12:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # killprocess 2065247 00:26:42.458 12:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 2065247 ']' 00:26:42.458 12:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 2065247 00:26:42.458 12:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:26:42.458 12:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:42.458 12:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2065247 00:26:42.458 12:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:42.458 12:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:42.458 12:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2065247' 00:26:42.458 killing process with pid 2065247 00:26:42.458 12:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 2065247 00:26:42.458 12:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 2065247 00:26:42.719 12:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:42.719 12:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:42.719 12:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:42.719 12:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:26:42.719 12:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-save 00:26:42.719 12:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:42.719 12:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-restore 00:26:42.719 12:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:42.719 12:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:42.719 12:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:42.719 12:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:42.719 12:03:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:44.629 12:03:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:44.889 00:26:44.889 real 0m24.628s 00:26:44.889 user 0m29.568s 00:26:44.889 sys 0m7.298s 00:26:44.889 12:03:47 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:44.889 12:03:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:44.889 ************************************ 00:26:44.889 END TEST nvmf_discovery_remove_ifc 00:26:44.889 ************************************ 00:26:44.889 12:03:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:44.889 12:03:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:44.889 12:03:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:44.889 12:03:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.889 ************************************ 00:26:44.889 START TEST nvmf_identify_kernel_target 00:26:44.889 ************************************ 00:26:44.889 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:44.889 * Looking for test storage... 00:26:44.889 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:44.889 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:44.889 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:26:44.889 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:45.150 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:45.150 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:45.150 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:45.150 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:45.150 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:26:45.150 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:26:45.150 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:26:45.150 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:26:45.150 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:26:45.150 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:26:45.150 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:26:45.150 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:45.150 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:26:45.150 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:26:45.150 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:45.150 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:45.150 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:26:45.150 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:26:45.150 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:45.150 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:26:45.150 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:26:45.150 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:26:45.150 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:26:45.150 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:45.150 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:26:45.150 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:26:45.150 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:45.150 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:45.150 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:26:45.150 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:45.150 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:45.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:45.150 --rc genhtml_branch_coverage=1 00:26:45.150 --rc genhtml_function_coverage=1 00:26:45.150 --rc genhtml_legend=1 00:26:45.150 --rc geninfo_all_blocks=1 00:26:45.150 --rc geninfo_unexecuted_blocks=1 00:26:45.150 00:26:45.150 ' 00:26:45.150 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:45.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:45.150 --rc genhtml_branch_coverage=1 00:26:45.150 --rc genhtml_function_coverage=1 00:26:45.150 --rc genhtml_legend=1 00:26:45.150 --rc geninfo_all_blocks=1 00:26:45.150 --rc geninfo_unexecuted_blocks=1 00:26:45.150 00:26:45.150 ' 00:26:45.150 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:45.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:45.150 --rc genhtml_branch_coverage=1 00:26:45.150 --rc genhtml_function_coverage=1 00:26:45.150 --rc genhtml_legend=1 00:26:45.150 --rc geninfo_all_blocks=1 00:26:45.150 --rc geninfo_unexecuted_blocks=1 00:26:45.150 00:26:45.150 ' 00:26:45.150 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:45.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:45.150 --rc genhtml_branch_coverage=1 00:26:45.150 --rc genhtml_function_coverage=1 00:26:45.150 --rc genhtml_legend=1 00:26:45.150 --rc geninfo_all_blocks=1 00:26:45.150 --rc geninfo_unexecuted_blocks=1 00:26:45.150 00:26:45.150 ' 00:26:45.150 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:45.150 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:26:45.150 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:45.150 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:45.150 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:45.150 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:45.150 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:45.150 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:45.150 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:45.150 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:45.150 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:45.150 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:45.150 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:45.150 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:45.150 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:45.150 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:45.150 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:45.150 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:45.150 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:45.150 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:26:45.150 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:45.150 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:45.150 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:45.151 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.151 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.151 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.151 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:45.151 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.151 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:26:45.151 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:45.151 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:45.151 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:45.151 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:45.151 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:45.151 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:26:45.151 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:45.151 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:45.151 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:45.151 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:45.151 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:26:45.151 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:45.151 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:45.151 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:45.151 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:45.151 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:45.151 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:45.151 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:45.151 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:45.151 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:45.151 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:45.151 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:26:45.151 12:03:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:53.289 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:53.289 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:26:53.289 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:53.289 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:53.289 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:53.289 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:53.289 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:53.289 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:26:53.289 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:53.289 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:26:53.289 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:26:53.289 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:26:53.289 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:26:53.289 12:03:54 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:26:53.289 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:26:53.289 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:53.289 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:53.289 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:53.290 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:53.290 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:53.290 Found net devices under 0000:31:00.0: cvl_0_0 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:53.290 Found net devices under 0000:31:00.1: cvl_0_1 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # is_hw=yes 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:53.290 12:03:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:53.290 12:03:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:53.290 12:03:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:53.290 12:03:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:53.290 12:03:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:53.290 12:03:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:53.290 12:03:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:53.290 12:03:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:53.290 12:03:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:53.290 12:03:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:53.290 12:03:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:53.290 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:53.290 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.657 ms 00:26:53.290 00:26:53.290 --- 10.0.0.2 ping statistics --- 00:26:53.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:53.290 rtt min/avg/max/mdev = 0.657/0.657/0.657/0.000 ms 00:26:53.290 12:03:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:53.290 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:53.290 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:26:53.290 00:26:53.290 --- 10.0.0.1 ping statistics --- 00:26:53.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:53.290 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:26:53.290 12:03:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:53.290 12:03:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # return 0 00:26:53.290 12:03:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:53.290 12:03:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:53.290 12:03:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:53.290 12:03:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:53.290 12:03:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:53.290 12:03:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:53.290 12:03:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:53.290 12:03:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:53.290 12:03:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:53.290 12:03:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@767 -- # local ip 00:26:53.290 12:03:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:53.290 12:03:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:53.290 12:03:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.291 12:03:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.291 12:03:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:53.291 12:03:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.291 12:03:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:53.291 12:03:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:53.291 12:03:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:53.291 12:03:55 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:53.291 12:03:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:53.291 12:03:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:53.291 12:03:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:26:53.291 12:03:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:53.291 12:03:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:53.291 12:03:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:53.291 12:03:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # local block nvme 00:26:53.291 12:03:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:26:53.291 12:03:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # modprobe nvmet 00:26:53.291 12:03:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:53.291 12:03:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:56.590 Waiting for block devices as requested 00:26:56.590 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:56.590 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:56.590 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:56.590 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:56.590 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:56.851 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:26:56.851 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:56.851 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:26:57.111 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:26:57.111 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:57.372 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:57.372 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:57.372 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:57.633 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:57.633 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:26:57.633 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:57.894 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:26:58.155 12:04:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:26:58.155 12:04:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:58.155 12:04:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:26:58.155 12:04:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:26:58.155 12:04:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:58.155 12:04:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 
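The trace above (nvmf/common.sh, roughly @250-@291) brings up the two E810 ports as a back-to-back TCP test link: one port is moved into a network namespace, both sides get a /24 address, an iptables ACCEPT rule is opened for port 4420, and a ping in each direction confirms connectivity. A minimal sketch of that sequence, using the interface names cvl_0_0/cvl_0_1 reported in this run (an illustrative reconstruction of the traced commands, not the nvmf/common.sh implementation itself):

    NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NVMF_TARGET_NAMESPACE"
    ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"            # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side stays in the default namespace
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                            # default namespace -> namespace
    ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1     # namespace -> default namespace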
00:26:58.155 12:04:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:26:58.155 12:04:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:58.155 12:04:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:58.155 No valid GPT data, bailing 00:26:58.155 12:04:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:58.155 12:04:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:26:58.155 12:04:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:26:58.155 12:04:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:26:58.155 12:04:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:26:58.155 12:04:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:58.155 12:04:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:58.155 12:04:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:58.155 12:04:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:58.155 12:04:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:26:58.155 12:04:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:26:58.155 12:04:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:26:58.155 12:04:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:26:58.155 12:04:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo tcp 00:26:58.155 12:04:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 4420 00:26:58.155 12:04:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo ipv4 00:26:58.155 12:04:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:58.155 12:04:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:26:58.155 00:26:58.155 Discovery Log Number of Records 2, Generation counter 2 00:26:58.155 =====Discovery Log Entry 0====== 00:26:58.155 trtype: tcp 00:26:58.155 adrfam: ipv4 00:26:58.155 subtype: current discovery subsystem 00:26:58.155 treq: not specified, sq flow control disable supported 00:26:58.155 portid: 1 00:26:58.155 trsvcid: 4420 00:26:58.155 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:58.155 traddr: 10.0.0.1 00:26:58.155 eflags: none 00:26:58.155 sectype: none 00:26:58.155 =====Discovery Log Entry 1====== 00:26:58.155 trtype: tcp 00:26:58.155 adrfam: ipv4 00:26:58.155 subtype: nvme subsystem 00:26:58.155 treq: not specified, sq flow control disable 
supported 00:26:58.155 portid: 1 00:26:58.155 trsvcid: 4420 00:26:58.155 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:58.155 traddr: 10.0.0.1 00:26:58.155 eflags: none 00:26:58.155 sectype: none 00:26:58.155 12:04:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:58.155 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:58.418 ===================================================== 00:26:58.418 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:58.418 ===================================================== 00:26:58.418 Controller Capabilities/Features 00:26:58.418 ================================ 00:26:58.418 Vendor ID: 0000 00:26:58.418 Subsystem Vendor ID: 0000 00:26:58.418 Serial Number: 9448171becbadf4864c6 00:26:58.418 Model Number: Linux 00:26:58.418 Firmware Version: 6.8.9-20 00:26:58.418 Recommended Arb Burst: 0 00:26:58.418 IEEE OUI Identifier: 00 00 00 00:26:58.418 Multi-path I/O 00:26:58.418 May have multiple subsystem ports: No 00:26:58.418 May have multiple controllers: No 00:26:58.418 Associated with SR-IOV VF: No 00:26:58.418 Max Data Transfer Size: Unlimited 00:26:58.418 Max Number of Namespaces: 0 00:26:58.418 Max Number of I/O Queues: 1024 00:26:58.418 NVMe Specification Version (VS): 1.3 00:26:58.418 NVMe Specification Version (Identify): 1.3 00:26:58.418 Maximum Queue Entries: 1024 00:26:58.418 Contiguous Queues Required: No 00:26:58.418 Arbitration Mechanisms Supported 00:26:58.418 Weighted Round Robin: Not Supported 00:26:58.418 Vendor Specific: Not Supported 00:26:58.418 Reset Timeout: 7500 ms 00:26:58.418 Doorbell Stride: 4 bytes 00:26:58.418 NVM Subsystem Reset: Not Supported 00:26:58.418 Command Sets Supported 00:26:58.418 NVM Command Set: Supported 00:26:58.418 Boot Partition: Not Supported 00:26:58.418 Memory Page Size Minimum: 4096 bytes 00:26:58.418 Memory Page Size Maximum: 4096 bytes 00:26:58.418 Persistent Memory Region: Not Supported 00:26:58.418 Optional Asynchronous Events Supported 00:26:58.418 Namespace Attribute Notices: Not Supported 00:26:58.418 Firmware Activation Notices: Not Supported 00:26:58.418 ANA Change Notices: Not Supported 00:26:58.418 PLE Aggregate Log Change Notices: Not Supported 00:26:58.418 LBA Status Info Alert Notices: Not Supported 00:26:58.418 EGE Aggregate Log Change Notices: Not Supported 00:26:58.418 Normal NVM Subsystem Shutdown event: Not Supported 00:26:58.418 Zone Descriptor Change Notices: Not Supported 00:26:58.418 Discovery Log Change Notices: Supported 00:26:58.418 Controller Attributes 00:26:58.418 128-bit Host Identifier: Not Supported 00:26:58.418 Non-Operational Permissive Mode: Not Supported 00:26:58.418 NVM Sets: Not Supported 00:26:58.418 Read Recovery Levels: Not Supported 00:26:58.418 Endurance Groups: Not Supported 00:26:58.418 Predictable Latency Mode: Not Supported 00:26:58.418 Traffic Based Keep ALive: Not Supported 00:26:58.418 Namespace Granularity: Not Supported 00:26:58.418 SQ Associations: Not Supported 00:26:58.418 UUID List: Not Supported 00:26:58.418 Multi-Domain Subsystem: Not Supported 00:26:58.418 Fixed Capacity Management: Not Supported 00:26:58.419 Variable Capacity Management: Not Supported 00:26:58.419 Delete Endurance Group: Not Supported 00:26:58.419 Delete NVM Set: Not Supported 00:26:58.419 Extended LBA Formats Supported: Not Supported 00:26:58.419 Flexible Data Placement 
Supported: Not Supported 00:26:58.419 00:26:58.419 Controller Memory Buffer Support 00:26:58.419 ================================ 00:26:58.419 Supported: No 00:26:58.419 00:26:58.419 Persistent Memory Region Support 00:26:58.419 ================================ 00:26:58.419 Supported: No 00:26:58.419 00:26:58.419 Admin Command Set Attributes 00:26:58.419 ============================ 00:26:58.419 Security Send/Receive: Not Supported 00:26:58.419 Format NVM: Not Supported 00:26:58.419 Firmware Activate/Download: Not Supported 00:26:58.419 Namespace Management: Not Supported 00:26:58.419 Device Self-Test: Not Supported 00:26:58.419 Directives: Not Supported 00:26:58.419 NVMe-MI: Not Supported 00:26:58.419 Virtualization Management: Not Supported 00:26:58.419 Doorbell Buffer Config: Not Supported 00:26:58.419 Get LBA Status Capability: Not Supported 00:26:58.419 Command & Feature Lockdown Capability: Not Supported 00:26:58.419 Abort Command Limit: 1 00:26:58.419 Async Event Request Limit: 1 00:26:58.419 Number of Firmware Slots: N/A 00:26:58.419 Firmware Slot 1 Read-Only: N/A 00:26:58.419 Firmware Activation Without Reset: N/A 00:26:58.419 Multiple Update Detection Support: N/A 00:26:58.419 Firmware Update Granularity: No Information Provided 00:26:58.419 Per-Namespace SMART Log: No 00:26:58.419 Asymmetric Namespace Access Log Page: Not Supported 00:26:58.419 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:58.419 Command Effects Log Page: Not Supported 00:26:58.419 Get Log Page Extended Data: Supported 00:26:58.419 Telemetry Log Pages: Not Supported 00:26:58.419 Persistent Event Log Pages: Not Supported 00:26:58.419 Supported Log Pages Log Page: May Support 00:26:58.419 Commands Supported & Effects Log Page: Not Supported 00:26:58.419 Feature Identifiers & Effects Log Page:May Support 00:26:58.419 NVMe-MI Commands & Effects Log Page: May Support 00:26:58.419 Data Area 4 for Telemetry Log: Not Supported 00:26:58.419 Error Log Page Entries Supported: 1 00:26:58.419 Keep Alive: Not Supported 00:26:58.419 00:26:58.419 NVM Command Set Attributes 00:26:58.419 ========================== 00:26:58.419 Submission Queue Entry Size 00:26:58.419 Max: 1 00:26:58.419 Min: 1 00:26:58.419 Completion Queue Entry Size 00:26:58.419 Max: 1 00:26:58.419 Min: 1 00:26:58.419 Number of Namespaces: 0 00:26:58.419 Compare Command: Not Supported 00:26:58.419 Write Uncorrectable Command: Not Supported 00:26:58.419 Dataset Management Command: Not Supported 00:26:58.419 Write Zeroes Command: Not Supported 00:26:58.419 Set Features Save Field: Not Supported 00:26:58.419 Reservations: Not Supported 00:26:58.419 Timestamp: Not Supported 00:26:58.419 Copy: Not Supported 00:26:58.419 Volatile Write Cache: Not Present 00:26:58.419 Atomic Write Unit (Normal): 1 00:26:58.419 Atomic Write Unit (PFail): 1 00:26:58.419 Atomic Compare & Write Unit: 1 00:26:58.419 Fused Compare & Write: Not Supported 00:26:58.419 Scatter-Gather List 00:26:58.419 SGL Command Set: Supported 00:26:58.419 SGL Keyed: Not Supported 00:26:58.419 SGL Bit Bucket Descriptor: Not Supported 00:26:58.419 SGL Metadata Pointer: Not Supported 00:26:58.419 Oversized SGL: Not Supported 00:26:58.419 SGL Metadata Address: Not Supported 00:26:58.419 SGL Offset: Supported 00:26:58.419 Transport SGL Data Block: Not Supported 00:26:58.419 Replay Protected Memory Block: Not Supported 00:26:58.419 00:26:58.419 Firmware Slot Information 00:26:58.419 ========================= 00:26:58.419 Active slot: 0 00:26:58.419 00:26:58.419 00:26:58.419 Error Log 00:26:58.419 
========= 00:26:58.419 00:26:58.419 Active Namespaces 00:26:58.419 ================= 00:26:58.419 Discovery Log Page 00:26:58.419 ================== 00:26:58.419 Generation Counter: 2 00:26:58.419 Number of Records: 2 00:26:58.419 Record Format: 0 00:26:58.419 00:26:58.419 Discovery Log Entry 0 00:26:58.419 ---------------------- 00:26:58.419 Transport Type: 3 (TCP) 00:26:58.419 Address Family: 1 (IPv4) 00:26:58.419 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:58.419 Entry Flags: 00:26:58.419 Duplicate Returned Information: 0 00:26:58.419 Explicit Persistent Connection Support for Discovery: 0 00:26:58.419 Transport Requirements: 00:26:58.419 Secure Channel: Not Specified 00:26:58.419 Port ID: 1 (0x0001) 00:26:58.419 Controller ID: 65535 (0xffff) 00:26:58.419 Admin Max SQ Size: 32 00:26:58.419 Transport Service Identifier: 4420 00:26:58.419 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:58.419 Transport Address: 10.0.0.1 00:26:58.419 Discovery Log Entry 1 00:26:58.419 ---------------------- 00:26:58.419 Transport Type: 3 (TCP) 00:26:58.419 Address Family: 1 (IPv4) 00:26:58.419 Subsystem Type: 2 (NVM Subsystem) 00:26:58.419 Entry Flags: 00:26:58.419 Duplicate Returned Information: 0 00:26:58.419 Explicit Persistent Connection Support for Discovery: 0 00:26:58.419 Transport Requirements: 00:26:58.419 Secure Channel: Not Specified 00:26:58.419 Port ID: 1 (0x0001) 00:26:58.419 Controller ID: 65535 (0xffff) 00:26:58.419 Admin Max SQ Size: 32 00:26:58.419 Transport Service Identifier: 4420 00:26:58.419 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:58.419 Transport Address: 10.0.0.1 00:26:58.419 12:04:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:58.419 get_feature(0x01) failed 00:26:58.419 get_feature(0x02) failed 00:26:58.419 get_feature(0x04) failed 00:26:58.419 ===================================================== 00:26:58.419 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:58.419 ===================================================== 00:26:58.419 Controller Capabilities/Features 00:26:58.419 ================================ 00:26:58.419 Vendor ID: 0000 00:26:58.419 Subsystem Vendor ID: 0000 00:26:58.419 Serial Number: 2185f0f76cb56bbf856d 00:26:58.419 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:58.419 Firmware Version: 6.8.9-20 00:26:58.419 Recommended Arb Burst: 6 00:26:58.419 IEEE OUI Identifier: 00 00 00 00:26:58.419 Multi-path I/O 00:26:58.419 May have multiple subsystem ports: Yes 00:26:58.419 May have multiple controllers: Yes 00:26:58.419 Associated with SR-IOV VF: No 00:26:58.419 Max Data Transfer Size: Unlimited 00:26:58.419 Max Number of Namespaces: 1024 00:26:58.419 Max Number of I/O Queues: 128 00:26:58.419 NVMe Specification Version (VS): 1.3 00:26:58.419 NVMe Specification Version (Identify): 1.3 00:26:58.419 Maximum Queue Entries: 1024 00:26:58.419 Contiguous Queues Required: No 00:26:58.419 Arbitration Mechanisms Supported 00:26:58.419 Weighted Round Robin: Not Supported 00:26:58.419 Vendor Specific: Not Supported 00:26:58.419 Reset Timeout: 7500 ms 00:26:58.419 Doorbell Stride: 4 bytes 00:26:58.419 NVM Subsystem Reset: Not Supported 00:26:58.419 Command Sets Supported 00:26:58.419 NVM Command Set: Supported 00:26:58.419 Boot Partition: Not Supported 00:26:58.419 
Memory Page Size Minimum: 4096 bytes 00:26:58.419 Memory Page Size Maximum: 4096 bytes 00:26:58.419 Persistent Memory Region: Not Supported 00:26:58.419 Optional Asynchronous Events Supported 00:26:58.419 Namespace Attribute Notices: Supported 00:26:58.419 Firmware Activation Notices: Not Supported 00:26:58.419 ANA Change Notices: Supported 00:26:58.419 PLE Aggregate Log Change Notices: Not Supported 00:26:58.419 LBA Status Info Alert Notices: Not Supported 00:26:58.419 EGE Aggregate Log Change Notices: Not Supported 00:26:58.419 Normal NVM Subsystem Shutdown event: Not Supported 00:26:58.419 Zone Descriptor Change Notices: Not Supported 00:26:58.419 Discovery Log Change Notices: Not Supported 00:26:58.419 Controller Attributes 00:26:58.419 128-bit Host Identifier: Supported 00:26:58.419 Non-Operational Permissive Mode: Not Supported 00:26:58.419 NVM Sets: Not Supported 00:26:58.419 Read Recovery Levels: Not Supported 00:26:58.419 Endurance Groups: Not Supported 00:26:58.419 Predictable Latency Mode: Not Supported 00:26:58.419 Traffic Based Keep ALive: Supported 00:26:58.419 Namespace Granularity: Not Supported 00:26:58.419 SQ Associations: Not Supported 00:26:58.419 UUID List: Not Supported 00:26:58.419 Multi-Domain Subsystem: Not Supported 00:26:58.419 Fixed Capacity Management: Not Supported 00:26:58.419 Variable Capacity Management: Not Supported 00:26:58.419 Delete Endurance Group: Not Supported 00:26:58.419 Delete NVM Set: Not Supported 00:26:58.419 Extended LBA Formats Supported: Not Supported 00:26:58.419 Flexible Data Placement Supported: Not Supported 00:26:58.419 00:26:58.419 Controller Memory Buffer Support 00:26:58.419 ================================ 00:26:58.419 Supported: No 00:26:58.419 00:26:58.419 Persistent Memory Region Support 00:26:58.420 ================================ 00:26:58.420 Supported: No 00:26:58.420 00:26:58.420 Admin Command Set Attributes 00:26:58.420 ============================ 00:26:58.420 Security Send/Receive: Not Supported 00:26:58.420 Format NVM: Not Supported 00:26:58.420 Firmware Activate/Download: Not Supported 00:26:58.420 Namespace Management: Not Supported 00:26:58.420 Device Self-Test: Not Supported 00:26:58.420 Directives: Not Supported 00:26:58.420 NVMe-MI: Not Supported 00:26:58.420 Virtualization Management: Not Supported 00:26:58.420 Doorbell Buffer Config: Not Supported 00:26:58.420 Get LBA Status Capability: Not Supported 00:26:58.420 Command & Feature Lockdown Capability: Not Supported 00:26:58.420 Abort Command Limit: 4 00:26:58.420 Async Event Request Limit: 4 00:26:58.420 Number of Firmware Slots: N/A 00:26:58.420 Firmware Slot 1 Read-Only: N/A 00:26:58.420 Firmware Activation Without Reset: N/A 00:26:58.420 Multiple Update Detection Support: N/A 00:26:58.420 Firmware Update Granularity: No Information Provided 00:26:58.420 Per-Namespace SMART Log: Yes 00:26:58.420 Asymmetric Namespace Access Log Page: Supported 00:26:58.420 ANA Transition Time : 10 sec 00:26:58.420 00:26:58.420 Asymmetric Namespace Access Capabilities 00:26:58.420 ANA Optimized State : Supported 00:26:58.420 ANA Non-Optimized State : Supported 00:26:58.420 ANA Inaccessible State : Supported 00:26:58.420 ANA Persistent Loss State : Supported 00:26:58.420 ANA Change State : Supported 00:26:58.420 ANAGRPID is not changed : No 00:26:58.420 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:58.420 00:26:58.420 ANA Group Identifier Maximum : 128 00:26:58.420 Number of ANA Group Identifiers : 128 00:26:58.420 Max Number of Allowed Namespaces : 1024 00:26:58.420 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:58.420 Command Effects Log Page: Supported 00:26:58.420 Get Log Page Extended Data: Supported 00:26:58.420 Telemetry Log Pages: Not Supported 00:26:58.420 Persistent Event Log Pages: Not Supported 00:26:58.420 Supported Log Pages Log Page: May Support 00:26:58.420 Commands Supported & Effects Log Page: Not Supported 00:26:58.420 Feature Identifiers & Effects Log Page:May Support 00:26:58.420 NVMe-MI Commands & Effects Log Page: May Support 00:26:58.420 Data Area 4 for Telemetry Log: Not Supported 00:26:58.420 Error Log Page Entries Supported: 128 00:26:58.420 Keep Alive: Supported 00:26:58.420 Keep Alive Granularity: 1000 ms 00:26:58.420 00:26:58.420 NVM Command Set Attributes 00:26:58.420 ========================== 00:26:58.420 Submission Queue Entry Size 00:26:58.420 Max: 64 00:26:58.420 Min: 64 00:26:58.420 Completion Queue Entry Size 00:26:58.420 Max: 16 00:26:58.420 Min: 16 00:26:58.420 Number of Namespaces: 1024 00:26:58.420 Compare Command: Not Supported 00:26:58.420 Write Uncorrectable Command: Not Supported 00:26:58.420 Dataset Management Command: Supported 00:26:58.420 Write Zeroes Command: Supported 00:26:58.420 Set Features Save Field: Not Supported 00:26:58.420 Reservations: Not Supported 00:26:58.420 Timestamp: Not Supported 00:26:58.420 Copy: Not Supported 00:26:58.420 Volatile Write Cache: Present 00:26:58.420 Atomic Write Unit (Normal): 1 00:26:58.420 Atomic Write Unit (PFail): 1 00:26:58.420 Atomic Compare & Write Unit: 1 00:26:58.420 Fused Compare & Write: Not Supported 00:26:58.420 Scatter-Gather List 00:26:58.420 SGL Command Set: Supported 00:26:58.420 SGL Keyed: Not Supported 00:26:58.420 SGL Bit Bucket Descriptor: Not Supported 00:26:58.420 SGL Metadata Pointer: Not Supported 00:26:58.420 Oversized SGL: Not Supported 00:26:58.420 SGL Metadata Address: Not Supported 00:26:58.420 SGL Offset: Supported 00:26:58.420 Transport SGL Data Block: Not Supported 00:26:58.420 Replay Protected Memory Block: Not Supported 00:26:58.420 00:26:58.420 Firmware Slot Information 00:26:58.420 ========================= 00:26:58.420 Active slot: 0 00:26:58.420 00:26:58.420 Asymmetric Namespace Access 00:26:58.420 =========================== 00:26:58.420 Change Count : 0 00:26:58.420 Number of ANA Group Descriptors : 1 00:26:58.420 ANA Group Descriptor : 0 00:26:58.420 ANA Group ID : 1 00:26:58.420 Number of NSID Values : 1 00:26:58.420 Change Count : 0 00:26:58.420 ANA State : 1 00:26:58.420 Namespace Identifier : 1 00:26:58.420 00:26:58.420 Commands Supported and Effects 00:26:58.420 ============================== 00:26:58.420 Admin Commands 00:26:58.420 -------------- 00:26:58.420 Get Log Page (02h): Supported 00:26:58.420 Identify (06h): Supported 00:26:58.420 Abort (08h): Supported 00:26:58.420 Set Features (09h): Supported 00:26:58.420 Get Features (0Ah): Supported 00:26:58.420 Asynchronous Event Request (0Ch): Supported 00:26:58.420 Keep Alive (18h): Supported 00:26:58.420 I/O Commands 00:26:58.420 ------------ 00:26:58.420 Flush (00h): Supported 00:26:58.420 Write (01h): Supported LBA-Change 00:26:58.420 Read (02h): Supported 00:26:58.420 Write Zeroes (08h): Supported LBA-Change 00:26:58.420 Dataset Management (09h): Supported 00:26:58.420 00:26:58.420 Error Log 00:26:58.420 ========= 00:26:58.420 Entry: 0 00:26:58.420 Error Count: 0x3 00:26:58.420 Submission Queue Id: 0x0 00:26:58.420 Command Id: 0x5 00:26:58.420 Phase Bit: 0 00:26:58.420 Status Code: 0x2 00:26:58.420 Status Code Type: 0x0 00:26:58.420 Do Not Retry: 1 00:26:58.420 
Error Location: 0x28 00:26:58.420 LBA: 0x0 00:26:58.420 Namespace: 0x0 00:26:58.420 Vendor Log Page: 0x0 00:26:58.420 ----------- 00:26:58.420 Entry: 1 00:26:58.420 Error Count: 0x2 00:26:58.420 Submission Queue Id: 0x0 00:26:58.420 Command Id: 0x5 00:26:58.420 Phase Bit: 0 00:26:58.420 Status Code: 0x2 00:26:58.420 Status Code Type: 0x0 00:26:58.420 Do Not Retry: 1 00:26:58.420 Error Location: 0x28 00:26:58.420 LBA: 0x0 00:26:58.420 Namespace: 0x0 00:26:58.420 Vendor Log Page: 0x0 00:26:58.420 ----------- 00:26:58.420 Entry: 2 00:26:58.420 Error Count: 0x1 00:26:58.420 Submission Queue Id: 0x0 00:26:58.420 Command Id: 0x4 00:26:58.420 Phase Bit: 0 00:26:58.420 Status Code: 0x2 00:26:58.420 Status Code Type: 0x0 00:26:58.420 Do Not Retry: 1 00:26:58.420 Error Location: 0x28 00:26:58.420 LBA: 0x0 00:26:58.420 Namespace: 0x0 00:26:58.420 Vendor Log Page: 0x0 00:26:58.420 00:26:58.420 Number of Queues 00:26:58.420 ================ 00:26:58.420 Number of I/O Submission Queues: 128 00:26:58.420 Number of I/O Completion Queues: 128 00:26:58.420 00:26:58.420 ZNS Specific Controller Data 00:26:58.420 ============================ 00:26:58.420 Zone Append Size Limit: 0 00:26:58.420 00:26:58.420 00:26:58.420 Active Namespaces 00:26:58.420 ================= 00:26:58.420 get_feature(0x05) failed 00:26:58.420 Namespace ID:1 00:26:58.420 Command Set Identifier: NVM (00h) 00:26:58.420 Deallocate: Supported 00:26:58.420 Deallocated/Unwritten Error: Not Supported 00:26:58.420 Deallocated Read Value: Unknown 00:26:58.420 Deallocate in Write Zeroes: Not Supported 00:26:58.420 Deallocated Guard Field: 0xFFFF 00:26:58.420 Flush: Supported 00:26:58.420 Reservation: Not Supported 00:26:58.420 Namespace Sharing Capabilities: Multiple Controllers 00:26:58.420 Size (in LBAs): 3750748848 (1788GiB) 00:26:58.420 Capacity (in LBAs): 3750748848 (1788GiB) 00:26:58.420 Utilization (in LBAs): 3750748848 (1788GiB) 00:26:58.420 UUID: 7f7b4a5c-f7b9-4d3d-834a-469e8051c10e 00:26:58.420 Thin Provisioning: Not Supported 00:26:58.420 Per-NS Atomic Units: Yes 00:26:58.420 Atomic Write Unit (Normal): 8 00:26:58.420 Atomic Write Unit (PFail): 8 00:26:58.420 Preferred Write Granularity: 8 00:26:58.420 Atomic Compare & Write Unit: 8 00:26:58.420 Atomic Boundary Size (Normal): 0 00:26:58.420 Atomic Boundary Size (PFail): 0 00:26:58.420 Atomic Boundary Offset: 0 00:26:58.420 NGUID/EUI64 Never Reused: No 00:26:58.420 ANA group ID: 1 00:26:58.420 Namespace Write Protected: No 00:26:58.420 Number of LBA Formats: 1 00:26:58.420 Current LBA Format: LBA Format #00 00:26:58.420 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:58.420 00:26:58.420 12:04:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:58.420 12:04:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:58.420 12:04:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:26:58.420 12:04:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:58.420 12:04:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:26:58.420 12:04:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:58.420 12:04:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:58.420 rmmod nvme_tcp 00:26:58.420 rmmod nvme_fabrics 00:26:58.421 12:04:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:58.421 12:04:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:26:58.421 12:04:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:26:58.421 12:04:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:26:58.421 12:04:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:58.421 12:04:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:58.421 12:04:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:58.421 12:04:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:26:58.421 12:04:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-save 00:26:58.421 12:04:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:58.421 12:04:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-restore 00:26:58.421 12:04:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:58.421 12:04:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:58.421 12:04:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:58.421 12:04:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:58.421 12:04:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:00.966 12:04:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:00.966 12:04:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:00.966 12:04:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:00.966 12:04:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # echo 0 00:27:00.966 12:04:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:00.967 12:04:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:00.967 12:04:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:00.967 12:04:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:00.967 12:04:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:27:00.967 12:04:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:27:00.967 12:04:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:04.276 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:04.276 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:04.276 0000:80:01.4 
(8086 0b00): ioatdma -> vfio-pci 00:27:04.276 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:04.276 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:04.276 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:04.276 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:04.276 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:04.276 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:04.537 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:04.537 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:04.537 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:04.537 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:04.537 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:04.537 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:04.537 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:04.537 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:04.797 00:27:04.797 real 0m20.053s 00:27:04.797 user 0m5.396s 00:27:04.797 sys 0m11.686s 00:27:04.797 12:04:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:04.797 12:04:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:04.797 ************************************ 00:27:04.797 END TEST nvmf_identify_kernel_target 00:27:04.797 ************************************ 00:27:05.073 12:04:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:05.073 12:04:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:05.073 12:04:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:05.074 12:04:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.074 ************************************ 00:27:05.074 START TEST nvmf_auth_host 00:27:05.074 ************************************ 00:27:05.074 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:05.074 * Looking for test storage... 
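The configure_kernel_target/clean_kernel_target sequence traced above (00:26:58 and 00:27:00-00:27:04) exports /dev/nvme0n1 through the Linux kernel nvmet stack on 10.0.0.1:4420 for the identify test and then tears it down again. The xtrace output records the echo commands but not their redirection targets, so the configfs attribute names below (attr_model, attr_allow_any_host, device_path, enable, addr_*) are assumed from the standard nvmet configfs layout rather than read from this log; treat the block as an illustrative sketch, not the test script itself:

    nqn=nqn.2016-06.io.spdk:testnqn
    subsys=/sys/kernel/config/nvmet/subsystems/$nqn
    ns=$subsys/namespaces/1
    port=/sys/kernel/config/nvmet/ports/1

    modprobe nvmet
    mkdir "$subsys" "$ns" "$port"
    echo "SPDK-$nqn"  > "$subsys/attr_model"           # reported later as the Model Number in the identify output
    echo 1            > "$subsys/attr_allow_any_host"  # assumed attribute name
    echo /dev/nvme0n1 > "$ns/device_path"              # block device selected by the test
    echo 1            > "$ns/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"

    # Teardown, mirroring clean_kernel_target:
    echo 0 > "$ns/enable"
    rm -f "$port/subsystems/$nqn"
    rmdir "$ns" "$port" "$subsys"
    modprobe -r nvmet_tcp nvmet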
00:27:05.074 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:05.074 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:05.074 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:27:05.074 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:05.074 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:05.074 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:05.074 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:05.074 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:05.074 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:05.074 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:05.074 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:05.074 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:05.074 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:05.074 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:05.074 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:05.074 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:05.074 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:27:05.074 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:27:05.074 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:05.074 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:05.074 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:27:05.074 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:27:05.074 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:05.074 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:27:05.074 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:05.074 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:27:05.074 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:27:05.074 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:05.074 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:27:05.074 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:05.074 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:05.074 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:05.074 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:27:05.074 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:05.074 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:05.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:05.074 --rc genhtml_branch_coverage=1 00:27:05.074 --rc genhtml_function_coverage=1 00:27:05.074 --rc genhtml_legend=1 00:27:05.074 --rc geninfo_all_blocks=1 00:27:05.074 --rc geninfo_unexecuted_blocks=1 00:27:05.074 00:27:05.074 ' 00:27:05.074 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:05.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:05.074 --rc genhtml_branch_coverage=1 00:27:05.074 --rc genhtml_function_coverage=1 00:27:05.074 --rc genhtml_legend=1 00:27:05.074 --rc geninfo_all_blocks=1 00:27:05.074 --rc geninfo_unexecuted_blocks=1 00:27:05.074 00:27:05.074 ' 00:27:05.074 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:05.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:05.074 --rc genhtml_branch_coverage=1 00:27:05.074 --rc genhtml_function_coverage=1 00:27:05.074 --rc genhtml_legend=1 00:27:05.074 --rc geninfo_all_blocks=1 00:27:05.074 --rc geninfo_unexecuted_blocks=1 00:27:05.074 00:27:05.074 ' 00:27:05.074 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:05.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:05.075 --rc genhtml_branch_coverage=1 00:27:05.075 --rc genhtml_function_coverage=1 00:27:05.075 --rc genhtml_legend=1 00:27:05.075 --rc geninfo_all_blocks=1 00:27:05.075 --rc geninfo_unexecuted_blocks=1 00:27:05.075 00:27:05.075 ' 00:27:05.075 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:05.075 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:05.075 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:05.075 12:04:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:05.075 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:05.075 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:05.075 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:05.075 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:05.075 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:05.075 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:05.075 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:05.075 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:05.408 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:05.408 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:05.408 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:05.408 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:05.408 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:05.408 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:05.408 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:05.408 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:05.408 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:05.408 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:05.408 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:05.408 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.408 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.408 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.408 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:05.408 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.408 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:27:05.408 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:05.408 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:05.408 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:05.408 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:05.408 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:05.408 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:05.408 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:05.408 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:05.408 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:05.408 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:05.408 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:05.408 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:05.408 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:27:05.408 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:05.408 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:05.408 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:05.408 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:05.408 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:27:05.408 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:05.408 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:05.408 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:05.408 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:05.408 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:05.408 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:05.408 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:05.408 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:05.408 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:05.408 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:05.408 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:27:05.408 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:27:05.408 12:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.597 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:13.597 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:27:13.597 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:13.597 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:13.597 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:27:13.598 12:04:15 
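The host/auth.sh preamble above fixes the parameter space the test sweeps: three digests, five finite-field DH groups, and the subsystem/host NQNs. A sketch of that digest-by-dhgroup sweep follows; run_case is a hypothetical placeholder, not a function from the test itself.

```bash
#!/usr/bin/env bash
# Sketch of the parameter sweep implied by the host/auth.sh arrays above.
digests=("sha256" "sha384" "sha512")
dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")
subnqn=nqn.2024-02.io.spdk:cnode0
hostnqn=nqn.2024-02.io.spdk:host0

run_case() {   # placeholder: the real test reconfigures nvmet auth and reconnects here
    printf 'would test %s / %s against %s as %s\n' "$1" "$2" "$subnqn" "$hostnqn"
}

for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        run_case "$digest" "$dhgroup"
    done
done
```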
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:13.598 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:13.598 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:13.598 
12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:13.598 Found net devices under 0000:31:00.0: cvl_0_0 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:13.598 Found net devices under 0000:31:00.1: cvl_0_1 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # is_hw=yes 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:13.598 12:04:15 
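The device scan above resolves each supported PCI function (here two Intel 0x159b "ice" ports) to its kernel net device via sysfs before marking the run as hardware-backed. A minimal sketch of that lookup for one of the addresses that appears in the log:

```bash
#!/usr/bin/env bash
# Sketch of the sysfs lookup the trace performs for each NVMe-oF capable NIC:
# given a PCI address, list the net devices bound to that function.
pci=0000:31:00.0                                   # address taken from the log above

pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names

echo "Found net devices under $pci: ${pci_net_devs[*]}"
```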
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:13.598 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:13.598 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.633 ms 00:27:13.598 00:27:13.598 --- 10.0.0.2 ping statistics --- 00:27:13.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:13.598 rtt min/avg/max/mdev = 0.633/0.633/0.633/0.000 ms 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:13.598 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:13.598 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:27:13.598 00:27:13.598 --- 10.0.0.1 ping statistics --- 00:27:13.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:13.598 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # return 0 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:13.598 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:13.599 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:13.599 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:13.599 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:13.599 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:13.599 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:13.599 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:13.599 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:13.599 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:13.599 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.599 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # nvmfpid=2080289 00:27:13.599 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # waitforlisten 2080289 00:27:13.599 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:13.599 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 2080289 ']' 00:27:13.599 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:13.599 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:13.599 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
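nvmf_tcp_init, traced above, splits the two ports into a target side and an initiator side: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2, cvl_0_1 stays in the host namespace with 10.0.0.1, port 4420 is opened, and both directions are ping-checked. A condensed sketch of that sequence (requires root; interface names and addresses are the ones in the log):

```bash
#!/usr/bin/env bash
# Sketch of the target/initiator split performed by nvmf_tcp_init above.
set -e
target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk

ip -4 addr flush "$target_if"
ip -4 addr flush "$initiator_if"
ip netns add "$ns"
ip link set "$target_if" netns "$ns"                 # target port lives in the namespace
ip addr add 10.0.0.1/24 dev "$initiator_if"          # initiator side stays in the host
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
ip link set "$initiator_if" up
ip netns exec "$ns" ip link set "$target_if" up
ip netns exec "$ns" ip link set lo up
iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec "$ns" ping -c 1 10.0.0.1               # target -> initiator
```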
00:27:13.599 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:13.599 12:04:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.861 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:13.861 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:27:13.861 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:13.861 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:13.861 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.861 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:13.861 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:13.861 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:13.861 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:27:13.861 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:13.861 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:27:13.861 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:27:13.861 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:27:13.861 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:13.861 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=37515043882873c48de12e2f26345e4f 00:27:13.861 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:27:13.861 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.Ie1 00:27:13.861 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 37515043882873c48de12e2f26345e4f 0 00:27:13.861 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 37515043882873c48de12e2f26345e4f 0 00:27:13.861 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:27:13.861 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:27:13.861 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=37515043882873c48de12e2f26345e4f 00:27:13.861 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:27:13.861 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:27:13.861 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.Ie1 00:27:13.861 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.Ie1 00:27:13.861 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Ie1 00:27:13.861 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:13.861 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:27:13.861 12:04:16 
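The first gen_dhchap_key call above draws random material with xxd, writes a DHHC-1 secret to a mode-0600 temp file, and records it as keys[0]. The inline python that format_dhchap_key runs is not shown in the trace, so the encoding below (base64 of the key bytes plus a little-endian CRC32 trailer) is an assumption used to make the sketch self-contained, not the verbatim helper from nvmf/common.sh:

```bash
#!/usr/bin/env bash
# Sketch of gen_dhchap_key as traced above: random hex material from xxd,
# wrapped into a DHHC-1 secret and written to a mode-0600 temp file.
digest=0     # 0=null, 1=sha256, 2=sha384, 3=sha512 (mapping seen in the trace)
len=32
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # 32 hex characters
file=$(mktemp -t spdk.key-null.XXX)

secret=$(python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib
# Assumed encoding: base64(key-bytes || crc32(key-bytes)), framed as DHHC-1.
key, digest = sys.argv[1].encode(), int(sys.argv[2])
blob = key + zlib.crc32(key).to_bytes(4, "little")
print(f"DHHC-1:{digest:02}:{base64.b64encode(blob).decode()}:")
EOF
)
echo "$secret" > "$file"
chmod 0600 "$file"
echo "wrote $file"
```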
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:13.861 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:27:13.861 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:27:13.861 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:27:13.861 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:13.861 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=f36bc4976b7160c02139160d2ab2d34f1d2b848d910e45b51a5be080cf79b75e 00:27:13.861 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:27:13.861 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.TMQ 00:27:13.861 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key f36bc4976b7160c02139160d2ab2d34f1d2b848d910e45b51a5be080cf79b75e 3 00:27:13.861 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 f36bc4976b7160c02139160d2ab2d34f1d2b848d910e45b51a5be080cf79b75e 3 00:27:13.861 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:27:13.861 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:27:13.861 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=f36bc4976b7160c02139160d2ab2d34f1d2b848d910e45b51a5be080cf79b75e 00:27:13.861 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:27:13.861 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.TMQ 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.TMQ 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.TMQ 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=b93d0cf7111b05f9a7a5a8128e148b9b62c49b25ce900633 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.EKp 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key b93d0cf7111b05f9a7a5a8128e148b9b62c49b25ce900633 0 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 b93d0cf7111b05f9a7a5a8128e148b9b62c49b25ce900633 0 
00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=b93d0cf7111b05f9a7a5a8128e148b9b62c49b25ce900633 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.EKp 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.EKp 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.EKp 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=e66e2950320b1580d68df98212b479b2b5d81f3448b98ea0 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.idq 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key e66e2950320b1580d68df98212b479b2b5d81f3448b98ea0 2 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 e66e2950320b1580d68df98212b479b2b5d81f3448b98ea0 2 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=e66e2950320b1580d68df98212b479b2b5d81f3448b98ea0 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.idq 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.idq 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.idq 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:14.123 12:04:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=5f6501c1981ae53e3d2e69000ce76931 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.MUy 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 5f6501c1981ae53e3d2e69000ce76931 1 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 5f6501c1981ae53e3d2e69000ce76931 1 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=5f6501c1981ae53e3d2e69000ce76931 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.MUy 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.MUy 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.MUy 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:27:14.123 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:14.385 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=179421f80b7ee48e0030541d46ffc7f8 00:27:14.385 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:27:14.385 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.sm1 00:27:14.385 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 179421f80b7ee48e0030541d46ffc7f8 1 00:27:14.385 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 179421f80b7ee48e0030541d46ffc7f8 1 00:27:14.385 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:27:14.385 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:27:14.385 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # 
key=179421f80b7ee48e0030541d46ffc7f8 00:27:14.385 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:27:14.385 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:27:14.385 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.sm1 00:27:14.385 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.sm1 00:27:14.385 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.sm1 00:27:14.385 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:14.385 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:27:14.385 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:14.385 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:27:14.385 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:27:14.385 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:27:14.385 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:14.385 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=e8680efadd4ec2952a1329dae8a15cb1f7c5cd4e1c6ea303 00:27:14.385 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:27:14.385 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.CLE 00:27:14.385 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key e8680efadd4ec2952a1329dae8a15cb1f7c5cd4e1c6ea303 2 00:27:14.385 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 e8680efadd4ec2952a1329dae8a15cb1f7c5cd4e1c6ea303 2 00:27:14.385 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:27:14.385 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:27:14.385 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=e8680efadd4ec2952a1329dae8a15cb1f7c5cd4e1c6ea303 00:27:14.385 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:27:14.385 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:27:14.385 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.CLE 00:27:14.385 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.CLE 00:27:14.385 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.CLE 00:27:14.385 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:14.385 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:27:14.385 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:14.385 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:27:14.385 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:27:14.385 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:27:14.385 12:04:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:14.385 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=0aac8fe092a40c16fd4dcbf6145d7ecc 00:27:14.385 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:27:14.385 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.SsX 00:27:14.385 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 0aac8fe092a40c16fd4dcbf6145d7ecc 0 00:27:14.385 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 0aac8fe092a40c16fd4dcbf6145d7ecc 0 00:27:14.385 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:27:14.385 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:27:14.385 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=0aac8fe092a40c16fd4dcbf6145d7ecc 00:27:14.385 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:27:14.385 12:04:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:27:14.385 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.SsX 00:27:14.385 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.SsX 00:27:14.385 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.SsX 00:27:14.385 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:14.385 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:27:14.385 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:14.385 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:27:14.385 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:27:14.385 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:27:14.385 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:14.385 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=ba1b1220121eaedc46b5c760c80c8514819393616ee4815b0db824987b8bc08f 00:27:14.386 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:27:14.386 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.qLS 00:27:14.386 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key ba1b1220121eaedc46b5c760c80c8514819393616ee4815b0db824987b8bc08f 3 00:27:14.386 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 ba1b1220121eaedc46b5c760c80c8514819393616ee4815b0db824987b8bc08f 3 00:27:14.386 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:27:14.386 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:27:14.386 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=ba1b1220121eaedc46b5c760c80c8514819393616ee4815b0db824987b8bc08f 00:27:14.386 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:27:14.386 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@731 -- # python - 00:27:14.386 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.qLS 00:27:14.647 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.qLS 00:27:14.647 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.qLS 00:27:14.647 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:14.647 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2080289 00:27:14.647 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 2080289 ']' 00:27:14.647 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:14.647 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:14.647 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:14.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:14.647 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:14.647 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.647 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:14.647 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:27:14.647 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:14.647 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Ie1 00:27:14.647 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.647 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.647 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.648 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.TMQ ]] 00:27:14.648 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.TMQ 00:27:14.648 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.648 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.648 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.648 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:14.648 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.EKp 00:27:14.648 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.648 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.648 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.648 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.idq ]] 00:27:14.648 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.idq 00:27:14.648 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.648 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.648 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.648 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:14.648 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.MUy 00:27:14.648 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.648 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.909 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.909 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.sm1 ]] 00:27:14.909 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.sm1 00:27:14.909 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.909 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.909 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.909 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:14.909 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.CLE 00:27:14.909 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.909 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.909 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.909 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.SsX ]] 00:27:14.909 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.SsX 00:27:14.909 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.909 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.909 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.909 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:14.909 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.qLS 00:27:14.909 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.909 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.909 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.909 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:14.909 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:14.909 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:14.909 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:14.909 12:04:17 
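The trace above registers every generated secret file with the running nvmf_tgt as a named keyring entry (key0..key4 plus the controller keys ckey0..ckey3). rpc_cmd here is effectively a wrapper around scripts/rpc.py, so the same registrations could be issued directly; a sketch using the paths created earlier in the log:

```bash
#!/usr/bin/env bash
# Sketch of the key registration step traced above, issued with scripts/rpc.py
# instead of the test's rpc_cmd wrapper (default RPC socket assumed).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

"$rpc" keyring_file_add_key key0  /tmp/spdk.key-null.Ie1
"$rpc" keyring_file_add_key ckey0 /tmp/spdk.key-sha512.TMQ
"$rpc" keyring_file_add_key key1  /tmp/spdk.key-null.EKp
"$rpc" keyring_file_add_key ckey1 /tmp/spdk.key-sha384.idq
```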
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:14.909 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:14.909 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.909 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.909 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:14.909 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.909 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:14.909 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:14.909 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:14.909 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:14.909 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:14.909 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:27:14.909 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:14.909 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:14.909 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:14.909 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # local block nvme 00:27:14.909 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:14.909 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # modprobe nvmet 00:27:14.909 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:14.909 12:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:18.212 Waiting for block devices as requested 00:27:18.474 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:18.474 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:18.474 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:18.474 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:18.734 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:18.734 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:18.734 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:18.996 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:18.996 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:19.256 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:19.257 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:19.257 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:19.257 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:19.518 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:19.518 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:19.518 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:19.518 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:20.462 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:27:20.462 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:20.462 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:27:20.462 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:27:20.462 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:20.462 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:20.462 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:27:20.462 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:20.462 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:20.462 No valid GPT data, bailing 00:27:20.462 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:20.462 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:27:20.462 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:27:20.462 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:27:20.462 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:27:20.462 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:20.462 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:20.462 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:20.462 12:04:23 
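configure_kernel_target has just picked /dev/nvme0n1 as the backing device (after the GPT check bailed) and created the configfs directories for the subsystem, namespace 1 and port 1; the echoes that follow in the trace fill in the namespace and port attributes before the subsystem is linked into the port. A sketch of the whole sequence, where only the echoed values come from the log and the attribute file names are the standard kernel nvmet ones, filled in as assumptions:

```bash
#!/usr/bin/env bash
# Sketch of the kernel nvmet setup the trace performs through configfs.
# Assumes configfs is mounted at /sys/kernel/config and the script runs as root.
set -e
subnqn=nqn.2024-02.io.spdk:cnode0
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/$subnqn

modprobe nvmet
modprobe nvmet-tcp                                   # tcp transport (not shown in the trace)

mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$nvmet/ports/1"

echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"   # backing block device
echo 1            > "$subsys/namespaces/1/enable"

echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"             # listen address and transport
echo tcp      > "$nvmet/ports/1/addr_trtype"
echo 4420     > "$nvmet/ports/1/addr_trsvcid"
echo ipv4     > "$nvmet/ports/1/addr_adrfam"

ln -s "$subsys" "$nvmet/ports/1/subsystems/"             # expose the subsystem on the port
```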
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:20.462 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:27:20.462 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:27:20.462 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:27:20.462 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:27:20.462 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo tcp 00:27:20.462 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 4420 00:27:20.462 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo ipv4 00:27:20.462 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:20.723 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:27:20.723 00:27:20.723 Discovery Log Number of Records 2, Generation counter 2 00:27:20.723 =====Discovery Log Entry 0====== 00:27:20.723 trtype: tcp 00:27:20.723 adrfam: ipv4 00:27:20.723 subtype: current discovery subsystem 00:27:20.723 treq: not specified, sq flow control disable supported 00:27:20.723 portid: 1 00:27:20.723 trsvcid: 4420 00:27:20.723 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:20.723 traddr: 10.0.0.1 00:27:20.723 eflags: none 00:27:20.723 sectype: none 00:27:20.723 =====Discovery Log Entry 1====== 00:27:20.723 trtype: tcp 00:27:20.723 adrfam: ipv4 00:27:20.723 subtype: nvme subsystem 00:27:20.723 treq: not specified, sq flow control disable supported 00:27:20.723 portid: 1 00:27:20.724 trsvcid: 4420 00:27:20.724 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:20.724 traddr: 10.0.0.1 00:27:20.724 eflags: none 00:27:20.724 sectype: none 00:27:20.724 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:20.724 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:20.724 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:20.724 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:20.724 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.724 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:20.724 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:20.724 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:20.724 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkzZDBjZjcxMTFiMDVmOWE3YTVhODEyOGUxNDhiOWI2MmM0OWIyNWNlOTAwNjMzI5RO8A==: 00:27:20.724 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY2ZTI5NTAzMjBiMTU4MGQ2OGRmOTgyMTJiNDc5YjJiNWQ4MWYzNDQ4Yjk4ZWEwlq458g==: 00:27:20.724 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:20.724 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:27:20.724 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkzZDBjZjcxMTFiMDVmOWE3YTVhODEyOGUxNDhiOWI2MmM0OWIyNWNlOTAwNjMzI5RO8A==: 00:27:20.724 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY2ZTI5NTAzMjBiMTU4MGQ2OGRmOTgyMTJiNDc5YjJiNWQ4MWYzNDQ4Yjk4ZWEwlq458g==: ]] 00:27:20.724 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY2ZTI5NTAzMjBiMTU4MGQ2OGRmOTgyMTJiNDc5YjJiNWQ4MWYzNDQ4Yjk4ZWEwlq458g==: 00:27:20.724 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:20.724 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:20.724 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:20.724 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:20.724 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:20.724 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.724 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:20.724 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:20.724 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:20.724 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.724 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:20.724 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.724 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.724 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.724 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.724 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:20.724 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:20.724 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:20.724 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.724 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.724 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:20.724 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.724 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:20.724 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:20.724 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:20.724 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:20.724 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.724 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.986 nvme0n1 00:27:20.986 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.986 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.986 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.986 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.986 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.986 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.986 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.986 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.986 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.986 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.986 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.986 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:20.986 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:20.986 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.986 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:20.986 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.986 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:20.986 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:20.986 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:20.986 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzc1MTUwNDM4ODI4NzNjNDhkZTEyZTJmMjYzNDVlNGbneIph: 00:27:20.986 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjM2YmM0OTc2YjcxNjBjMDIxMzkxNjBkMmFiMmQzNGYxZDJiODQ4ZDkxMGU0NWI1MWE1YmUwODBjZjc5Yjc1ZfWOGww=: 00:27:20.986 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:20.986 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:20.986 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzc1MTUwNDM4ODI4NzNjNDhkZTEyZTJmMjYzNDVlNGbneIph: 00:27:20.986 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjM2YmM0OTc2YjcxNjBjMDIxMzkxNjBkMmFiMmQzNGYxZDJiODQ4ZDkxMGU0NWI1MWE1YmUwODBjZjc5Yjc1ZfWOGww=: ]] 00:27:20.986 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjM2YmM0OTc2YjcxNjBjMDIxMzkxNjBkMmFiMmQzNGYxZDJiODQ4ZDkxMGU0NWI1MWE1YmUwODBjZjc5Yjc1ZfWOGww=: 00:27:20.986 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
00:27:20.986 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.986 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:20.986 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:20.986 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:20.986 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.986 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:20.986 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.986 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.986 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.986 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.986 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:20.986 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:20.986 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:20.986 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.986 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.986 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:20.986 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.986 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:20.986 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:20.986 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:20.986 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:20.986 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.986 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.986 nvme0n1 00:27:20.986 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.986 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.986 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.986 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.986 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.986 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.248 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.248 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.248 12:04:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.248 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.248 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.248 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.248 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:21.248 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.248 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:21.248 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:21.248 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:21.248 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkzZDBjZjcxMTFiMDVmOWE3YTVhODEyOGUxNDhiOWI2MmM0OWIyNWNlOTAwNjMzI5RO8A==: 00:27:21.248 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY2ZTI5NTAzMjBiMTU4MGQ2OGRmOTgyMTJiNDc5YjJiNWQ4MWYzNDQ4Yjk4ZWEwlq458g==: 00:27:21.248 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:21.248 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:21.248 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkzZDBjZjcxMTFiMDVmOWE3YTVhODEyOGUxNDhiOWI2MmM0OWIyNWNlOTAwNjMzI5RO8A==: 00:27:21.249 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY2ZTI5NTAzMjBiMTU4MGQ2OGRmOTgyMTJiNDc5YjJiNWQ4MWYzNDQ4Yjk4ZWEwlq458g==: ]] 00:27:21.249 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY2ZTI5NTAzMjBiMTU4MGQ2OGRmOTgyMTJiNDc5YjJiNWQ4MWYzNDQ4Yjk4ZWEwlq458g==: 00:27:21.249 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:21.249 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.249 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:21.249 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:21.249 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:21.249 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.249 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:21.249 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.249 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.249 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.249 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.249 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:21.249 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:21.249 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:21.249 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.249 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.249 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:21.249 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.249 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:21.249 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:21.249 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:21.249 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:21.249 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.249 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.249 nvme0n1 00:27:21.249 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.249 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.249 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.249 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.249 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.249 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.249 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.249 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.249 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.249 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.511 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.511 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.511 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:21.511 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.511 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:21.511 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:21.511 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:21.511 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY2NTAxYzE5ODFhZTUzZTNkMmU2OTAwMGNlNzY5MzExh1g7: 00:27:21.511 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTc5NDIxZjgwYjdlZTQ4ZTAwMzA1NDFkNDZmZmM3Zjj8twwa: 00:27:21.511 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:21.511 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:21.511 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:NWY2NTAxYzE5ODFhZTUzZTNkMmU2OTAwMGNlNzY5MzExh1g7: 00:27:21.511 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTc5NDIxZjgwYjdlZTQ4ZTAwMzA1NDFkNDZmZmM3Zjj8twwa: ]] 00:27:21.511 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTc5NDIxZjgwYjdlZTQ4ZTAwMzA1NDFkNDZmZmM3Zjj8twwa: 00:27:21.511 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:21.511 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.511 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:21.511 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:21.511 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:21.511 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.511 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:21.511 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.511 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.511 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.511 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.511 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:21.511 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:21.511 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:21.511 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.511 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.511 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:21.511 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.511 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:21.511 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:21.511 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:21.511 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:21.511 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.511 12:04:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.511 nvme0n1 00:27:21.511 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.511 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.511 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.511 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:27:21.511 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.511 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.511 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.511 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.511 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.511 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.511 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.511 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.511 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:21.511 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.511 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:21.511 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:21.511 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:21.511 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTg2ODBlZmFkZDRlYzI5NTJhMTMyOWRhZThhMTVjYjFmN2M1Y2Q0ZTFjNmVhMzAzHKBfvQ==: 00:27:21.511 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGFhYzhmZTA5MmE0MGMxNmZkNGRjYmY2MTQ1ZDdlY2MDw7kf: 00:27:21.511 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:21.511 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:21.511 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTg2ODBlZmFkZDRlYzI5NTJhMTMyOWRhZThhMTVjYjFmN2M1Y2Q0ZTFjNmVhMzAzHKBfvQ==: 00:27:21.511 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGFhYzhmZTA5MmE0MGMxNmZkNGRjYmY2MTQ1ZDdlY2MDw7kf: ]] 00:27:21.511 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGFhYzhmZTA5MmE0MGMxNmZkNGRjYmY2MTQ1ZDdlY2MDw7kf: 00:27:21.511 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:21.511 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.511 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:21.511 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:21.511 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:21.512 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.512 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:21.512 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.512 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.512 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.512 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:21.772 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:21.772 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:21.772 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:21.772 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.772 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.772 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:21.772 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.772 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:21.772 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:21.772 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:21.772 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:21.772 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.773 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.773 nvme0n1 00:27:21.773 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.773 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.773 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.773 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.773 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.773 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.773 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.773 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.773 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.773 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.773 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.773 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.773 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:21.773 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.773 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:21.773 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:21.773 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:21.773 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YmExYjEyMjAxMjFlYWVkYzQ2YjVjNzYwYzgwYzg1MTQ4MTkzOTM2MTZlZTQ4MTViMGRiODI0OTg3YjhiYzA4Zg2II6E=: 00:27:21.773 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:21.773 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:21.773 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:21.773 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmExYjEyMjAxMjFlYWVkYzQ2YjVjNzYwYzgwYzg1MTQ4MTkzOTM2MTZlZTQ4MTViMGRiODI0OTg3YjhiYzA4Zg2II6E=: 00:27:21.773 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:21.773 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:21.773 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.773 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:21.773 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:21.773 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:21.773 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.773 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:21.773 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.773 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.773 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.773 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.773 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:21.773 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:21.773 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:21.773 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.773 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.773 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:21.773 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.773 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:21.773 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:21.773 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:21.773 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:21.773 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.773 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.034 nvme0n1 00:27:22.034 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.034 12:04:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.034 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.034 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.034 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.034 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.034 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.034 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.034 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.034 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.034 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.034 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:22.034 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.034 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:22.034 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.034 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:22.034 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:22.034 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:22.034 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzc1MTUwNDM4ODI4NzNjNDhkZTEyZTJmMjYzNDVlNGbneIph: 00:27:22.034 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjM2YmM0OTc2YjcxNjBjMDIxMzkxNjBkMmFiMmQzNGYxZDJiODQ4ZDkxMGU0NWI1MWE1YmUwODBjZjc5Yjc1ZfWOGww=: 00:27:22.034 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:22.034 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:22.034 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzc1MTUwNDM4ODI4NzNjNDhkZTEyZTJmMjYzNDVlNGbneIph: 00:27:22.034 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjM2YmM0OTc2YjcxNjBjMDIxMzkxNjBkMmFiMmQzNGYxZDJiODQ4ZDkxMGU0NWI1MWE1YmUwODBjZjc5Yjc1ZfWOGww=: ]] 00:27:22.034 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjM2YmM0OTc2YjcxNjBjMDIxMzkxNjBkMmFiMmQzNGYxZDJiODQ4ZDkxMGU0NWI1MWE1YmUwODBjZjc5Yjc1ZfWOGww=: 00:27:22.034 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:22.034 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.034 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:22.034 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:22.034 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:22.034 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.034 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:22.034 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.034 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.034 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.034 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.034 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:22.034 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:22.034 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:22.034 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.034 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.034 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:22.034 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.034 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:22.034 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:22.034 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:22.034 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:22.034 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.034 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.295 nvme0n1 00:27:22.295 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.295 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.295 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.295 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.295 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.295 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.295 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.295 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.295 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.295 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.295 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.295 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.295 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:22.295 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:27:22.295 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:22.295 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:22.295 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:22.295 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkzZDBjZjcxMTFiMDVmOWE3YTVhODEyOGUxNDhiOWI2MmM0OWIyNWNlOTAwNjMzI5RO8A==: 00:27:22.295 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY2ZTI5NTAzMjBiMTU4MGQ2OGRmOTgyMTJiNDc5YjJiNWQ4MWYzNDQ4Yjk4ZWEwlq458g==: 00:27:22.295 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:22.295 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:22.295 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkzZDBjZjcxMTFiMDVmOWE3YTVhODEyOGUxNDhiOWI2MmM0OWIyNWNlOTAwNjMzI5RO8A==: 00:27:22.295 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY2ZTI5NTAzMjBiMTU4MGQ2OGRmOTgyMTJiNDc5YjJiNWQ4MWYzNDQ4Yjk4ZWEwlq458g==: ]] 00:27:22.296 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY2ZTI5NTAzMjBiMTU4MGQ2OGRmOTgyMTJiNDc5YjJiNWQ4MWYzNDQ4Yjk4ZWEwlq458g==: 00:27:22.296 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:22.296 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.296 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:22.296 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:22.296 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:22.296 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.296 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:22.296 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.296 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.296 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.296 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.296 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:22.296 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:22.296 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:22.296 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.296 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.296 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:22.296 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.296 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:22.296 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:22.296 
12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:22.296 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:22.296 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.296 12:04:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.558 nvme0n1 00:27:22.558 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.558 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.558 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.558 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.558 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.558 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.558 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.558 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.558 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.558 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.558 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.558 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.558 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:22.558 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.558 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:22.558 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:22.558 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:22.558 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY2NTAxYzE5ODFhZTUzZTNkMmU2OTAwMGNlNzY5MzExh1g7: 00:27:22.558 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTc5NDIxZjgwYjdlZTQ4ZTAwMzA1NDFkNDZmZmM3Zjj8twwa: 00:27:22.558 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:22.558 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:22.558 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY2NTAxYzE5ODFhZTUzZTNkMmU2OTAwMGNlNzY5MzExh1g7: 00:27:22.558 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTc5NDIxZjgwYjdlZTQ4ZTAwMzA1NDFkNDZmZmM3Zjj8twwa: ]] 00:27:22.558 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTc5NDIxZjgwYjdlZTQ4ZTAwMzA1NDFkNDZmZmM3Zjj8twwa: 00:27:22.558 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:22.558 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.558 12:04:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:22.558 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:22.558 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:22.558 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.558 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:22.558 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.558 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.558 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.558 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.558 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:22.558 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:22.558 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:22.558 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.558 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.558 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:22.558 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.558 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:22.558 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:22.558 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:22.558 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:22.558 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.558 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.820 nvme0n1 00:27:22.820 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.820 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.820 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.820 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.820 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.820 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.820 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.820 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.820 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.820 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:22.820 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.820 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.820 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:22.820 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.820 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:22.820 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:22.820 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:22.820 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTg2ODBlZmFkZDRlYzI5NTJhMTMyOWRhZThhMTVjYjFmN2M1Y2Q0ZTFjNmVhMzAzHKBfvQ==: 00:27:22.820 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGFhYzhmZTA5MmE0MGMxNmZkNGRjYmY2MTQ1ZDdlY2MDw7kf: 00:27:22.820 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:22.820 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:22.820 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTg2ODBlZmFkZDRlYzI5NTJhMTMyOWRhZThhMTVjYjFmN2M1Y2Q0ZTFjNmVhMzAzHKBfvQ==: 00:27:22.820 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGFhYzhmZTA5MmE0MGMxNmZkNGRjYmY2MTQ1ZDdlY2MDw7kf: ]] 00:27:22.820 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGFhYzhmZTA5MmE0MGMxNmZkNGRjYmY2MTQ1ZDdlY2MDw7kf: 00:27:22.820 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:22.821 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.821 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:22.821 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:22.821 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:22.821 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.821 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:22.821 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.821 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.821 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.821 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.821 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:22.821 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:22.821 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:22.821 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.821 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.821 12:04:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:22.821 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.821 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:22.821 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:22.821 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:22.821 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:22.821 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.821 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.082 nvme0n1 00:27:23.082 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.082 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.082 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.082 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.082 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.082 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.082 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.082 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.082 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.082 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.082 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.082 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.082 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:23.082 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.082 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:23.082 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:23.082 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:23.082 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmExYjEyMjAxMjFlYWVkYzQ2YjVjNzYwYzgwYzg1MTQ4MTkzOTM2MTZlZTQ4MTViMGRiODI0OTg3YjhiYzA4Zg2II6E=: 00:27:23.082 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:23.082 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:23.082 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:23.082 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmExYjEyMjAxMjFlYWVkYzQ2YjVjNzYwYzgwYzg1MTQ4MTkzOTM2MTZlZTQ4MTViMGRiODI0OTg3YjhiYzA4Zg2II6E=: 00:27:23.082 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:23.082 12:04:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:23.082 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.082 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:23.082 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:23.082 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:23.082 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.082 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:23.082 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.082 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.082 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.082 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.082 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:23.082 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:23.082 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:23.082 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.082 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.082 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:23.082 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.082 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:23.082 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:23.082 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:23.082 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:23.082 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.082 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.343 nvme0n1 00:27:23.343 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.343 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.343 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.343 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.343 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.343 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.343 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.343 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:23.343 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.343 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.343 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.343 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:23.343 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.343 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:23.343 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.343 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:23.343 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:23.343 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:23.343 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzc1MTUwNDM4ODI4NzNjNDhkZTEyZTJmMjYzNDVlNGbneIph: 00:27:23.343 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjM2YmM0OTc2YjcxNjBjMDIxMzkxNjBkMmFiMmQzNGYxZDJiODQ4ZDkxMGU0NWI1MWE1YmUwODBjZjc5Yjc1ZfWOGww=: 00:27:23.343 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:23.343 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:23.343 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzc1MTUwNDM4ODI4NzNjNDhkZTEyZTJmMjYzNDVlNGbneIph: 00:27:23.343 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjM2YmM0OTc2YjcxNjBjMDIxMzkxNjBkMmFiMmQzNGYxZDJiODQ4ZDkxMGU0NWI1MWE1YmUwODBjZjc5Yjc1ZfWOGww=: ]] 00:27:23.343 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjM2YmM0OTc2YjcxNjBjMDIxMzkxNjBkMmFiMmQzNGYxZDJiODQ4ZDkxMGU0NWI1MWE1YmUwODBjZjc5Yjc1ZfWOGww=: 00:27:23.343 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:23.343 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.343 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:23.343 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:23.343 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:23.343 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.343 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:23.343 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.343 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.343 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.343 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.343 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:23.343 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates=() 00:27:23.343 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:23.343 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.343 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.343 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:23.343 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.343 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:23.343 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:23.343 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:23.343 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:23.343 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.343 12:04:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.603 nvme0n1 00:27:23.603 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.603 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.603 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.603 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.603 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.603 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.603 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.603 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.603 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.603 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.864 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.864 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.864 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:23.864 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.864 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:23.864 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:23.864 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:23.864 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkzZDBjZjcxMTFiMDVmOWE3YTVhODEyOGUxNDhiOWI2MmM0OWIyNWNlOTAwNjMzI5RO8A==: 00:27:23.864 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY2ZTI5NTAzMjBiMTU4MGQ2OGRmOTgyMTJiNDc5YjJiNWQ4MWYzNDQ4Yjk4ZWEwlq458g==: 00:27:23.864 12:04:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:23.864 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:23.864 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkzZDBjZjcxMTFiMDVmOWE3YTVhODEyOGUxNDhiOWI2MmM0OWIyNWNlOTAwNjMzI5RO8A==: 00:27:23.864 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY2ZTI5NTAzMjBiMTU4MGQ2OGRmOTgyMTJiNDc5YjJiNWQ4MWYzNDQ4Yjk4ZWEwlq458g==: ]] 00:27:23.864 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY2ZTI5NTAzMjBiMTU4MGQ2OGRmOTgyMTJiNDc5YjJiNWQ4MWYzNDQ4Yjk4ZWEwlq458g==: 00:27:23.864 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:23.864 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.864 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:23.864 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:23.864 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:23.864 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.864 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:23.864 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.864 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.864 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.864 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.864 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:23.864 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:23.864 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:23.864 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.864 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.864 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:23.864 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.864 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:23.864 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:23.864 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:23.864 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:23.864 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.864 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.126 nvme0n1 00:27:24.126 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:27:24.126 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.126 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.126 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.126 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.126 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.126 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.126 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.126 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.126 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.126 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.126 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.126 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:24.126 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.126 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:24.126 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:24.126 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:24.126 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY2NTAxYzE5ODFhZTUzZTNkMmU2OTAwMGNlNzY5MzExh1g7: 00:27:24.126 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTc5NDIxZjgwYjdlZTQ4ZTAwMzA1NDFkNDZmZmM3Zjj8twwa: 00:27:24.126 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:24.126 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:24.126 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY2NTAxYzE5ODFhZTUzZTNkMmU2OTAwMGNlNzY5MzExh1g7: 00:27:24.126 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTc5NDIxZjgwYjdlZTQ4ZTAwMzA1NDFkNDZmZmM3Zjj8twwa: ]] 00:27:24.126 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTc5NDIxZjgwYjdlZTQ4ZTAwMzA1NDFkNDZmZmM3Zjj8twwa: 00:27:24.126 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:24.126 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.126 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:24.126 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:24.126 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:24.126 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.126 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:24.126 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:27:24.126 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.126 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.126 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.126 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:24.126 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:24.126 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:24.126 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.126 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.126 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:24.126 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.126 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:24.126 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:24.126 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:24.126 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:24.126 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.126 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.387 nvme0n1 00:27:24.387 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.387 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.387 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.387 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.387 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.387 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.387 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.387 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.387 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.387 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.387 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.387 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.387 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:24.387 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.387 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:24.387 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:27:24.387 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:24.387 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTg2ODBlZmFkZDRlYzI5NTJhMTMyOWRhZThhMTVjYjFmN2M1Y2Q0ZTFjNmVhMzAzHKBfvQ==: 00:27:24.387 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGFhYzhmZTA5MmE0MGMxNmZkNGRjYmY2MTQ1ZDdlY2MDw7kf: 00:27:24.388 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:24.388 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:24.388 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTg2ODBlZmFkZDRlYzI5NTJhMTMyOWRhZThhMTVjYjFmN2M1Y2Q0ZTFjNmVhMzAzHKBfvQ==: 00:27:24.388 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGFhYzhmZTA5MmE0MGMxNmZkNGRjYmY2MTQ1ZDdlY2MDw7kf: ]] 00:27:24.388 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGFhYzhmZTA5MmE0MGMxNmZkNGRjYmY2MTQ1ZDdlY2MDw7kf: 00:27:24.388 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:24.388 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.388 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:24.388 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:24.388 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:24.388 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.388 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:24.388 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.388 12:04:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.388 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.388 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.388 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:24.388 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:24.388 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:24.388 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.388 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.388 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:24.388 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.388 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:24.388 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:24.388 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:24.388 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:24.388 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.388 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.649 nvme0n1 00:27:24.649 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.649 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.649 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.649 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.649 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.649 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.649 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.649 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.649 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.649 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.649 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.649 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.649 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:24.649 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.649 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:24.649 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:24.649 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:24.649 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmExYjEyMjAxMjFlYWVkYzQ2YjVjNzYwYzgwYzg1MTQ4MTkzOTM2MTZlZTQ4MTViMGRiODI0OTg3YjhiYzA4Zg2II6E=: 00:27:24.649 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:24.649 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:24.649 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:24.649 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmExYjEyMjAxMjFlYWVkYzQ2YjVjNzYwYzgwYzg1MTQ4MTkzOTM2MTZlZTQ4MTViMGRiODI0OTg3YjhiYzA4Zg2II6E=: 00:27:24.649 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:24.649 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:24.649 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.649 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:24.649 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:24.649 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:24.649 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.649 12:04:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:24.649 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.649 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.649 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.649 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.649 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:24.649 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:24.649 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:24.649 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.649 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.649 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:24.649 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.649 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:24.649 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:24.649 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:24.649 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:24.649 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.649 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.910 nvme0n1 00:27:24.910 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.910 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.910 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.910 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.910 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.170 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.170 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.170 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.170 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.170 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.170 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.170 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:25.170 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.170 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:27:25.170 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.170 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:25.170 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:25.170 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:25.170 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzc1MTUwNDM4ODI4NzNjNDhkZTEyZTJmMjYzNDVlNGbneIph: 00:27:25.170 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjM2YmM0OTc2YjcxNjBjMDIxMzkxNjBkMmFiMmQzNGYxZDJiODQ4ZDkxMGU0NWI1MWE1YmUwODBjZjc5Yjc1ZfWOGww=: 00:27:25.170 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:25.170 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:25.170 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzc1MTUwNDM4ODI4NzNjNDhkZTEyZTJmMjYzNDVlNGbneIph: 00:27:25.170 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjM2YmM0OTc2YjcxNjBjMDIxMzkxNjBkMmFiMmQzNGYxZDJiODQ4ZDkxMGU0NWI1MWE1YmUwODBjZjc5Yjc1ZfWOGww=: ]] 00:27:25.170 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjM2YmM0OTc2YjcxNjBjMDIxMzkxNjBkMmFiMmQzNGYxZDJiODQ4ZDkxMGU0NWI1MWE1YmUwODBjZjc5Yjc1ZfWOGww=: 00:27:25.170 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:25.170 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.170 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:25.170 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:25.170 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:25.170 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.170 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:25.171 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.171 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.171 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.171 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.171 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:25.171 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:25.171 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:25.171 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.171 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.171 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:25.171 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.171 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # 
ip=NVMF_INITIATOR_IP 00:27:25.171 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:25.171 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:25.171 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:25.171 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.171 12:04:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.431 nvme0n1 00:27:25.431 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.431 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.431 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.432 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.432 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.432 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.697 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.697 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.697 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.697 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.697 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.697 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.697 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:25.697 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.697 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:25.697 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:25.697 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:25.697 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkzZDBjZjcxMTFiMDVmOWE3YTVhODEyOGUxNDhiOWI2MmM0OWIyNWNlOTAwNjMzI5RO8A==: 00:27:25.697 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY2ZTI5NTAzMjBiMTU4MGQ2OGRmOTgyMTJiNDc5YjJiNWQ4MWYzNDQ4Yjk4ZWEwlq458g==: 00:27:25.697 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:25.697 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:25.697 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkzZDBjZjcxMTFiMDVmOWE3YTVhODEyOGUxNDhiOWI2MmM0OWIyNWNlOTAwNjMzI5RO8A==: 00:27:25.697 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY2ZTI5NTAzMjBiMTU4MGQ2OGRmOTgyMTJiNDc5YjJiNWQ4MWYzNDQ4Yjk4ZWEwlq458g==: ]] 00:27:25.697 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY2ZTI5NTAzMjBiMTU4MGQ2OGRmOTgyMTJiNDc5YjJiNWQ4MWYzNDQ4Yjk4ZWEwlq458g==: 
00:27:25.697 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:25.697 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.697 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:25.697 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:25.697 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:25.697 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.697 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:25.697 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.697 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.697 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.697 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.697 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:25.697 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:25.697 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:25.697 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.697 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.697 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:25.697 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.697 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:25.697 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:25.697 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:25.697 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:25.697 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.697 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.959 nvme0n1 00:27:25.959 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.959 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.959 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.959 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.959 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.959 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.959 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.959 12:04:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.959 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.959 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.959 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.959 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.959 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:25.959 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.959 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:25.959 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:25.959 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:25.959 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY2NTAxYzE5ODFhZTUzZTNkMmU2OTAwMGNlNzY5MzExh1g7: 00:27:25.959 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTc5NDIxZjgwYjdlZTQ4ZTAwMzA1NDFkNDZmZmM3Zjj8twwa: 00:27:25.959 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:25.959 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:25.959 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY2NTAxYzE5ODFhZTUzZTNkMmU2OTAwMGNlNzY5MzExh1g7: 00:27:25.959 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTc5NDIxZjgwYjdlZTQ4ZTAwMzA1NDFkNDZmZmM3Zjj8twwa: ]] 00:27:25.959 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTc5NDIxZjgwYjdlZTQ4ZTAwMzA1NDFkNDZmZmM3Zjj8twwa: 00:27:25.959 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:25.959 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.959 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:25.959 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:25.959 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:25.960 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.960 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:25.960 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.960 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.220 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.220 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.220 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:26.220 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:26.220 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:26.220 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.220 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.220 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:26.220 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.220 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:26.220 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:26.220 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:26.220 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:26.220 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.220 12:04:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.480 nvme0n1 00:27:26.480 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.480 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.480 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.480 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.480 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.480 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.480 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.480 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.480 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.480 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.480 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.480 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.480 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:26.480 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.480 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:26.480 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:26.480 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:26.480 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTg2ODBlZmFkZDRlYzI5NTJhMTMyOWRhZThhMTVjYjFmN2M1Y2Q0ZTFjNmVhMzAzHKBfvQ==: 00:27:26.480 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGFhYzhmZTA5MmE0MGMxNmZkNGRjYmY2MTQ1ZDdlY2MDw7kf: 00:27:26.481 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:26.481 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:26.481 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:ZTg2ODBlZmFkZDRlYzI5NTJhMTMyOWRhZThhMTVjYjFmN2M1Y2Q0ZTFjNmVhMzAzHKBfvQ==: 00:27:26.481 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGFhYzhmZTA5MmE0MGMxNmZkNGRjYmY2MTQ1ZDdlY2MDw7kf: ]] 00:27:26.481 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGFhYzhmZTA5MmE0MGMxNmZkNGRjYmY2MTQ1ZDdlY2MDw7kf: 00:27:26.481 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:26.481 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.481 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:26.481 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:26.481 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:26.481 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.481 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:26.481 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.481 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.481 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.481 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.481 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:26.481 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:26.481 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:26.481 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.481 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.481 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:26.481 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.481 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:26.481 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:26.481 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:26.481 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:26.481 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.481 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.051 nvme0n1 00:27:27.051 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.051 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.051 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.051 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.051 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.051 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.051 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.051 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.051 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.051 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.051 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.051 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.051 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:27.051 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.051 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:27.051 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:27.051 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:27.051 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmExYjEyMjAxMjFlYWVkYzQ2YjVjNzYwYzgwYzg1MTQ4MTkzOTM2MTZlZTQ4MTViMGRiODI0OTg3YjhiYzA4Zg2II6E=: 00:27:27.051 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:27.051 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:27.051 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:27.051 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmExYjEyMjAxMjFlYWVkYzQ2YjVjNzYwYzgwYzg1MTQ4MTkzOTM2MTZlZTQ4MTViMGRiODI0OTg3YjhiYzA4Zg2II6E=: 00:27:27.051 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:27.051 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:27:27.051 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.051 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:27.051 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:27.051 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:27.051 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.051 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:27.051 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.051 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.051 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.051 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.051 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:27.051 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@768 -- # ip_candidates=() 00:27:27.051 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:27.051 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.051 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.051 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:27.051 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.051 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:27.051 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:27.051 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:27.051 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:27.051 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.051 12:04:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.623 nvme0n1 00:27:27.623 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.623 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.623 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.623 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.623 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.623 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.623 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.623 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.623 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.623 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.623 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.623 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:27.623 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.623 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:27.623 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.623 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:27.623 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:27.623 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:27.623 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzc1MTUwNDM4ODI4NzNjNDhkZTEyZTJmMjYzNDVlNGbneIph: 00:27:27.623 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZjM2YmM0OTc2YjcxNjBjMDIxMzkxNjBkMmFiMmQzNGYxZDJiODQ4ZDkxMGU0NWI1MWE1YmUwODBjZjc5Yjc1ZfWOGww=: 00:27:27.623 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:27.623 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:27.623 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzc1MTUwNDM4ODI4NzNjNDhkZTEyZTJmMjYzNDVlNGbneIph: 00:27:27.623 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjM2YmM0OTc2YjcxNjBjMDIxMzkxNjBkMmFiMmQzNGYxZDJiODQ4ZDkxMGU0NWI1MWE1YmUwODBjZjc5Yjc1ZfWOGww=: ]] 00:27:27.623 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjM2YmM0OTc2YjcxNjBjMDIxMzkxNjBkMmFiMmQzNGYxZDJiODQ4ZDkxMGU0NWI1MWE1YmUwODBjZjc5Yjc1ZfWOGww=: 00:27:27.623 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:27.623 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.623 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:27.623 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:27.623 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:27.623 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.623 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:27.623 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.623 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.623 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.623 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.623 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:27.623 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:27.623 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:27.623 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.623 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.623 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:27.623 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.623 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:27.623 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:27.623 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:27.623 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:27.623 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.623 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:28.194 nvme0n1 00:27:28.194 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.194 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.194 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.194 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.194 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.194 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.194 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.194 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.194 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.194 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.194 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.194 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.194 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:28.194 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.194 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:28.194 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:28.194 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:28.194 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkzZDBjZjcxMTFiMDVmOWE3YTVhODEyOGUxNDhiOWI2MmM0OWIyNWNlOTAwNjMzI5RO8A==: 00:27:28.194 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY2ZTI5NTAzMjBiMTU4MGQ2OGRmOTgyMTJiNDc5YjJiNWQ4MWYzNDQ4Yjk4ZWEwlq458g==: 00:27:28.194 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:28.194 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:28.194 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkzZDBjZjcxMTFiMDVmOWE3YTVhODEyOGUxNDhiOWI2MmM0OWIyNWNlOTAwNjMzI5RO8A==: 00:27:28.194 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY2ZTI5NTAzMjBiMTU4MGQ2OGRmOTgyMTJiNDc5YjJiNWQ4MWYzNDQ4Yjk4ZWEwlq458g==: ]] 00:27:28.194 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY2ZTI5NTAzMjBiMTU4MGQ2OGRmOTgyMTJiNDc5YjJiNWQ4MWYzNDQ4Yjk4ZWEwlq458g==: 00:27:28.194 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:28.194 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.194 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:28.194 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:28.194 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:28.194 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:28.194 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:28.194 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.194 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.454 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.454 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.454 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:28.454 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:28.455 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:28.455 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.455 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.455 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:28.455 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.455 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:28.455 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:28.455 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:28.455 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:28.455 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.455 12:04:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.025 nvme0n1 00:27:29.025 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.025 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.025 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.025 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.025 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.025 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.025 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.025 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.025 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.025 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.025 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.025 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.025 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:29.025 
12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.025 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:29.025 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:29.025 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:29.025 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY2NTAxYzE5ODFhZTUzZTNkMmU2OTAwMGNlNzY5MzExh1g7: 00:27:29.025 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTc5NDIxZjgwYjdlZTQ4ZTAwMzA1NDFkNDZmZmM3Zjj8twwa: 00:27:29.025 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:29.025 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:29.025 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY2NTAxYzE5ODFhZTUzZTNkMmU2OTAwMGNlNzY5MzExh1g7: 00:27:29.025 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTc5NDIxZjgwYjdlZTQ4ZTAwMzA1NDFkNDZmZmM3Zjj8twwa: ]] 00:27:29.025 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTc5NDIxZjgwYjdlZTQ4ZTAwMzA1NDFkNDZmZmM3Zjj8twwa: 00:27:29.025 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:29.025 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.025 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:29.025 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:29.025 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:29.025 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.025 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:29.025 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.025 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.025 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.025 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.025 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:29.025 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:29.025 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:29.025 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.025 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.025 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:29.025 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.025 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:29.025 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:29.025 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:29.025 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:29.025 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.025 12:04:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.595 nvme0n1 00:27:29.595 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.595 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.595 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.595 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.595 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.595 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.856 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.856 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.856 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.856 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.856 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.856 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.856 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:29.856 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.856 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:29.856 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:29.856 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:29.856 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTg2ODBlZmFkZDRlYzI5NTJhMTMyOWRhZThhMTVjYjFmN2M1Y2Q0ZTFjNmVhMzAzHKBfvQ==: 00:27:29.856 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGFhYzhmZTA5MmE0MGMxNmZkNGRjYmY2MTQ1ZDdlY2MDw7kf: 00:27:29.856 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:29.856 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:29.856 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTg2ODBlZmFkZDRlYzI5NTJhMTMyOWRhZThhMTVjYjFmN2M1Y2Q0ZTFjNmVhMzAzHKBfvQ==: 00:27:29.856 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGFhYzhmZTA5MmE0MGMxNmZkNGRjYmY2MTQ1ZDdlY2MDw7kf: ]] 00:27:29.856 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGFhYzhmZTA5MmE0MGMxNmZkNGRjYmY2MTQ1ZDdlY2MDw7kf: 00:27:29.856 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:29.856 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.856 
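The repeated nvmf/common.sh@767-781 block in the trace is the helper that resolves the target address (here 10.0.0.1). A rough reconstruction from the expanded xtrace lines follows; the variable holding the active transport is written as $TEST_TRANSPORT here as an assumption, since the trace only shows its expanded value (tcp), and the exact upstream wording may differ.

# Sketch of get_main_ns_ip, reconstructed from the trace above.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    # Pick the candidate variable name for the active transport, then
    # dereference it; in this run it resolves to 10.0.0.1.
    [[ -z $TEST_TRANSPORT ]] && return 1
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip} ]] && return 1
    echo "${!ip}"
}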
12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:29.856 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:29.856 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:29.856 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.856 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:29.856 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.856 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.856 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.856 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.856 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:29.856 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:29.856 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:29.856 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.856 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.856 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:29.856 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.856 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:29.856 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:29.856 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:29.856 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:29.856 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.856 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.426 nvme0n1 00:27:30.426 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.426 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.426 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.426 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.426 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.426 12:04:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.426 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.426 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.426 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.426 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:30.426 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.426 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.426 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:30.426 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.426 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:30.426 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:30.426 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:30.426 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmExYjEyMjAxMjFlYWVkYzQ2YjVjNzYwYzgwYzg1MTQ4MTkzOTM2MTZlZTQ4MTViMGRiODI0OTg3YjhiYzA4Zg2II6E=: 00:27:30.426 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:30.426 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:30.426 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:30.426 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmExYjEyMjAxMjFlYWVkYzQ2YjVjNzYwYzgwYzg1MTQ4MTkzOTM2MTZlZTQ4MTViMGRiODI0OTg3YjhiYzA4Zg2II6E=: 00:27:30.426 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:30.426 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:30.426 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.426 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:30.426 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:30.426 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:30.426 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.426 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:30.426 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.426 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.426 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.426 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.426 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:30.426 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:30.426 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:30.426 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.426 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.426 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:30.426 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.426 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:30.426 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:30.426 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:30.426 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:30.426 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.426 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.366 nvme0n1 00:27:31.366 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.366 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.366 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.366 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.366 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.366 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.366 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.366 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.366 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.366 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.366 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.366 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:31.366 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:31.366 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.366 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:27:31.366 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.366 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:31.366 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:31.366 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:31.366 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzc1MTUwNDM4ODI4NzNjNDhkZTEyZTJmMjYzNDVlNGbneIph: 00:27:31.366 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjM2YmM0OTc2YjcxNjBjMDIxMzkxNjBkMmFiMmQzNGYxZDJiODQ4ZDkxMGU0NWI1MWE1YmUwODBjZjc5Yjc1ZfWOGww=: 00:27:31.366 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:31.366 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:31.366 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzc1MTUwNDM4ODI4NzNjNDhkZTEyZTJmMjYzNDVlNGbneIph: 00:27:31.366 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ZjM2YmM0OTc2YjcxNjBjMDIxMzkxNjBkMmFiMmQzNGYxZDJiODQ4ZDkxMGU0NWI1MWE1YmUwODBjZjc5Yjc1ZfWOGww=: ]] 00:27:31.366 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjM2YmM0OTc2YjcxNjBjMDIxMzkxNjBkMmFiMmQzNGYxZDJiODQ4ZDkxMGU0NWI1MWE1YmUwODBjZjc5Yjc1ZfWOGww=: 00:27:31.366 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:31.366 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.366 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:31.366 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:31.366 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:31.366 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.367 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:31.367 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.367 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.367 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.367 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.367 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:31.367 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:31.367 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:31.367 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.367 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.367 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:31.367 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.367 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:31.367 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:31.367 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:31.367 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:31.367 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.367 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.367 nvme0n1 00:27:31.367 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.367 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.367 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.367 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.367 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:31.367 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.367 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.367 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.367 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.367 12:04:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.367 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.367 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.367 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:31.367 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.367 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:31.367 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:31.367 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:31.367 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkzZDBjZjcxMTFiMDVmOWE3YTVhODEyOGUxNDhiOWI2MmM0OWIyNWNlOTAwNjMzI5RO8A==: 00:27:31.367 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY2ZTI5NTAzMjBiMTU4MGQ2OGRmOTgyMTJiNDc5YjJiNWQ4MWYzNDQ4Yjk4ZWEwlq458g==: 00:27:31.367 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:31.367 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:31.367 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkzZDBjZjcxMTFiMDVmOWE3YTVhODEyOGUxNDhiOWI2MmM0OWIyNWNlOTAwNjMzI5RO8A==: 00:27:31.367 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY2ZTI5NTAzMjBiMTU4MGQ2OGRmOTgyMTJiNDc5YjJiNWQ4MWYzNDQ4Yjk4ZWEwlq458g==: ]] 00:27:31.367 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY2ZTI5NTAzMjBiMTU4MGQ2OGRmOTgyMTJiNDc5YjJiNWQ4MWYzNDQ4Yjk4ZWEwlq458g==: 00:27:31.367 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:31.367 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.367 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:31.367 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:31.367 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:31.367 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.367 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:31.367 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.367 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.367 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.367 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:31.367 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:31.367 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:31.367 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:31.367 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.367 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.367 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:31.367 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.367 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:31.367 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:31.367 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:31.367 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:31.367 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.367 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.628 nvme0n1 00:27:31.628 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.628 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.628 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.628 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.628 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.628 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.628 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.628 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.628 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.628 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.628 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.628 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.628 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:31.628 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.628 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:31.628 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:31.628 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:31.628 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY2NTAxYzE5ODFhZTUzZTNkMmU2OTAwMGNlNzY5MzExh1g7: 00:27:31.628 12:04:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTc5NDIxZjgwYjdlZTQ4ZTAwMzA1NDFkNDZmZmM3Zjj8twwa: 00:27:31.628 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:31.628 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:31.628 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY2NTAxYzE5ODFhZTUzZTNkMmU2OTAwMGNlNzY5MzExh1g7: 00:27:31.628 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTc5NDIxZjgwYjdlZTQ4ZTAwMzA1NDFkNDZmZmM3Zjj8twwa: ]] 00:27:31.628 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTc5NDIxZjgwYjdlZTQ4ZTAwMzA1NDFkNDZmZmM3Zjj8twwa: 00:27:31.628 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:31.628 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.629 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:31.629 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:31.629 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:31.629 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.629 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:31.629 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.629 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.629 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.629 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.629 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:31.629 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:31.629 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:31.629 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.629 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.629 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:31.629 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.629 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:31.629 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:31.629 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:31.629 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:31.629 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.629 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.889 nvme0n1 00:27:31.889 12:04:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.889 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.889 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.889 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.889 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.889 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.889 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.889 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.889 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.889 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.889 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.889 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.889 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:31.889 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.889 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:31.889 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:31.889 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:31.889 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTg2ODBlZmFkZDRlYzI5NTJhMTMyOWRhZThhMTVjYjFmN2M1Y2Q0ZTFjNmVhMzAzHKBfvQ==: 00:27:31.889 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGFhYzhmZTA5MmE0MGMxNmZkNGRjYmY2MTQ1ZDdlY2MDw7kf: 00:27:31.889 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:31.889 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:31.889 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTg2ODBlZmFkZDRlYzI5NTJhMTMyOWRhZThhMTVjYjFmN2M1Y2Q0ZTFjNmVhMzAzHKBfvQ==: 00:27:31.889 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGFhYzhmZTA5MmE0MGMxNmZkNGRjYmY2MTQ1ZDdlY2MDw7kf: ]] 00:27:31.889 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGFhYzhmZTA5MmE0MGMxNmZkNGRjYmY2MTQ1ZDdlY2MDw7kf: 00:27:31.889 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:31.889 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.889 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:31.889 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:31.889 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:31.889 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.889 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:27:31.889 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.889 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.889 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.889 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.889 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:31.889 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:31.889 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:31.889 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.889 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.889 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:31.889 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.889 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:31.890 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:31.890 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:31.890 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:31.890 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.890 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.150 nvme0n1 00:27:32.150 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.150 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.150 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.150 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.150 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.150 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.150 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.150 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.150 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.150 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.150 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.150 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.150 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:32.150 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.150 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:27:32.150 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:32.150 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:32.150 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmExYjEyMjAxMjFlYWVkYzQ2YjVjNzYwYzgwYzg1MTQ4MTkzOTM2MTZlZTQ4MTViMGRiODI0OTg3YjhiYzA4Zg2II6E=: 00:27:32.150 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:32.150 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:32.150 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:32.150 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmExYjEyMjAxMjFlYWVkYzQ2YjVjNzYwYzgwYzg1MTQ4MTkzOTM2MTZlZTQ4MTViMGRiODI0OTg3YjhiYzA4Zg2II6E=: 00:27:32.150 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:32.150 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:32.150 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.150 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:32.150 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:32.150 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:32.150 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.150 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:32.150 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.150 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.150 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.150 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.150 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:32.150 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:32.150 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:32.150 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.150 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.150 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:32.150 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.150 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:32.150 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:32.150 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:32.150 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:32.150 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.150 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.410 nvme0n1 00:27:32.410 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.410 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.410 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.410 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.411 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.411 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.411 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.411 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.411 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.411 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.411 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.411 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:32.411 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.411 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:27:32.411 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.411 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:32.411 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:32.411 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:32.411 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzc1MTUwNDM4ODI4NzNjNDhkZTEyZTJmMjYzNDVlNGbneIph: 00:27:32.411 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjM2YmM0OTc2YjcxNjBjMDIxMzkxNjBkMmFiMmQzNGYxZDJiODQ4ZDkxMGU0NWI1MWE1YmUwODBjZjc5Yjc1ZfWOGww=: 00:27:32.411 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:32.411 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:32.411 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzc1MTUwNDM4ODI4NzNjNDhkZTEyZTJmMjYzNDVlNGbneIph: 00:27:32.411 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjM2YmM0OTc2YjcxNjBjMDIxMzkxNjBkMmFiMmQzNGYxZDJiODQ4ZDkxMGU0NWI1MWE1YmUwODBjZjc5Yjc1ZfWOGww=: ]] 00:27:32.411 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjM2YmM0OTc2YjcxNjBjMDIxMzkxNjBkMmFiMmQzNGYxZDJiODQ4ZDkxMGU0NWI1MWE1YmUwODBjZjc5Yjc1ZfWOGww=: 00:27:32.411 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:32.411 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.411 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:32.411 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:27:32.411 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:32.411 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.411 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:32.411 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.411 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.411 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.411 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.411 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:32.411 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:32.411 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:32.411 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.411 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.411 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:32.411 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.411 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:32.411 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:32.411 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:32.411 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:32.411 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.411 12:04:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.671 nvme0n1 00:27:32.671 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.671 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.671 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.671 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.671 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.671 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.671 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.671 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.671 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.671 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.671 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.671 
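By this point the trace has moved from sha256/ffdhe8192 through sha384/ffdhe2048 to sha384/ffdhe3072, which is the outer sweep at host/auth.sh@100-104 working through its combinations. A sketch of that sweep is below; the array contents are limited to what this excerpt exercises, and keys[]/ckeys[] are assumed to hold the DHHC-1 secrets printed in the trace (with ckeys[4] left empty, as the keyid-4 iterations show).

# Sketch of the digest/dhgroup/keyid sweep driving this part of the log.
digests=(sha256 sha384)                 # only the digests visible in this excerpt
dhgroups=(ffdhe2048 ffdhe3072 ffdhe8192) # only the groups visible in this excerpt
for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do   # keys[0..4] set up earlier in auth.sh
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"   # target side
            connect_authenticate "$digest" "$dhgroup" "$keyid" # initiator side
        done
    done
done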
12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.671 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:32.671 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.671 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:32.671 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:32.671 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:32.671 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkzZDBjZjcxMTFiMDVmOWE3YTVhODEyOGUxNDhiOWI2MmM0OWIyNWNlOTAwNjMzI5RO8A==: 00:27:32.671 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY2ZTI5NTAzMjBiMTU4MGQ2OGRmOTgyMTJiNDc5YjJiNWQ4MWYzNDQ4Yjk4ZWEwlq458g==: 00:27:32.671 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:32.671 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:32.671 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkzZDBjZjcxMTFiMDVmOWE3YTVhODEyOGUxNDhiOWI2MmM0OWIyNWNlOTAwNjMzI5RO8A==: 00:27:32.671 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY2ZTI5NTAzMjBiMTU4MGQ2OGRmOTgyMTJiNDc5YjJiNWQ4MWYzNDQ4Yjk4ZWEwlq458g==: ]] 00:27:32.671 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY2ZTI5NTAzMjBiMTU4MGQ2OGRmOTgyMTJiNDc5YjJiNWQ4MWYzNDQ4Yjk4ZWEwlq458g==: 00:27:32.671 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:27:32.671 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.671 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:32.671 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:32.671 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:32.671 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.671 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:32.671 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.671 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.671 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.671 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.671 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:32.671 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:32.671 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:32.671 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.671 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.671 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:32.671 12:04:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.671 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:32.671 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:32.671 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:32.671 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:32.671 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.671 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.932 nvme0n1 00:27:32.932 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.932 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.932 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.932 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.932 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.932 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.932 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.932 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.932 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.932 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.932 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.932 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.932 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:32.932 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.932 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:32.932 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:32.932 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:32.932 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY2NTAxYzE5ODFhZTUzZTNkMmU2OTAwMGNlNzY5MzExh1g7: 00:27:32.932 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTc5NDIxZjgwYjdlZTQ4ZTAwMzA1NDFkNDZmZmM3Zjj8twwa: 00:27:32.932 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:32.932 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:32.932 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY2NTAxYzE5ODFhZTUzZTNkMmU2OTAwMGNlNzY5MzExh1g7: 00:27:32.932 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTc5NDIxZjgwYjdlZTQ4ZTAwMzA1NDFkNDZmZmM3Zjj8twwa: ]] 00:27:32.932 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:MTc5NDIxZjgwYjdlZTQ4ZTAwMzA1NDFkNDZmZmM3Zjj8twwa: 00:27:32.932 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:32.932 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.932 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:32.932 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:32.932 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:32.932 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.932 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:32.932 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.932 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.932 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.932 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.932 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:32.932 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:32.932 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:32.932 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.932 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.932 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:32.932 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.932 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:32.932 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:32.932 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:32.932 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:32.932 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.932 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.193 nvme0n1 00:27:33.193 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.193 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.193 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.193 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.193 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.193 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.193 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:27:33.193 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.193 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.193 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.193 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.193 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.193 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:33.193 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.193 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:33.193 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:33.193 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:33.193 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTg2ODBlZmFkZDRlYzI5NTJhMTMyOWRhZThhMTVjYjFmN2M1Y2Q0ZTFjNmVhMzAzHKBfvQ==: 00:27:33.193 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGFhYzhmZTA5MmE0MGMxNmZkNGRjYmY2MTQ1ZDdlY2MDw7kf: 00:27:33.193 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:33.193 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:33.193 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTg2ODBlZmFkZDRlYzI5NTJhMTMyOWRhZThhMTVjYjFmN2M1Y2Q0ZTFjNmVhMzAzHKBfvQ==: 00:27:33.193 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGFhYzhmZTA5MmE0MGMxNmZkNGRjYmY2MTQ1ZDdlY2MDw7kf: ]] 00:27:33.193 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGFhYzhmZTA5MmE0MGMxNmZkNGRjYmY2MTQ1ZDdlY2MDw7kf: 00:27:33.193 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:33.193 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.193 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:33.193 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:33.193 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:33.193 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.193 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:33.193 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.193 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.193 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.193 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.193 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:33.193 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:33.193 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local 
-A ip_candidates 00:27:33.193 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.193 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.193 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:33.193 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.193 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:33.193 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:33.193 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:33.193 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:33.193 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.193 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.454 nvme0n1 00:27:33.454 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.454 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.454 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.454 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.454 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.454 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.454 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.454 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.454 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.454 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.454 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.454 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.454 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:33.454 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.454 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:33.454 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:33.454 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:33.454 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmExYjEyMjAxMjFlYWVkYzQ2YjVjNzYwYzgwYzg1MTQ4MTkzOTM2MTZlZTQ4MTViMGRiODI0OTg3YjhiYzA4Zg2II6E=: 00:27:33.454 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:33.454 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:33.454 12:04:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:33.454 
12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmExYjEyMjAxMjFlYWVkYzQ2YjVjNzYwYzgwYzg1MTQ4MTkzOTM2MTZlZTQ4MTViMGRiODI0OTg3YjhiYzA4Zg2II6E=: 00:27:33.454 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:33.454 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:33.454 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.454 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:33.454 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:33.454 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:33.454 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.454 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:33.454 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.454 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.454 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.454 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.454 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:33.454 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:33.454 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:33.454 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.454 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.454 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:33.454 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.454 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:33.454 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:33.454 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:33.454 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:33.454 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.454 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.714 nvme0n1 00:27:33.714 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.714 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.714 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.714 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.714 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.714 
12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.714 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.714 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.714 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.714 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.714 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.714 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:33.714 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.714 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:33.714 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.714 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:33.714 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:33.714 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:33.714 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzc1MTUwNDM4ODI4NzNjNDhkZTEyZTJmMjYzNDVlNGbneIph: 00:27:33.714 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjM2YmM0OTc2YjcxNjBjMDIxMzkxNjBkMmFiMmQzNGYxZDJiODQ4ZDkxMGU0NWI1MWE1YmUwODBjZjc5Yjc1ZfWOGww=: 00:27:33.714 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:33.714 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:33.714 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzc1MTUwNDM4ODI4NzNjNDhkZTEyZTJmMjYzNDVlNGbneIph: 00:27:33.714 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjM2YmM0OTc2YjcxNjBjMDIxMzkxNjBkMmFiMmQzNGYxZDJiODQ4ZDkxMGU0NWI1MWE1YmUwODBjZjc5Yjc1ZfWOGww=: ]] 00:27:33.714 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjM2YmM0OTc2YjcxNjBjMDIxMzkxNjBkMmFiMmQzNGYxZDJiODQ4ZDkxMGU0NWI1MWE1YmUwODBjZjc5Yjc1ZfWOGww=: 00:27:33.714 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:33.714 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.714 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:33.714 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:33.714 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:33.714 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.714 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:33.714 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.714 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.714 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:27:33.714 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.714 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:33.714 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:33.714 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:33.714 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.714 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.714 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:33.714 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.714 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:33.714 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:33.714 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:33.714 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:33.714 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.714 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.975 nvme0n1 00:27:33.975 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.975 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.975 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.975 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.975 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.975 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.975 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.975 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.975 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.975 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.975 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.975 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.975 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:33.975 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.975 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:33.975 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:33.975 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:33.975 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YjkzZDBjZjcxMTFiMDVmOWE3YTVhODEyOGUxNDhiOWI2MmM0OWIyNWNlOTAwNjMzI5RO8A==: 00:27:33.975 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY2ZTI5NTAzMjBiMTU4MGQ2OGRmOTgyMTJiNDc5YjJiNWQ4MWYzNDQ4Yjk4ZWEwlq458g==: 00:27:33.975 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:33.975 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:33.975 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkzZDBjZjcxMTFiMDVmOWE3YTVhODEyOGUxNDhiOWI2MmM0OWIyNWNlOTAwNjMzI5RO8A==: 00:27:33.975 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY2ZTI5NTAzMjBiMTU4MGQ2OGRmOTgyMTJiNDc5YjJiNWQ4MWYzNDQ4Yjk4ZWEwlq458g==: ]] 00:27:33.975 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY2ZTI5NTAzMjBiMTU4MGQ2OGRmOTgyMTJiNDc5YjJiNWQ4MWYzNDQ4Yjk4ZWEwlq458g==: 00:27:33.975 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:33.975 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.975 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:33.975 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:33.975 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:33.975 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.975 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:33.975 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.975 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.975 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.975 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.975 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:33.975 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:33.975 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:33.975 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.975 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.975 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:33.975 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.975 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:33.975 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:33.975 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:33.975 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:33.975 12:04:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.975 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.236 nvme0n1 00:27:34.236 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.236 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.236 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.236 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.236 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.236 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.236 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.236 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.236 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.236 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.496 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.496 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.496 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:34.496 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.496 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:34.496 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:34.496 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:34.496 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY2NTAxYzE5ODFhZTUzZTNkMmU2OTAwMGNlNzY5MzExh1g7: 00:27:34.496 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTc5NDIxZjgwYjdlZTQ4ZTAwMzA1NDFkNDZmZmM3Zjj8twwa: 00:27:34.496 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:34.496 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:34.496 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY2NTAxYzE5ODFhZTUzZTNkMmU2OTAwMGNlNzY5MzExh1g7: 00:27:34.496 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTc5NDIxZjgwYjdlZTQ4ZTAwMzA1NDFkNDZmZmM3Zjj8twwa: ]] 00:27:34.496 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTc5NDIxZjgwYjdlZTQ4ZTAwMzA1NDFkNDZmZmM3Zjj8twwa: 00:27:34.496 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:34.496 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.496 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:34.496 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:34.496 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:34.496 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.496 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:34.496 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.496 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.496 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.496 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.496 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:34.496 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:34.496 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:34.496 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.496 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.496 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:34.496 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.496 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:34.496 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:34.496 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:34.496 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:34.496 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.496 12:04:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.757 nvme0n1 00:27:34.757 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.757 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.757 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.757 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.757 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.757 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.757 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.757 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.757 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.757 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.757 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.757 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.757 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:27:34.757 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.757 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:34.757 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:34.757 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:34.757 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTg2ODBlZmFkZDRlYzI5NTJhMTMyOWRhZThhMTVjYjFmN2M1Y2Q0ZTFjNmVhMzAzHKBfvQ==: 00:27:34.757 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGFhYzhmZTA5MmE0MGMxNmZkNGRjYmY2MTQ1ZDdlY2MDw7kf: 00:27:34.757 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:34.757 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:34.757 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTg2ODBlZmFkZDRlYzI5NTJhMTMyOWRhZThhMTVjYjFmN2M1Y2Q0ZTFjNmVhMzAzHKBfvQ==: 00:27:34.757 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGFhYzhmZTA5MmE0MGMxNmZkNGRjYmY2MTQ1ZDdlY2MDw7kf: ]] 00:27:34.757 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGFhYzhmZTA5MmE0MGMxNmZkNGRjYmY2MTQ1ZDdlY2MDw7kf: 00:27:34.757 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:34.757 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.757 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:34.757 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:34.757 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:34.757 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.757 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:34.757 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.757 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.757 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.757 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.757 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:34.757 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:34.757 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:34.757 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.757 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.757 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:34.757 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.757 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:34.757 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:34.757 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:34.757 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:34.757 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.757 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.018 nvme0n1 00:27:35.018 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.018 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.018 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.018 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.018 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.018 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.018 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.018 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.018 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.018 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.018 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.018 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.018 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:35.018 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.018 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:35.018 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:35.018 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:35.018 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmExYjEyMjAxMjFlYWVkYzQ2YjVjNzYwYzgwYzg1MTQ4MTkzOTM2MTZlZTQ4MTViMGRiODI0OTg3YjhiYzA4Zg2II6E=: 00:27:35.018 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:35.018 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:35.018 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:35.018 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmExYjEyMjAxMjFlYWVkYzQ2YjVjNzYwYzgwYzg1MTQ4MTkzOTM2MTZlZTQ4MTViMGRiODI0OTg3YjhiYzA4Zg2II6E=: 00:27:35.018 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:35.018 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:35.018 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.018 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:35.018 12:04:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:35.018 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:35.018 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.018 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:35.018 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.018 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.018 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.019 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.019 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:35.019 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:35.019 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:35.019 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.019 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.019 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:35.019 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.019 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:35.019 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:35.019 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:35.019 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:35.019 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.019 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.280 nvme0n1 00:27:35.280 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.280 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.280 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.280 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.280 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.280 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.280 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.280 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.280 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.280 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.280 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.280 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:35.280 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.280 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:35.280 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.280 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:35.280 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:35.280 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:35.280 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzc1MTUwNDM4ODI4NzNjNDhkZTEyZTJmMjYzNDVlNGbneIph: 00:27:35.280 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjM2YmM0OTc2YjcxNjBjMDIxMzkxNjBkMmFiMmQzNGYxZDJiODQ4ZDkxMGU0NWI1MWE1YmUwODBjZjc5Yjc1ZfWOGww=: 00:27:35.280 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:35.280 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:35.280 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzc1MTUwNDM4ODI4NzNjNDhkZTEyZTJmMjYzNDVlNGbneIph: 00:27:35.280 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjM2YmM0OTc2YjcxNjBjMDIxMzkxNjBkMmFiMmQzNGYxZDJiODQ4ZDkxMGU0NWI1MWE1YmUwODBjZjc5Yjc1ZfWOGww=: ]] 00:27:35.280 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjM2YmM0OTc2YjcxNjBjMDIxMzkxNjBkMmFiMmQzNGYxZDJiODQ4ZDkxMGU0NWI1MWE1YmUwODBjZjc5Yjc1ZfWOGww=: 00:27:35.280 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:35.280 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.280 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:35.280 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:35.280 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:35.280 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.280 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:35.280 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.280 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.280 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.280 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.280 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:35.280 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:35.280 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:35.280 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.280 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.280 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:35.280 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.280 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:35.280 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:35.280 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:35.280 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:35.280 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.280 12:04:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.851 nvme0n1 00:27:35.851 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.851 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.851 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.851 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.851 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.851 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.851 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.851 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.851 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.851 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.851 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.851 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.851 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:35.851 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.851 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:35.851 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:35.851 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:35.851 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkzZDBjZjcxMTFiMDVmOWE3YTVhODEyOGUxNDhiOWI2MmM0OWIyNWNlOTAwNjMzI5RO8A==: 00:27:35.851 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY2ZTI5NTAzMjBiMTU4MGQ2OGRmOTgyMTJiNDc5YjJiNWQ4MWYzNDQ4Yjk4ZWEwlq458g==: 00:27:35.851 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:35.851 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:35.851 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YjkzZDBjZjcxMTFiMDVmOWE3YTVhODEyOGUxNDhiOWI2MmM0OWIyNWNlOTAwNjMzI5RO8A==: 00:27:35.851 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY2ZTI5NTAzMjBiMTU4MGQ2OGRmOTgyMTJiNDc5YjJiNWQ4MWYzNDQ4Yjk4ZWEwlq458g==: ]] 00:27:35.851 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY2ZTI5NTAzMjBiMTU4MGQ2OGRmOTgyMTJiNDc5YjJiNWQ4MWYzNDQ4Yjk4ZWEwlq458g==: 00:27:35.851 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:35.851 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.851 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:35.851 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:35.851 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:35.851 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.851 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:35.851 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.851 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.851 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.851 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.851 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:35.851 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:35.851 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:35.851 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.851 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.851 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:35.851 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.851 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:35.851 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:35.851 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:35.851 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:35.851 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.851 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.422 nvme0n1 00:27:36.422 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.422 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.422 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.422 12:04:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.422 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.422 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.422 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.422 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.422 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.422 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.422 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.422 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.422 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:36.422 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.422 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:36.422 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:36.422 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:36.422 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY2NTAxYzE5ODFhZTUzZTNkMmU2OTAwMGNlNzY5MzExh1g7: 00:27:36.422 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTc5NDIxZjgwYjdlZTQ4ZTAwMzA1NDFkNDZmZmM3Zjj8twwa: 00:27:36.422 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:36.422 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:36.422 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY2NTAxYzE5ODFhZTUzZTNkMmU2OTAwMGNlNzY5MzExh1g7: 00:27:36.422 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTc5NDIxZjgwYjdlZTQ4ZTAwMzA1NDFkNDZmZmM3Zjj8twwa: ]] 00:27:36.422 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTc5NDIxZjgwYjdlZTQ4ZTAwMzA1NDFkNDZmZmM3Zjj8twwa: 00:27:36.422 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:36.422 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.422 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:36.422 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:36.422 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:36.422 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.422 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:36.422 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.422 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.422 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.422 12:04:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.422 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:36.422 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:36.422 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:36.422 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.422 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.422 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:36.422 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.422 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:36.422 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:36.422 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:36.422 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:36.422 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.422 12:04:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.683 nvme0n1 00:27:36.683 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.683 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.683 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.683 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.683 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.943 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.943 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.943 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.943 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.943 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.943 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.943 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.943 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:27:36.943 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.943 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:36.943 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:36.943 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:36.943 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZTg2ODBlZmFkZDRlYzI5NTJhMTMyOWRhZThhMTVjYjFmN2M1Y2Q0ZTFjNmVhMzAzHKBfvQ==: 00:27:36.943 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGFhYzhmZTA5MmE0MGMxNmZkNGRjYmY2MTQ1ZDdlY2MDw7kf: 00:27:36.943 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:36.943 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:36.943 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTg2ODBlZmFkZDRlYzI5NTJhMTMyOWRhZThhMTVjYjFmN2M1Y2Q0ZTFjNmVhMzAzHKBfvQ==: 00:27:36.944 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGFhYzhmZTA5MmE0MGMxNmZkNGRjYmY2MTQ1ZDdlY2MDw7kf: ]] 00:27:36.944 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGFhYzhmZTA5MmE0MGMxNmZkNGRjYmY2MTQ1ZDdlY2MDw7kf: 00:27:36.944 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:36.944 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.944 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:36.944 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:36.944 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:36.944 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.944 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:36.944 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.944 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.944 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.944 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.944 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:36.944 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:36.944 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:36.944 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.944 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.944 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:36.944 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.944 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:36.944 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:36.944 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:36.944 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:36.944 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.944 
12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.204 nvme0n1 00:27:37.204 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.204 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.204 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.204 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.204 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.204 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.465 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.465 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.465 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.465 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.465 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.465 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.465 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:37.465 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.465 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:37.465 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:37.465 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:37.465 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmExYjEyMjAxMjFlYWVkYzQ2YjVjNzYwYzgwYzg1MTQ4MTkzOTM2MTZlZTQ4MTViMGRiODI0OTg3YjhiYzA4Zg2II6E=: 00:27:37.465 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:37.465 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:37.465 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:37.465 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmExYjEyMjAxMjFlYWVkYzQ2YjVjNzYwYzgwYzg1MTQ4MTkzOTM2MTZlZTQ4MTViMGRiODI0OTg3YjhiYzA4Zg2II6E=: 00:27:37.465 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:37.465 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:37.465 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.465 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:37.465 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:37.465 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:37.465 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.465 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:37.465 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.465 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.465 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.465 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.465 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:37.465 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:37.465 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:37.465 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.465 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.465 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:37.465 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.465 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:37.465 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:37.465 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:37.465 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:37.465 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.465 12:04:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.725 nvme0n1 00:27:37.725 12:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.725 12:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.725 12:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.725 12:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.725 12:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.725 12:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.725 12:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.725 12:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.725 12:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.725 12:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.986 12:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.986 12:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:37.986 12:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.986 12:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:37.986 12:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.986 12:04:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:37.986 12:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:37.986 12:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:37.986 12:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzc1MTUwNDM4ODI4NzNjNDhkZTEyZTJmMjYzNDVlNGbneIph: 00:27:37.986 12:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjM2YmM0OTc2YjcxNjBjMDIxMzkxNjBkMmFiMmQzNGYxZDJiODQ4ZDkxMGU0NWI1MWE1YmUwODBjZjc5Yjc1ZfWOGww=: 00:27:37.986 12:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:37.986 12:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:37.986 12:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzc1MTUwNDM4ODI4NzNjNDhkZTEyZTJmMjYzNDVlNGbneIph: 00:27:37.986 12:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjM2YmM0OTc2YjcxNjBjMDIxMzkxNjBkMmFiMmQzNGYxZDJiODQ4ZDkxMGU0NWI1MWE1YmUwODBjZjc5Yjc1ZfWOGww=: ]] 00:27:37.986 12:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjM2YmM0OTc2YjcxNjBjMDIxMzkxNjBkMmFiMmQzNGYxZDJiODQ4ZDkxMGU0NWI1MWE1YmUwODBjZjc5Yjc1ZfWOGww=: 00:27:37.986 12:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:37.986 12:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.986 12:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:37.986 12:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:37.986 12:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:37.986 12:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.986 12:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:37.986 12:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.986 12:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.986 12:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.986 12:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.987 12:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:37.987 12:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:37.987 12:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:37.987 12:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.987 12:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.987 12:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:37.987 12:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.987 12:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:37.987 12:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:37.987 12:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:37.987 12:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:37.987 12:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.987 12:04:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.557 nvme0n1 00:27:38.557 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.557 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.557 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.557 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.557 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.557 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.557 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.557 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.557 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.557 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.558 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.558 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.558 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:27:38.558 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.558 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:38.558 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:38.558 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:38.558 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkzZDBjZjcxMTFiMDVmOWE3YTVhODEyOGUxNDhiOWI2MmM0OWIyNWNlOTAwNjMzI5RO8A==: 00:27:38.558 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY2ZTI5NTAzMjBiMTU4MGQ2OGRmOTgyMTJiNDc5YjJiNWQ4MWYzNDQ4Yjk4ZWEwlq458g==: 00:27:38.558 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:38.558 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:38.558 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkzZDBjZjcxMTFiMDVmOWE3YTVhODEyOGUxNDhiOWI2MmM0OWIyNWNlOTAwNjMzI5RO8A==: 00:27:38.558 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY2ZTI5NTAzMjBiMTU4MGQ2OGRmOTgyMTJiNDc5YjJiNWQ4MWYzNDQ4Yjk4ZWEwlq458g==: ]] 00:27:38.558 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY2ZTI5NTAzMjBiMTU4MGQ2OGRmOTgyMTJiNDc5YjJiNWQ4MWYzNDQ4Yjk4ZWEwlq458g==: 00:27:38.558 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:38.558 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.558 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:38.558 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:38.558 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:38.558 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.558 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:38.558 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.558 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.558 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.558 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.558 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:38.558 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:38.558 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:38.558 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.558 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.558 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:38.558 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.558 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:38.558 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:38.558 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:38.558 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:38.558 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.558 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.130 nvme0n1 00:27:39.130 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.130 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.130 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.130 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.130 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.390 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.390 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.390 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.390 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:39.390 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.390 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.390 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.391 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:39.391 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.391 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:39.391 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:39.391 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:39.391 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY2NTAxYzE5ODFhZTUzZTNkMmU2OTAwMGNlNzY5MzExh1g7: 00:27:39.391 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTc5NDIxZjgwYjdlZTQ4ZTAwMzA1NDFkNDZmZmM3Zjj8twwa: 00:27:39.391 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:39.391 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:39.391 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY2NTAxYzE5ODFhZTUzZTNkMmU2OTAwMGNlNzY5MzExh1g7: 00:27:39.391 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTc5NDIxZjgwYjdlZTQ4ZTAwMzA1NDFkNDZmZmM3Zjj8twwa: ]] 00:27:39.391 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTc5NDIxZjgwYjdlZTQ4ZTAwMzA1NDFkNDZmZmM3Zjj8twwa: 00:27:39.391 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:39.391 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.391 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:39.391 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:39.391 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:39.391 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.391 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:39.391 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.391 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.391 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.391 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.391 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:39.391 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:39.391 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:39.391 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.391 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.391 
12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:39.391 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.391 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:39.391 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:39.391 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:39.391 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:39.391 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.391 12:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.961 nvme0n1 00:27:39.961 12:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.961 12:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.961 12:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.961 12:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.961 12:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.961 12:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.961 12:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.961 12:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.961 12:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.961 12:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.961 12:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.961 12:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.961 12:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:39.961 12:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.961 12:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:39.961 12:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:39.961 12:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:39.961 12:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTg2ODBlZmFkZDRlYzI5NTJhMTMyOWRhZThhMTVjYjFmN2M1Y2Q0ZTFjNmVhMzAzHKBfvQ==: 00:27:39.961 12:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGFhYzhmZTA5MmE0MGMxNmZkNGRjYmY2MTQ1ZDdlY2MDw7kf: 00:27:39.961 12:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:39.961 12:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:39.961 12:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTg2ODBlZmFkZDRlYzI5NTJhMTMyOWRhZThhMTVjYjFmN2M1Y2Q0ZTFjNmVhMzAzHKBfvQ==: 00:27:39.961 12:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MGFhYzhmZTA5MmE0MGMxNmZkNGRjYmY2MTQ1ZDdlY2MDw7kf: ]] 00:27:39.961 12:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGFhYzhmZTA5MmE0MGMxNmZkNGRjYmY2MTQ1ZDdlY2MDw7kf: 00:27:39.961 12:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:27:39.961 12:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.961 12:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:39.961 12:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:39.961 12:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:39.961 12:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.961 12:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:39.961 12:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.961 12:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.961 12:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.961 12:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.961 12:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:39.961 12:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:39.961 12:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:39.961 12:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.961 12:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.961 12:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:39.961 12:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.961 12:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:39.961 12:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:39.961 12:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:39.961 12:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:39.961 12:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.961 12:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.903 nvme0n1 00:27:40.903 12:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.903 12:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.903 12:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.903 12:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.903 12:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.903 12:04:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.903 12:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.903 12:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.903 12:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.903 12:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.903 12:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.903 12:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.903 12:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:40.903 12:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.904 12:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:40.904 12:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:40.904 12:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:40.904 12:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmExYjEyMjAxMjFlYWVkYzQ2YjVjNzYwYzgwYzg1MTQ4MTkzOTM2MTZlZTQ4MTViMGRiODI0OTg3YjhiYzA4Zg2II6E=: 00:27:40.904 12:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:40.904 12:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:40.904 12:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:40.904 12:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmExYjEyMjAxMjFlYWVkYzQ2YjVjNzYwYzgwYzg1MTQ4MTkzOTM2MTZlZTQ4MTViMGRiODI0OTg3YjhiYzA4Zg2II6E=: 00:27:40.904 12:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:40.904 12:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:40.904 12:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.904 12:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:40.904 12:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:40.904 12:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:40.904 12:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.904 12:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:40.904 12:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.904 12:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.904 12:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.904 12:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.904 12:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:40.904 12:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:40.904 12:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:40.904 12:04:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.904 12:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.904 12:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:40.904 12:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.904 12:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:40.904 12:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:40.904 12:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:40.904 12:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:40.904 12:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.904 12:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.475 nvme0n1 00:27:41.475 12:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.475 12:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.475 12:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.475 12:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.475 12:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.475 12:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.475 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.475 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.475 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.475 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.475 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.475 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:41.475 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:41.475 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.475 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:41.475 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.475 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:41.475 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:41.475 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:41.475 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzc1MTUwNDM4ODI4NzNjNDhkZTEyZTJmMjYzNDVlNGbneIph: 00:27:41.475 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZjM2YmM0OTc2YjcxNjBjMDIxMzkxNjBkMmFiMmQzNGYxZDJiODQ4ZDkxMGU0NWI1MWE1YmUwODBjZjc5Yjc1ZfWOGww=: 00:27:41.475 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:41.475 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:41.475 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzc1MTUwNDM4ODI4NzNjNDhkZTEyZTJmMjYzNDVlNGbneIph: 00:27:41.475 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjM2YmM0OTc2YjcxNjBjMDIxMzkxNjBkMmFiMmQzNGYxZDJiODQ4ZDkxMGU0NWI1MWE1YmUwODBjZjc5Yjc1ZfWOGww=: ]] 00:27:41.475 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjM2YmM0OTc2YjcxNjBjMDIxMzkxNjBkMmFiMmQzNGYxZDJiODQ4ZDkxMGU0NWI1MWE1YmUwODBjZjc5Yjc1ZfWOGww=: 00:27:41.475 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:41.475 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.475 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:41.475 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:41.475 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:41.475 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.475 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:41.475 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.475 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.475 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.475 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.475 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:41.475 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:41.475 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:41.475 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.475 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.475 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:41.475 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.475 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:41.475 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:41.475 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:41.475 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:41.475 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.475 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:41.736 nvme0n1 00:27:41.736 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.736 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.736 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.736 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.736 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.736 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.736 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.736 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.736 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.736 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.736 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.736 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.736 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:41.736 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.736 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:41.736 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:41.736 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:41.736 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkzZDBjZjcxMTFiMDVmOWE3YTVhODEyOGUxNDhiOWI2MmM0OWIyNWNlOTAwNjMzI5RO8A==: 00:27:41.736 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY2ZTI5NTAzMjBiMTU4MGQ2OGRmOTgyMTJiNDc5YjJiNWQ4MWYzNDQ4Yjk4ZWEwlq458g==: 00:27:41.736 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:41.736 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:41.736 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkzZDBjZjcxMTFiMDVmOWE3YTVhODEyOGUxNDhiOWI2MmM0OWIyNWNlOTAwNjMzI5RO8A==: 00:27:41.736 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY2ZTI5NTAzMjBiMTU4MGQ2OGRmOTgyMTJiNDc5YjJiNWQ4MWYzNDQ4Yjk4ZWEwlq458g==: ]] 00:27:41.736 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY2ZTI5NTAzMjBiMTU4MGQ2OGRmOTgyMTJiNDc5YjJiNWQ4MWYzNDQ4Yjk4ZWEwlq458g==: 00:27:41.736 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:41.736 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.736 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:41.736 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:41.736 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:41.736 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:41.736 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:41.736 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.736 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.736 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.736 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.736 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:41.736 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:41.736 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:41.736 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.736 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.736 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:41.736 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.736 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:41.736 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:41.736 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:41.736 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:41.736 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.736 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.997 nvme0n1 00:27:41.997 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.997 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.997 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.997 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.997 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.997 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.997 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.997 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.997 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.997 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.997 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.997 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.997 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:41.997 
12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.997 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:41.997 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:41.997 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:41.997 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY2NTAxYzE5ODFhZTUzZTNkMmU2OTAwMGNlNzY5MzExh1g7: 00:27:41.997 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTc5NDIxZjgwYjdlZTQ4ZTAwMzA1NDFkNDZmZmM3Zjj8twwa: 00:27:41.997 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:41.997 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:41.997 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY2NTAxYzE5ODFhZTUzZTNkMmU2OTAwMGNlNzY5MzExh1g7: 00:27:41.997 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTc5NDIxZjgwYjdlZTQ4ZTAwMzA1NDFkNDZmZmM3Zjj8twwa: ]] 00:27:41.998 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTc5NDIxZjgwYjdlZTQ4ZTAwMzA1NDFkNDZmZmM3Zjj8twwa: 00:27:41.998 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:27:41.998 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.998 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:41.998 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:41.998 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:41.998 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.998 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:41.998 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.998 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.998 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.998 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.998 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:41.998 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:41.998 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:41.998 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.998 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.998 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:41.998 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.998 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:41.998 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:41.998 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:41.998 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:41.998 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.998 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.998 nvme0n1 00:27:41.998 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.998 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.998 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.998 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.998 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.258 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.258 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.258 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.258 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.258 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.258 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.258 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.258 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:42.258 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.258 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:42.258 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:42.258 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:42.258 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTg2ODBlZmFkZDRlYzI5NTJhMTMyOWRhZThhMTVjYjFmN2M1Y2Q0ZTFjNmVhMzAzHKBfvQ==: 00:27:42.258 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGFhYzhmZTA5MmE0MGMxNmZkNGRjYmY2MTQ1ZDdlY2MDw7kf: 00:27:42.258 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:42.258 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:42.258 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTg2ODBlZmFkZDRlYzI5NTJhMTMyOWRhZThhMTVjYjFmN2M1Y2Q0ZTFjNmVhMzAzHKBfvQ==: 00:27:42.258 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGFhYzhmZTA5MmE0MGMxNmZkNGRjYmY2MTQ1ZDdlY2MDw7kf: ]] 00:27:42.258 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGFhYzhmZTA5MmE0MGMxNmZkNGRjYmY2MTQ1ZDdlY2MDw7kf: 00:27:42.258 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:42.258 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.258 
12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:42.258 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:42.258 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:42.258 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.258 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:42.258 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.258 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.258 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.258 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.258 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:42.258 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:42.258 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:42.258 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.258 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.258 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:42.258 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.258 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:42.258 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:42.258 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:42.258 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:42.258 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.258 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.258 nvme0n1 00:27:42.258 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.258 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.258 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.258 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.258 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.258 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.519 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.519 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.519 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.519 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:42.519 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.519 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.519 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:42.519 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.519 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:42.519 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:42.519 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:42.519 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmExYjEyMjAxMjFlYWVkYzQ2YjVjNzYwYzgwYzg1MTQ4MTkzOTM2MTZlZTQ4MTViMGRiODI0OTg3YjhiYzA4Zg2II6E=: 00:27:42.519 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:42.519 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:42.519 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:42.519 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmExYjEyMjAxMjFlYWVkYzQ2YjVjNzYwYzgwYzg1MTQ4MTkzOTM2MTZlZTQ4MTViMGRiODI0OTg3YjhiYzA4Zg2II6E=: 00:27:42.519 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:42.519 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:27:42.519 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.519 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:42.519 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:42.519 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:42.519 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.519 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:42.519 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.519 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.519 12:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.519 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.519 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:42.519 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:42.519 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:42.519 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.519 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.519 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:42.519 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.519 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:42.519 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:42.519 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:42.519 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:42.519 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.519 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.519 nvme0n1 00:27:42.519 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.519 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.519 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.519 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.519 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.519 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.519 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.519 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.519 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.519 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.519 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.519 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:42.519 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.519 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:42.519 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.519 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:42.519 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:42.519 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:42.519 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzc1MTUwNDM4ODI4NzNjNDhkZTEyZTJmMjYzNDVlNGbneIph: 00:27:42.519 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjM2YmM0OTc2YjcxNjBjMDIxMzkxNjBkMmFiMmQzNGYxZDJiODQ4ZDkxMGU0NWI1MWE1YmUwODBjZjc5Yjc1ZfWOGww=: 00:27:42.519 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:42.519 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:42.519 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzc1MTUwNDM4ODI4NzNjNDhkZTEyZTJmMjYzNDVlNGbneIph: 00:27:42.519 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjM2YmM0OTc2YjcxNjBjMDIxMzkxNjBkMmFiMmQzNGYxZDJiODQ4ZDkxMGU0NWI1MWE1YmUwODBjZjc5Yjc1ZfWOGww=: ]] 00:27:42.519 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZjM2YmM0OTc2YjcxNjBjMDIxMzkxNjBkMmFiMmQzNGYxZDJiODQ4ZDkxMGU0NWI1MWE1YmUwODBjZjc5Yjc1ZfWOGww=: 00:27:42.519 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:27:42.519 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.519 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:42.519 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:42.519 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:42.519 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.519 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:42.519 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.519 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.781 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.781 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.781 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:42.781 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:42.781 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:42.781 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.781 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.781 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:42.781 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.781 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:42.781 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:42.781 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:42.781 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:42.781 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.781 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.781 nvme0n1 00:27:42.781 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.781 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.781 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.781 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.781 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.781 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.781 
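Each pass above resolves the initiator address through the get_main_ns_ip helper traced at nvmf/common.sh@767-781. A minimal sketch of that helper follows; the trace only shows the success path, so the early-exit branches and the TEST_TRANSPORT variable name are assumptions, while the candidate table and the indirect expansion to 10.0.0.1 are as logged.

get_main_ns_ip() {
    # Sketch reconstructed from the nvmf/common.sh trace above.
    local ip
    local -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP   # values are variable NAMES, resolved indirectly below
        [tcp]=NVMF_INITIATOR_IP
    )
    [[ -z $TEST_TRANSPORT ]] && return 1                     # assumed guard; trace shows [[ -z tcp ]]
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1   # assumed guard
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip} ]] && return 1                              # indirect expansion: NVMF_INITIATOR_IP -> 10.0.0.1
    echo "${!ip}"
}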
12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.781 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.781 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.781 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.781 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.781 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.781 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:42.781 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.781 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:42.781 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:42.781 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:42.781 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkzZDBjZjcxMTFiMDVmOWE3YTVhODEyOGUxNDhiOWI2MmM0OWIyNWNlOTAwNjMzI5RO8A==: 00:27:42.781 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY2ZTI5NTAzMjBiMTU4MGQ2OGRmOTgyMTJiNDc5YjJiNWQ4MWYzNDQ4Yjk4ZWEwlq458g==: 00:27:42.781 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:42.781 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:42.781 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkzZDBjZjcxMTFiMDVmOWE3YTVhODEyOGUxNDhiOWI2MmM0OWIyNWNlOTAwNjMzI5RO8A==: 00:27:42.781 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY2ZTI5NTAzMjBiMTU4MGQ2OGRmOTgyMTJiNDc5YjJiNWQ4MWYzNDQ4Yjk4ZWEwlq458g==: ]] 00:27:42.781 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY2ZTI5NTAzMjBiMTU4MGQ2OGRmOTgyMTJiNDc5YjJiNWQ4MWYzNDQ4Yjk4ZWEwlq458g==: 00:27:43.041 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:27:43.041 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.041 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:43.041 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:43.041 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:43.041 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.041 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:43.041 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.041 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.041 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.041 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.041 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:43.041 12:04:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:43.041 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:43.041 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.041 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.041 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:43.041 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.042 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:43.042 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:43.042 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:43.042 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:43.042 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.042 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.042 nvme0n1 00:27:43.042 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.042 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.042 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.042 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.042 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.042 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.042 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.042 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.042 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.042 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.042 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.042 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.042 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:43.313 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.313 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:43.313 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:43.313 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:43.313 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY2NTAxYzE5ODFhZTUzZTNkMmU2OTAwMGNlNzY5MzExh1g7: 00:27:43.313 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTc5NDIxZjgwYjdlZTQ4ZTAwMzA1NDFkNDZmZmM3Zjj8twwa: 00:27:43.313 12:04:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:43.313 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:43.313 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY2NTAxYzE5ODFhZTUzZTNkMmU2OTAwMGNlNzY5MzExh1g7: 00:27:43.313 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTc5NDIxZjgwYjdlZTQ4ZTAwMzA1NDFkNDZmZmM3Zjj8twwa: ]] 00:27:43.313 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTc5NDIxZjgwYjdlZTQ4ZTAwMzA1NDFkNDZmZmM3Zjj8twwa: 00:27:43.313 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:27:43.313 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.313 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:43.313 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:43.313 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:43.313 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.313 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:43.313 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.313 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.313 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.313 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.313 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:43.313 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:43.313 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:43.313 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.313 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.313 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:43.313 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.313 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:43.313 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:43.313 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:43.313 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:43.313 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.313 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.313 nvme0n1 00:27:43.313 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.313 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.313 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.313 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.313 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.313 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.313 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.313 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.313 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.313 12:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.313 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.313 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.313 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:27:43.313 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.313 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:43.313 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:43.313 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:43.313 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTg2ODBlZmFkZDRlYzI5NTJhMTMyOWRhZThhMTVjYjFmN2M1Y2Q0ZTFjNmVhMzAzHKBfvQ==: 00:27:43.313 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGFhYzhmZTA5MmE0MGMxNmZkNGRjYmY2MTQ1ZDdlY2MDw7kf: 00:27:43.313 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:43.313 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:43.313 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTg2ODBlZmFkZDRlYzI5NTJhMTMyOWRhZThhMTVjYjFmN2M1Y2Q0ZTFjNmVhMzAzHKBfvQ==: 00:27:43.313 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGFhYzhmZTA5MmE0MGMxNmZkNGRjYmY2MTQ1ZDdlY2MDw7kf: ]] 00:27:43.313 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGFhYzhmZTA5MmE0MGMxNmZkNGRjYmY2MTQ1ZDdlY2MDw7kf: 00:27:43.313 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:27:43.313 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.313 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:43.313 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:43.313 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:43.313 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.313 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:43.575 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.575 12:04:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.575 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.575 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.575 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:43.575 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:43.575 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:43.575 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.575 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.575 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:43.575 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.575 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:43.575 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:43.575 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:43.575 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:43.575 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.575 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.575 nvme0n1 00:27:43.575 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.575 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.575 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.575 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.575 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.575 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.575 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.575 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.575 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.575 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.575 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.575 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.575 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:27:43.575 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.575 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:43.575 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:43.575 
12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:43.575 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmExYjEyMjAxMjFlYWVkYzQ2YjVjNzYwYzgwYzg1MTQ4MTkzOTM2MTZlZTQ4MTViMGRiODI0OTg3YjhiYzA4Zg2II6E=: 00:27:43.575 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:43.575 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:43.575 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:43.575 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmExYjEyMjAxMjFlYWVkYzQ2YjVjNzYwYzgwYzg1MTQ4MTkzOTM2MTZlZTQ4MTViMGRiODI0OTg3YjhiYzA4Zg2II6E=: 00:27:43.575 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:43.575 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:27:43.575 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.575 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:43.575 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:43.575 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:43.836 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.836 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:43.836 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.836 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.836 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.836 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.836 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:43.836 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:43.836 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:43.836 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.836 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.836 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:43.836 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.836 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:43.836 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:43.836 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:43.836 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:43.836 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.836 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
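The passes above each follow the same host-side connect_authenticate() sequence from host/auth.sh@55-65: reconfigure the initiator for one digest/dhgroup pair, attach a controller with the DH-HMAC-CHAP key (plus the controller key when one is defined), confirm the controller came up, and detach it. A minimal sketch of that flow, reconstructed from the trace; the positional-parameter handling and the global ckeys array usage are assumptions, while the RPC names, NQNs and address match the log.

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3 ckey=()
    # key id 4 has no controller key, so the optional argument expands to nothing
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"
    # the attach only succeeds if DH-HMAC-CHAP authentication completed
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}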
00:27:43.836 nvme0n1 00:27:43.836 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.836 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.836 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.836 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.836 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.836 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.836 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.836 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.836 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.836 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.836 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.836 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:43.836 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.836 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:27:43.836 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.836 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:43.836 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:44.097 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:44.097 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzc1MTUwNDM4ODI4NzNjNDhkZTEyZTJmMjYzNDVlNGbneIph: 00:27:44.097 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjM2YmM0OTc2YjcxNjBjMDIxMzkxNjBkMmFiMmQzNGYxZDJiODQ4ZDkxMGU0NWI1MWE1YmUwODBjZjc5Yjc1ZfWOGww=: 00:27:44.097 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:44.097 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:44.097 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzc1MTUwNDM4ODI4NzNjNDhkZTEyZTJmMjYzNDVlNGbneIph: 00:27:44.097 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjM2YmM0OTc2YjcxNjBjMDIxMzkxNjBkMmFiMmQzNGYxZDJiODQ4ZDkxMGU0NWI1MWE1YmUwODBjZjc5Yjc1ZfWOGww=: ]] 00:27:44.097 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjM2YmM0OTc2YjcxNjBjMDIxMzkxNjBkMmFiMmQzNGYxZDJiODQ4ZDkxMGU0NWI1MWE1YmUwODBjZjc5Yjc1ZfWOGww=: 00:27:44.097 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:27:44.097 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.097 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:44.097 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:44.097 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:44.097 12:04:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.097 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:44.097 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.097 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.097 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.097 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.097 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:44.097 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:44.097 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:44.097 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.097 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.097 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:44.097 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.097 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:44.097 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:44.097 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:44.097 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:44.097 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.097 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.358 nvme0n1 00:27:44.358 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.358 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.358 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.358 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.358 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.358 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.358 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.358 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.358 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.358 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.358 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.358 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.358 12:04:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:44.358 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.358 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:44.358 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:44.358 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:44.358 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkzZDBjZjcxMTFiMDVmOWE3YTVhODEyOGUxNDhiOWI2MmM0OWIyNWNlOTAwNjMzI5RO8A==: 00:27:44.358 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY2ZTI5NTAzMjBiMTU4MGQ2OGRmOTgyMTJiNDc5YjJiNWQ4MWYzNDQ4Yjk4ZWEwlq458g==: 00:27:44.358 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:44.358 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:44.358 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkzZDBjZjcxMTFiMDVmOWE3YTVhODEyOGUxNDhiOWI2MmM0OWIyNWNlOTAwNjMzI5RO8A==: 00:27:44.358 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY2ZTI5NTAzMjBiMTU4MGQ2OGRmOTgyMTJiNDc5YjJiNWQ4MWYzNDQ4Yjk4ZWEwlq458g==: ]] 00:27:44.358 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY2ZTI5NTAzMjBiMTU4MGQ2OGRmOTgyMTJiNDc5YjJiNWQ4MWYzNDQ4Yjk4ZWEwlq458g==: 00:27:44.358 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:44.358 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.358 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:44.358 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:44.358 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:44.358 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.358 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:44.358 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.358 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.358 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.358 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.358 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:44.358 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:44.358 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:44.358 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.358 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.358 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:44.358 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.358 12:04:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:44.358 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:44.358 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:44.358 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:44.358 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.358 12:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.618 nvme0n1 00:27:44.618 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.618 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.618 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.618 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.618 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.618 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.618 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.618 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.618 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.618 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.618 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.618 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.618 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:44.618 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.618 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:44.618 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:44.618 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:44.618 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY2NTAxYzE5ODFhZTUzZTNkMmU2OTAwMGNlNzY5MzExh1g7: 00:27:44.618 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTc5NDIxZjgwYjdlZTQ4ZTAwMzA1NDFkNDZmZmM3Zjj8twwa: 00:27:44.618 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:44.618 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:44.618 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY2NTAxYzE5ODFhZTUzZTNkMmU2OTAwMGNlNzY5MzExh1g7: 00:27:44.618 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTc5NDIxZjgwYjdlZTQ4ZTAwMzA1NDFkNDZmZmM3Zjj8twwa: ]] 00:27:44.618 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTc5NDIxZjgwYjdlZTQ4ZTAwMzA1NDFkNDZmZmM3Zjj8twwa: 00:27:44.618 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:44.618 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.619 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:44.619 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:44.619 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:44.619 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.619 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:44.619 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.619 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.619 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.619 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.619 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:44.619 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:44.619 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:44.619 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.619 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.619 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:44.619 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.619 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:44.619 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:44.619 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:44.619 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:44.619 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.619 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.879 nvme0n1 00:27:44.879 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.879 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.879 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.879 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.879 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.879 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.879 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.879 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:44.879 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.879 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.879 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.879 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.879 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:44.879 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.879 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:44.879 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:44.879 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:44.879 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTg2ODBlZmFkZDRlYzI5NTJhMTMyOWRhZThhMTVjYjFmN2M1Y2Q0ZTFjNmVhMzAzHKBfvQ==: 00:27:44.879 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGFhYzhmZTA5MmE0MGMxNmZkNGRjYmY2MTQ1ZDdlY2MDw7kf: 00:27:44.879 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:44.879 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:44.879 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTg2ODBlZmFkZDRlYzI5NTJhMTMyOWRhZThhMTVjYjFmN2M1Y2Q0ZTFjNmVhMzAzHKBfvQ==: 00:27:44.879 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGFhYzhmZTA5MmE0MGMxNmZkNGRjYmY2MTQ1ZDdlY2MDw7kf: ]] 00:27:44.879 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGFhYzhmZTA5MmE0MGMxNmZkNGRjYmY2MTQ1ZDdlY2MDw7kf: 00:27:44.879 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:44.879 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.879 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:44.879 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:44.879 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:44.879 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.879 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:44.879 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.879 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.879 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.879 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.879 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:44.879 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:44.879 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:44.879 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.879 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.879 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:44.879 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.879 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:44.879 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:44.879 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:44.879 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:44.879 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.879 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.139 nvme0n1 00:27:45.139 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.139 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.139 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.139 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.139 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.400 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.400 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.400 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.400 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.400 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.400 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.400 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.400 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:45.400 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.400 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:45.400 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:45.400 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:45.400 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmExYjEyMjAxMjFlYWVkYzQ2YjVjNzYwYzgwYzg1MTQ4MTkzOTM2MTZlZTQ4MTViMGRiODI0OTg3YjhiYzA4Zg2II6E=: 00:27:45.400 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:45.400 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:45.400 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:45.400 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YmExYjEyMjAxMjFlYWVkYzQ2YjVjNzYwYzgwYzg1MTQ4MTkzOTM2MTZlZTQ4MTViMGRiODI0OTg3YjhiYzA4Zg2II6E=: 00:27:45.400 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:45.400 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:45.400 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.400 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:45.400 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:45.400 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:45.400 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.400 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:45.400 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.400 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.400 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.400 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.400 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:45.400 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:45.400 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:45.400 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.400 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.400 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:45.400 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.400 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:45.400 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:45.400 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:45.400 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:45.400 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.400 12:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.661 nvme0n1 00:27:45.661 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.661 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.661 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.661 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.661 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.661 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.661 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.661 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.661 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.661 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.661 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.661 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:45.661 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.661 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:45.661 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.661 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:45.661 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:45.661 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:45.661 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzc1MTUwNDM4ODI4NzNjNDhkZTEyZTJmMjYzNDVlNGbneIph: 00:27:45.661 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjM2YmM0OTc2YjcxNjBjMDIxMzkxNjBkMmFiMmQzNGYxZDJiODQ4ZDkxMGU0NWI1MWE1YmUwODBjZjc5Yjc1ZfWOGww=: 00:27:45.661 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:45.661 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:45.661 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzc1MTUwNDM4ODI4NzNjNDhkZTEyZTJmMjYzNDVlNGbneIph: 00:27:45.661 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjM2YmM0OTc2YjcxNjBjMDIxMzkxNjBkMmFiMmQzNGYxZDJiODQ4ZDkxMGU0NWI1MWE1YmUwODBjZjc5Yjc1ZfWOGww=: ]] 00:27:45.661 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjM2YmM0OTc2YjcxNjBjMDIxMzkxNjBkMmFiMmQzNGYxZDJiODQ4ZDkxMGU0NWI1MWE1YmUwODBjZjc5Yjc1ZfWOGww=: 00:27:45.661 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:45.661 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.661 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:45.661 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:45.661 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:45.662 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.662 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:45.662 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.662 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.662 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.662 12:04:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.662 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:45.662 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:45.662 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:45.662 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.662 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.662 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:45.662 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.662 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:45.662 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:45.662 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:45.662 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:45.662 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.662 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.232 nvme0n1 00:27:46.232 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.232 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.232 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.232 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.232 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.232 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.232 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.232 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.232 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.232 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.232 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.232 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.232 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:46.232 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.232 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:46.232 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:46.232 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:46.232 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YjkzZDBjZjcxMTFiMDVmOWE3YTVhODEyOGUxNDhiOWI2MmM0OWIyNWNlOTAwNjMzI5RO8A==: 00:27:46.232 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY2ZTI5NTAzMjBiMTU4MGQ2OGRmOTgyMTJiNDc5YjJiNWQ4MWYzNDQ4Yjk4ZWEwlq458g==: 00:27:46.232 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:46.232 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:46.232 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkzZDBjZjcxMTFiMDVmOWE3YTVhODEyOGUxNDhiOWI2MmM0OWIyNWNlOTAwNjMzI5RO8A==: 00:27:46.232 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY2ZTI5NTAzMjBiMTU4MGQ2OGRmOTgyMTJiNDc5YjJiNWQ4MWYzNDQ4Yjk4ZWEwlq458g==: ]] 00:27:46.232 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY2ZTI5NTAzMjBiMTU4MGQ2OGRmOTgyMTJiNDc5YjJiNWQ4MWYzNDQ4Yjk4ZWEwlq458g==: 00:27:46.232 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:27:46.232 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.232 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:46.232 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:46.232 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:46.232 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.232 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:46.232 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.232 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.232 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.232 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.232 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:46.232 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:46.232 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:46.232 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.232 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.233 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:46.233 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.233 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:46.233 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:46.233 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:46.233 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:46.233 12:04:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.233 12:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.492 nvme0n1 00:27:46.493 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.493 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.493 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.493 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.493 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.493 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.753 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.753 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.753 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.753 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.753 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.753 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.753 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:46.753 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.753 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:46.753 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:46.753 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:46.753 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY2NTAxYzE5ODFhZTUzZTNkMmU2OTAwMGNlNzY5MzExh1g7: 00:27:46.753 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTc5NDIxZjgwYjdlZTQ4ZTAwMzA1NDFkNDZmZmM3Zjj8twwa: 00:27:46.753 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:46.753 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:46.753 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY2NTAxYzE5ODFhZTUzZTNkMmU2OTAwMGNlNzY5MzExh1g7: 00:27:46.753 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTc5NDIxZjgwYjdlZTQ4ZTAwMzA1NDFkNDZmZmM3Zjj8twwa: ]] 00:27:46.753 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTc5NDIxZjgwYjdlZTQ4ZTAwMzA1NDFkNDZmZmM3Zjj8twwa: 00:27:46.753 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:46.753 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.753 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:46.753 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:46.753 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:46.753 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.753 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:46.753 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.753 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.753 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.753 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.753 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:46.753 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:46.753 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:46.753 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.753 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.753 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:46.753 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.753 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:46.753 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:46.753 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:46.753 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:46.753 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.753 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.014 nvme0n1 00:27:47.014 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.014 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.014 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.014 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.014 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.014 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.014 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.014 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.014 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.014 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.275 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.275 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.275 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:27:47.275 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.275 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:47.275 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:47.275 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:47.275 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTg2ODBlZmFkZDRlYzI5NTJhMTMyOWRhZThhMTVjYjFmN2M1Y2Q0ZTFjNmVhMzAzHKBfvQ==: 00:27:47.275 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGFhYzhmZTA5MmE0MGMxNmZkNGRjYmY2MTQ1ZDdlY2MDw7kf: 00:27:47.275 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:47.275 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:47.275 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTg2ODBlZmFkZDRlYzI5NTJhMTMyOWRhZThhMTVjYjFmN2M1Y2Q0ZTFjNmVhMzAzHKBfvQ==: 00:27:47.275 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGFhYzhmZTA5MmE0MGMxNmZkNGRjYmY2MTQ1ZDdlY2MDw7kf: ]] 00:27:47.275 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGFhYzhmZTA5MmE0MGMxNmZkNGRjYmY2MTQ1ZDdlY2MDw7kf: 00:27:47.275 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:47.275 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.275 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:47.275 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:47.275 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:47.275 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.275 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:47.275 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.275 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.275 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.275 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.275 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:47.275 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:47.275 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:47.275 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.275 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.275 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:47.275 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.275 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:47.275 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:47.275 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:47.275 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:47.275 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.275 12:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.536 nvme0n1 00:27:47.537 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.537 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.537 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.537 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.537 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.537 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.537 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.537 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.537 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.537 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.537 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.537 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.537 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:47.537 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.537 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:47.537 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:47.537 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:47.537 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmExYjEyMjAxMjFlYWVkYzQ2YjVjNzYwYzgwYzg1MTQ4MTkzOTM2MTZlZTQ4MTViMGRiODI0OTg3YjhiYzA4Zg2II6E=: 00:27:47.537 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:47.537 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:47.537 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:47.537 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmExYjEyMjAxMjFlYWVkYzQ2YjVjNzYwYzgwYzg1MTQ4MTkzOTM2MTZlZTQ4MTViMGRiODI0OTg3YjhiYzA4Zg2II6E=: 00:27:47.537 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:47.537 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:47.537 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.537 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:47.537 12:04:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:47.537 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:47.537 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.537 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:47.537 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.537 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.809 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.809 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.809 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:47.809 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:47.809 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:47.809 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.809 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.809 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:47.809 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.809 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:47.809 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:47.809 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:47.809 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:47.809 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.809 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.078 nvme0n1 00:27:48.078 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.078 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.078 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.078 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.078 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.078 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.078 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.078 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.078 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.079 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.079 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.079 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:48.079 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.079 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:48.079 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.079 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:48.079 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:48.079 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:48.079 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzc1MTUwNDM4ODI4NzNjNDhkZTEyZTJmMjYzNDVlNGbneIph: 00:27:48.079 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjM2YmM0OTc2YjcxNjBjMDIxMzkxNjBkMmFiMmQzNGYxZDJiODQ4ZDkxMGU0NWI1MWE1YmUwODBjZjc5Yjc1ZfWOGww=: 00:27:48.079 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:48.079 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:48.079 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzc1MTUwNDM4ODI4NzNjNDhkZTEyZTJmMjYzNDVlNGbneIph: 00:27:48.079 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjM2YmM0OTc2YjcxNjBjMDIxMzkxNjBkMmFiMmQzNGYxZDJiODQ4ZDkxMGU0NWI1MWE1YmUwODBjZjc5Yjc1ZfWOGww=: ]] 00:27:48.079 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjM2YmM0OTc2YjcxNjBjMDIxMzkxNjBkMmFiMmQzNGYxZDJiODQ4ZDkxMGU0NWI1MWE1YmUwODBjZjc5Yjc1ZfWOGww=: 00:27:48.079 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:48.079 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.079 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:48.079 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:48.079 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:48.079 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.079 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:48.079 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.079 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.079 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.079 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.079 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:48.079 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:48.079 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:48.079 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.079 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.079 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:48.079 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.079 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:48.079 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:48.079 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:48.079 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:48.079 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.079 12:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.760 nvme0n1 00:27:48.760 12:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.760 12:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.760 12:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.760 12:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.760 12:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.760 12:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.761 12:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.761 12:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.761 12:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.761 12:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.761 12:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.761 12:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.761 12:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:48.761 12:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.761 12:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:48.761 12:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:48.761 12:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:48.761 12:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkzZDBjZjcxMTFiMDVmOWE3YTVhODEyOGUxNDhiOWI2MmM0OWIyNWNlOTAwNjMzI5RO8A==: 00:27:48.761 12:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY2ZTI5NTAzMjBiMTU4MGQ2OGRmOTgyMTJiNDc5YjJiNWQ4MWYzNDQ4Yjk4ZWEwlq458g==: 00:27:48.761 12:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:48.761 12:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:48.761 12:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YjkzZDBjZjcxMTFiMDVmOWE3YTVhODEyOGUxNDhiOWI2MmM0OWIyNWNlOTAwNjMzI5RO8A==: 00:27:48.761 12:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY2ZTI5NTAzMjBiMTU4MGQ2OGRmOTgyMTJiNDc5YjJiNWQ4MWYzNDQ4Yjk4ZWEwlq458g==: ]] 00:27:48.761 12:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY2ZTI5NTAzMjBiMTU4MGQ2OGRmOTgyMTJiNDc5YjJiNWQ4MWYzNDQ4Yjk4ZWEwlq458g==: 00:27:48.761 12:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:48.761 12:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.761 12:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:48.761 12:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:48.761 12:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:48.761 12:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.761 12:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:48.761 12:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.761 12:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.035 12:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.035 12:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.036 12:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:49.036 12:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:49.036 12:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:49.036 12:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.036 12:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.036 12:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:49.036 12:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.036 12:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:49.036 12:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:49.036 12:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:49.036 12:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:49.036 12:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.036 12:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.607 nvme0n1 00:27:49.607 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.607 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.607 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.607 12:04:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.607 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.607 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.607 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.607 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.607 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.607 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.607 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.607 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.607 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:49.607 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.607 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:49.607 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:49.607 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:49.607 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY2NTAxYzE5ODFhZTUzZTNkMmU2OTAwMGNlNzY5MzExh1g7: 00:27:49.607 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTc5NDIxZjgwYjdlZTQ4ZTAwMzA1NDFkNDZmZmM3Zjj8twwa: 00:27:49.607 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:49.607 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:49.607 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY2NTAxYzE5ODFhZTUzZTNkMmU2OTAwMGNlNzY5MzExh1g7: 00:27:49.607 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTc5NDIxZjgwYjdlZTQ4ZTAwMzA1NDFkNDZmZmM3Zjj8twwa: ]] 00:27:49.607 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTc5NDIxZjgwYjdlZTQ4ZTAwMzA1NDFkNDZmZmM3Zjj8twwa: 00:27:49.607 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:49.607 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.607 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:49.607 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:49.607 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:49.608 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.608 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:49.608 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.608 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.608 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.608 12:04:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.608 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:49.608 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:49.608 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:49.608 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.608 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.608 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:49.608 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.608 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:49.608 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:49.608 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:49.608 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:49.608 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.608 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.179 nvme0n1 00:27:50.179 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.179 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.179 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.179 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.179 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.179 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.179 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.179 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.179 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.179 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.440 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.440 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.440 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:27:50.440 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.440 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:50.440 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:50.440 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:50.440 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZTg2ODBlZmFkZDRlYzI5NTJhMTMyOWRhZThhMTVjYjFmN2M1Y2Q0ZTFjNmVhMzAzHKBfvQ==: 00:27:50.440 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGFhYzhmZTA5MmE0MGMxNmZkNGRjYmY2MTQ1ZDdlY2MDw7kf: 00:27:50.440 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:50.440 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:50.440 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTg2ODBlZmFkZDRlYzI5NTJhMTMyOWRhZThhMTVjYjFmN2M1Y2Q0ZTFjNmVhMzAzHKBfvQ==: 00:27:50.440 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGFhYzhmZTA5MmE0MGMxNmZkNGRjYmY2MTQ1ZDdlY2MDw7kf: ]] 00:27:50.440 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGFhYzhmZTA5MmE0MGMxNmZkNGRjYmY2MTQ1ZDdlY2MDw7kf: 00:27:50.440 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:27:50.440 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.440 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:50.440 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:50.440 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:50.440 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.440 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:50.440 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.440 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.440 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.440 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.440 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:50.440 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:50.440 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:50.440 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.440 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.440 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:50.440 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.440 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:50.440 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:50.440 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:50.440 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:50.440 12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.440 
12:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.012 nvme0n1 00:27:51.012 12:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.012 12:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.012 12:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.012 12:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.012 12:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.012 12:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.012 12:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.012 12:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.012 12:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.012 12:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.012 12:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.012 12:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.012 12:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:51.012 12:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.012 12:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:51.012 12:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:51.012 12:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:51.012 12:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmExYjEyMjAxMjFlYWVkYzQ2YjVjNzYwYzgwYzg1MTQ4MTkzOTM2MTZlZTQ4MTViMGRiODI0OTg3YjhiYzA4Zg2II6E=: 00:27:51.012 12:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:51.012 12:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:51.012 12:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:51.012 12:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmExYjEyMjAxMjFlYWVkYzQ2YjVjNzYwYzgwYzg1MTQ4MTkzOTM2MTZlZTQ4MTViMGRiODI0OTg3YjhiYzA4Zg2II6E=: 00:27:51.012 12:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:51.012 12:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:27:51.012 12:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.012 12:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:51.012 12:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:51.012 12:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:51.012 12:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.012 12:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:51.012 12:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.012 12:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.012 12:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.012 12:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.012 12:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:51.012 12:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:51.012 12:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:51.012 12:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.012 12:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.012 12:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:51.012 12:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.012 12:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:51.012 12:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:51.012 12:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:51.013 12:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:51.013 12:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.013 12:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.584 nvme0n1 00:27:51.584 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.585 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.585 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.585 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.585 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.585 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkzZDBjZjcxMTFiMDVmOWE3YTVhODEyOGUxNDhiOWI2MmM0OWIyNWNlOTAwNjMzI5RO8A==: 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY2ZTI5NTAzMjBiMTU4MGQ2OGRmOTgyMTJiNDc5YjJiNWQ4MWYzNDQ4Yjk4ZWEwlq458g==: 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkzZDBjZjcxMTFiMDVmOWE3YTVhODEyOGUxNDhiOWI2MmM0OWIyNWNlOTAwNjMzI5RO8A==: 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY2ZTI5NTAzMjBiMTU4MGQ2OGRmOTgyMTJiNDc5YjJiNWQ4MWYzNDQ4Yjk4ZWEwlq458g==: ]] 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY2ZTI5NTAzMjBiMTU4MGQ2OGRmOTgyMTJiNDc5YjJiNWQ4MWYzNDQ4Yjk4ZWEwlq458g==: 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.846 request: 00:27:51.846 { 00:27:51.846 "name": "nvme0", 00:27:51.846 "trtype": "tcp", 00:27:51.846 "traddr": "10.0.0.1", 00:27:51.846 "adrfam": "ipv4", 00:27:51.846 "trsvcid": "4420", 00:27:51.846 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:51.846 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:51.846 "prchk_reftag": false, 00:27:51.846 "prchk_guard": false, 00:27:51.846 "hdgst": false, 00:27:51.846 "ddgst": false, 00:27:51.846 "allow_unrecognized_csi": false, 00:27:51.846 "method": "bdev_nvme_attach_controller", 00:27:51.846 "req_id": 1 00:27:51.846 } 00:27:51.846 Got JSON-RPC error response 00:27:51.846 response: 00:27:51.846 { 00:27:51.846 "code": -5, 00:27:51.846 "message": "Input/output error" 00:27:51.846 } 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 
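The attempts traced above exercise SPDK's DH-HMAC-CHAP support through the bdev_nvme JSON-RPC interface: the allowed digest and DH group are pinned with bdev_nvme_set_options, and the host key (plus an optional bidirectional controller key) is handed to bdev_nvme_attach_controller by name. When the key is missing or does not match what the target expects, the call fails with the code -5 "Input/output error" shown in the responses. A condensed sketch of the successful path, using the same rpc_cmd wrapper and the key names key1/ckey1 that host/auth.sh registered earlier in the run (that registration is not part of this excerpt):

  # restrict negotiation to the digest/DH group pair under test
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

  # attach with a matching host key and controller key; omitting --dhchap-key,
  # or passing a key the target does not expect, makes this call return -5
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1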
00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:51.846 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:51.847 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:51.847 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.847 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.847 request: 00:27:51.847 { 00:27:51.847 "name": "nvme0", 00:27:51.847 "trtype": "tcp", 00:27:51.847 "traddr": "10.0.0.1", 00:27:51.847 "adrfam": "ipv4", 00:27:51.847 "trsvcid": "4420", 00:27:51.847 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:51.847 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:51.847 "prchk_reftag": false, 00:27:51.847 "prchk_guard": false, 00:27:51.847 "hdgst": false, 00:27:51.847 "ddgst": false, 00:27:51.847 "dhchap_key": "key2", 00:27:51.847 "allow_unrecognized_csi": false, 00:27:51.847 "method": "bdev_nvme_attach_controller", 00:27:51.847 "req_id": 1 00:27:51.847 } 00:27:51.847 Got JSON-RPC error response 00:27:51.847 response: 00:27:51.847 { 00:27:51.847 "code": -5, 00:27:51.847 "message": "Input/output error" 00:27:51.847 } 00:27:51.847 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:51.847 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:51.847 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:51.847 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:51.847 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:51.847 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.847 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:27:51.847 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.847 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.108 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.108 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
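Every expected-failure attach is wrapped in the harness's NOT helper, which inverts the exit status of rpc_cmd, and is followed by a check that the failed handshake left no controller behind. A minimal stand-in for that pattern (expect_rpc_failure is a hypothetical name used here for illustration, not the helper in autotest_common.sh):

  expect_rpc_failure() {
      # succeed only when the wrapped command fails
      if "$@"; then
          return 1
      fi
      return 0
  }

  # a key the target is not configured for must be rejected
  expect_rpc_failure rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key2

  # and the rejected handshake must not leave a half-created controller around
  (( $(rpc_cmd bdev_nvme_get_controllers | jq length) == 0 ))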
00:27:52.108 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:27:52.108 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:52.108 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:52.108 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:52.108 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.108 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.108 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:52.108 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.108 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:52.108 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:52.108 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:52.108 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:52.108 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:52.108 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:52.108 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:52.108 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:52.108 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:52.108 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:52.108 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:52.108 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.108 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.108 request: 00:27:52.108 { 00:27:52.108 "name": "nvme0", 00:27:52.108 "trtype": "tcp", 00:27:52.108 "traddr": "10.0.0.1", 00:27:52.108 "adrfam": "ipv4", 00:27:52.108 "trsvcid": "4420", 00:27:52.108 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:52.108 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:52.108 "prchk_reftag": false, 00:27:52.108 "prchk_guard": false, 00:27:52.108 "hdgst": false, 00:27:52.108 "ddgst": false, 00:27:52.108 "dhchap_key": "key1", 00:27:52.108 "dhchap_ctrlr_key": "ckey2", 00:27:52.108 "allow_unrecognized_csi": false, 00:27:52.108 "method": "bdev_nvme_attach_controller", 00:27:52.108 "req_id": 1 00:27:52.108 } 00:27:52.108 Got JSON-RPC error response 00:27:52.108 response: 00:27:52.108 { 00:27:52.108 "code": -5, 00:27:52.108 "message": "Input/output 
error" 00:27:52.108 } 00:27:52.108 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:52.108 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:52.108 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:52.108 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:52.108 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:52.108 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:27:52.108 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:52.108 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:52.108 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:52.108 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.108 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.108 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:52.108 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.108 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:52.108 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:52.108 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:52.109 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:52.109 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.109 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.370 nvme0n1 00:27:52.370 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.370 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:52.370 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.370 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:52.370 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:52.370 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:52.370 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY2NTAxYzE5ODFhZTUzZTNkMmU2OTAwMGNlNzY5MzExh1g7: 00:27:52.370 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTc5NDIxZjgwYjdlZTQ4ZTAwMzA1NDFkNDZmZmM3Zjj8twwa: 00:27:52.370 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:52.370 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:52.370 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY2NTAxYzE5ODFhZTUzZTNkMmU2OTAwMGNlNzY5MzExh1g7: 00:27:52.370 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTc5NDIxZjgwYjdlZTQ4ZTAwMzA1NDFkNDZmZmM3Zjj8twwa: ]] 00:27:52.370 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTc5NDIxZjgwYjdlZTQ4ZTAwMzA1NDFkNDZmZmM3Zjj8twwa: 00:27:52.370 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:52.370 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.370 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.370 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.370 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.370 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:27:52.370 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.370 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.370 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.370 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.370 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:52.370 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:52.370 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:52.370 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:52.370 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:52.370 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:52.370 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:52.370 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:52.370 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.370 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.370 request: 00:27:52.370 { 00:27:52.370 "name": "nvme0", 00:27:52.370 "dhchap_key": "key1", 00:27:52.370 "dhchap_ctrlr_key": "ckey2", 00:27:52.370 "method": "bdev_nvme_set_keys", 00:27:52.370 "req_id": 1 00:27:52.370 } 00:27:52.370 Got JSON-RPC error response 00:27:52.370 response: 00:27:52.370 { 00:27:52.370 "code": -13, 00:27:52.370 "message": "Permission denied" 00:27:52.370 } 00:27:52.370 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:52.370 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:52.370 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:52.370 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:52.370 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:27:52.370 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.370 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:52.370 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.370 12:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.370 12:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.370 12:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:52.371 12:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:53.754 12:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.754 12:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:53.754 12:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.754 12:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.754 12:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.754 12:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:53.754 12:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:54.697 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.697 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:54.697 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.697 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.697 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.697 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:27:54.697 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:54.697 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.698 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:54.698 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:54.698 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:54.698 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjkzZDBjZjcxMTFiMDVmOWE3YTVhODEyOGUxNDhiOWI2MmM0OWIyNWNlOTAwNjMzI5RO8A==: 00:27:54.698 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY2ZTI5NTAzMjBiMTU4MGQ2OGRmOTgyMTJiNDc5YjJiNWQ4MWYzNDQ4Yjk4ZWEwlq458g==: 00:27:54.698 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:54.698 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:54.698 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjkzZDBjZjcxMTFiMDVmOWE3YTVhODEyOGUxNDhiOWI2MmM0OWIyNWNlOTAwNjMzI5RO8A==: 00:27:54.698 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY2ZTI5NTAzMjBiMTU4MGQ2OGRmOTgyMTJiNDc5YjJiNWQ4MWYzNDQ4Yjk4ZWEwlq458g==: ]] 00:27:54.698 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZTY2ZTI5NTAzMjBiMTU4MGQ2OGRmOTgyMTJiNDc5YjJiNWQ4MWYzNDQ4Yjk4ZWEwlq458g==: 00:27:54.698 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:27:54.698 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:54.698 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:54.698 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:54.698 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.698 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.698 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:54.698 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.698 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:54.698 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:54.698 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:54.698 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:54.698 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.698 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.698 nvme0n1 00:27:54.698 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.698 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:54.698 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.698 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:54.698 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:54.698 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:54.698 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY2NTAxYzE5ODFhZTUzZTNkMmU2OTAwMGNlNzY5MzExh1g7: 00:27:54.698 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTc5NDIxZjgwYjdlZTQ4ZTAwMzA1NDFkNDZmZmM3Zjj8twwa: 00:27:54.698 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:54.698 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:54.698 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY2NTAxYzE5ODFhZTUzZTNkMmU2OTAwMGNlNzY5MzExh1g7: 00:27:54.698 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTc5NDIxZjgwYjdlZTQ4ZTAwMzA1NDFkNDZmZmM3Zjj8twwa: ]] 00:27:54.698 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTc5NDIxZjgwYjdlZTQ4ZTAwMzA1NDFkNDZmZmM3Zjj8twwa: 00:27:54.698 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:54.698 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@650 -- # local es=0 00:27:54.698 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:54.698 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:54.698 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:54.698 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:54.698 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:54.698 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:54.698 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.698 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.698 request: 00:27:54.698 { 00:27:54.698 "name": "nvme0", 00:27:54.698 "dhchap_key": "key2", 00:27:54.698 "dhchap_ctrlr_key": "ckey1", 00:27:54.698 "method": "bdev_nvme_set_keys", 00:27:54.698 "req_id": 1 00:27:54.698 } 00:27:54.698 Got JSON-RPC error response 00:27:54.698 response: 00:27:54.698 { 00:27:54.698 "code": -13, 00:27:54.698 "message": "Permission denied" 00:27:54.698 } 00:27:54.698 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:54.698 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:54.698 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:54.698 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:54.698 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:54.698 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.698 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:54.698 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.698 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.698 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.959 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:27:54.959 12:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:27:55.900 12:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.900 12:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:55.900 12:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.900 12:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.900 12:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.900 12:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:27:55.900 12:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:27:55.900 12:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:27:55.900 12:04:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:27:55.900 12:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:55.900 12:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:27:55.900 12:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:55.900 12:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:27:55.900 12:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:55.900 12:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:55.900 rmmod nvme_tcp 00:27:55.900 rmmod nvme_fabrics 00:27:55.900 12:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:55.900 12:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:27:55.900 12:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:27:55.900 12:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@515 -- # '[' -n 2080289 ']' 00:27:55.900 12:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # killprocess 2080289 00:27:55.900 12:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 2080289 ']' 00:27:55.900 12:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 2080289 00:27:55.900 12:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:27:55.900 12:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:55.900 12:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2080289 00:27:55.900 12:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:55.900 12:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:55.900 12:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2080289' 00:27:55.900 killing process with pid 2080289 00:27:55.900 12:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 2080289 00:27:55.900 12:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 2080289 00:27:56.162 12:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:56.162 12:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:56.162 12:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:56.162 12:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:27:56.162 12:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-save 00:27:56.162 12:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:56.162 12:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-restore 00:27:56.162 12:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:56.162 12:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:56.162 12:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:56.162 12:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:27:56.162 12:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:58.076 12:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:58.076 12:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:58.076 12:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:58.076 12:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:27:58.076 12:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:58.076 12:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # echo 0 00:27:58.337 12:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:58.337 12:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:58.337 12:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:58.337 12:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:58.337 12:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:27:58.337 12:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:27:58.337 12:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:02.547 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:02.547 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:02.547 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:02.547 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:02.547 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:02.547 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:02.547 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:02.547 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:02.547 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:02.547 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:02.547 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:02.547 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:02.547 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:02.547 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:02.547 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:02.547 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:02.547 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:02.547 12:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Ie1 /tmp/spdk.key-null.EKp /tmp/spdk.key-sha256.MUy /tmp/spdk.key-sha384.CLE /tmp/spdk.key-sha512.qLS /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:28:02.547 12:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:05.850 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:05.850 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:05.850 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
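After the SPDK side is shut down, cleanup dismantles the in-kernel nvmet target through configfs before unloading the modules, which is the sequence traced above. Collected into one place as a sketch (the harness also writes 0 into an enable attribute under the subsystem tree before removing entries; that exact path is not visible in this excerpt):

  # revoke the host's access and drop the host entry
  rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
  rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

  # unlink the subsystem from the port, then remove namespace, port and subsystem
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
  rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
  rmdir /sys/kernel/config/nvmet/ports/1
  rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0

  # finally unload the kernel target modules
  modprobe -r nvmet_tcp nvmet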
00:28:05.850 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:05.850 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:05.850 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:05.850 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:05.850 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:05.850 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:05.850 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:28:05.850 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:05.850 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:06.110 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:06.110 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:06.110 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:06.110 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:06.111 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:06.372 00:28:06.372 real 1m1.376s 00:28:06.372 user 0m54.928s 00:28:06.372 sys 0m16.429s 00:28:06.372 12:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:06.372 12:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.372 ************************************ 00:28:06.372 END TEST nvmf_auth_host 00:28:06.372 ************************************ 00:28:06.372 12:05:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:28:06.372 12:05:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:06.372 12:05:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:06.372 12:05:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:06.372 12:05:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.372 ************************************ 00:28:06.372 START TEST nvmf_digest 00:28:06.372 ************************************ 00:28:06.372 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:06.633 * Looking for test storage... 
00:28:06.633 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:06.633 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:06.633 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:28:06.633 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:06.633 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:06.633 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:06.633 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:06.633 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:06.633 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:28:06.633 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:28:06.633 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:28:06.633 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:28:06.633 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:28:06.633 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:28:06.633 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:28:06.633 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:06.633 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:28:06.633 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:28:06.633 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:06.633 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:06.633 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:06.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.634 --rc genhtml_branch_coverage=1 00:28:06.634 --rc genhtml_function_coverage=1 00:28:06.634 --rc genhtml_legend=1 00:28:06.634 --rc geninfo_all_blocks=1 00:28:06.634 --rc geninfo_unexecuted_blocks=1 00:28:06.634 00:28:06.634 ' 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:06.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.634 --rc genhtml_branch_coverage=1 00:28:06.634 --rc genhtml_function_coverage=1 00:28:06.634 --rc genhtml_legend=1 00:28:06.634 --rc geninfo_all_blocks=1 00:28:06.634 --rc geninfo_unexecuted_blocks=1 00:28:06.634 00:28:06.634 ' 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:06.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.634 --rc genhtml_branch_coverage=1 00:28:06.634 --rc genhtml_function_coverage=1 00:28:06.634 --rc genhtml_legend=1 00:28:06.634 --rc geninfo_all_blocks=1 00:28:06.634 --rc geninfo_unexecuted_blocks=1 00:28:06.634 00:28:06.634 ' 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:06.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.634 --rc genhtml_branch_coverage=1 00:28:06.634 --rc genhtml_function_coverage=1 00:28:06.634 --rc genhtml_legend=1 00:28:06.634 --rc geninfo_all_blocks=1 00:28:06.634 --rc geninfo_unexecuted_blocks=1 00:28:06.634 00:28:06.634 ' 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:06.634 
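Before the digest suite starts, scripts/common.sh compares the installed lcov version against 2 (the lt 1.15 2 / cmp_versions trace above) by splitting each version string on '.', '-' and ':' and comparing the fields numerically, and the result decides which --rc option names end up in LCOV_OPTS. A stripped-down sketch of that comparison; version_lt is a hypothetical name used for illustration, not the function in scripts/common.sh:

  version_lt() {
      # return 0 (true) when $1 sorts strictly below $2, compared field by field
      local -a a b
      local i
      IFS='.-:' read -ra a <<< "$1"
      IFS='.-:' read -ra b <<< "$2"
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          if (( ${a[i]:-0} < ${b[i]:-0} )); then return 0; fi
          if (( ${a[i]:-0} > ${b[i]:-0} )); then return 1; fi
      done
      return 1
  }

  if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
      echo "lcov is older than 2.x, keeping the legacy --rc option names"
  fi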
12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:06.634 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:06.634 12:05:09 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:28:06.634 12:05:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:14.778 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:14.778 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:28:14.778 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:14.778 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:14.778 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:14.778 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:14.778 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:14.778 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:28:14.778 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:14.778 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:28:14.778 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:28:14.778 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:28:14.778 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:28:14.778 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:28:14.778 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:28:14.778 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:14.778 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:14.778 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:14.778 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:14.778 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:14.778 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:14.778 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:14.778 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:14.778 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:14.778 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:14.778 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:14.778 
12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:14.778 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:14.778 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:14.778 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:14.778 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:14.778 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:14.778 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:14.778 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:14.778 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:14.778 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:14.778 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:14.778 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:14.778 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:14.779 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:14.779 Found net devices under 0000:31:00.0: cvl_0_0 
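For context, the device-discovery loop traced above resolves each supported NIC's PCI address to its kernel netdev by globbing sysfs, which is how cvl_0_0 and cvl_0_1 end up in net_devs. A minimal stand-alone sketch of that step, using the PCI addresses from the "Found 0000:31:00.x" lines above (everything else is an assumption, not the full common.sh logic):

    # Print the netdev bound to each E810 port found above (cvl_0_0 / cvl_0_1 here)
    for pci in 0000:31:00.0 0000:31:00.1; do
        ls "/sys/bus/pci/devices/$pci/net/"
    done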
00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:14.779 Found net devices under 0000:31:00.1: cvl_0_1 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # is_hw=yes 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:14.779 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:14.779 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:28:14.779 00:28:14.779 --- 10.0.0.2 ping statistics --- 00:28:14.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:14.779 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:14.779 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:14.779 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:28:14.779 00:28:14.779 --- 10.0.0.1 ping statistics --- 00:28:14.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:14.779 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # return 0 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:14.779 ************************************ 00:28:14.779 START TEST nvmf_digest_clean 00:28:14.779 ************************************ 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # nvmfpid=2097569 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # waitforlisten 2097569 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2097569 ']' 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:14.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:14.779 12:05:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:14.779 [2024-10-11 12:05:17.031945] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:28:14.779 [2024-10-11 12:05:17.032004] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:14.779 [2024-10-11 12:05:17.121191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:14.779 [2024-10-11 12:05:17.172460] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:14.779 [2024-10-11 12:05:17.172507] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:14.779 [2024-10-11 12:05:17.172523] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:14.779 [2024-10-11 12:05:17.172530] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:14.779 [2024-10-11 12:05:17.172536] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
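The nvmfappstart call above launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc, records its pid (2097569), and blocks in waitforlisten until the target answers on /var/tmp/spdk.sock. A rough hand-driven equivalent of that launch-and-wait step follows; it is not the exact waitforlisten helper, and the poll loop with rpc_get_methods is an assumption:

    # Start the target in its namespace and poll its RPC socket until it is ready
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    until ./scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done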
00:28:14.780 [2024-10-11 12:05:17.173341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:15.351 12:05:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:15.351 12:05:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:15.351 12:05:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:15.351 12:05:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:15.351 12:05:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:15.351 12:05:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:15.351 12:05:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:15.351 12:05:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:15.351 12:05:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:15.351 12:05:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.351 12:05:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:15.351 null0 00:28:15.351 [2024-10-11 12:05:17.987696] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:15.351 [2024-10-11 12:05:18.011987] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:15.351 12:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.351 12:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:28:15.351 12:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:15.351 12:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:15.351 12:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:15.351 12:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:15.351 12:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:15.351 12:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:15.351 12:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2097767 00:28:15.351 12:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2097767 /var/tmp/bperf.sock 00:28:15.351 12:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2097767 ']' 00:28:15.351 12:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:15.351 12:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:15.351 12:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:28:15.351 12:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:15.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:15.351 12:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:15.351 12:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:15.612 [2024-10-11 12:05:18.072977] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:28:15.612 [2024-10-11 12:05:18.073043] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2097767 ] 00:28:15.612 [2024-10-11 12:05:18.155814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:15.612 [2024-10-11 12:05:18.208265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:16.184 12:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:16.184 12:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:16.184 12:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:16.184 12:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:16.184 12:05:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:16.445 12:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:16.445 12:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:17.017 nvme0n1 00:28:17.017 12:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:17.017 12:05:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:17.017 Running I/O for 2 seconds... 
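On the initiator side, the 2-second run reported just below was set up by the commands traced above: bdevperf is started against /var/tmp/bperf.sock with a randread, 4096-byte, queue-depth-128 workload, its framework is initialized over RPC, an NVMe/TCP controller is attached with data digest enabled (--ddgst), and the test is kicked off through bdevperf.py. Condensed from the trace, with the /var/jenkins/... prefixes shortened but the flags unchanged:

    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests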
00:28:19.347 18760.00 IOPS, 73.28 MiB/s [2024-10-11T10:05:22.050Z] 19355.50 IOPS, 75.61 MiB/s 00:28:19.347 Latency(us) 00:28:19.347 [2024-10-11T10:05:22.050Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:19.347 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:19.347 nvme0n1 : 2.04 19003.11 74.23 0.00 0.00 6600.05 3181.23 46749.01 00:28:19.347 [2024-10-11T10:05:22.050Z] =================================================================================================================== 00:28:19.347 [2024-10-11T10:05:22.050Z] Total : 19003.11 74.23 0.00 0.00 6600.05 3181.23 46749.01 00:28:19.347 { 00:28:19.347 "results": [ 00:28:19.347 { 00:28:19.347 "job": "nvme0n1", 00:28:19.347 "core_mask": "0x2", 00:28:19.347 "workload": "randread", 00:28:19.347 "status": "finished", 00:28:19.347 "queue_depth": 128, 00:28:19.347 "io_size": 4096, 00:28:19.347 "runtime": 2.043823, 00:28:19.347 "iops": 19003.11328329312, 00:28:19.347 "mibps": 74.23091126286376, 00:28:19.347 "io_failed": 0, 00:28:19.347 "io_timeout": 0, 00:28:19.347 "avg_latency_us": 6600.054518396457, 00:28:19.347 "min_latency_us": 3181.2266666666665, 00:28:19.347 "max_latency_us": 46749.013333333336 00:28:19.347 } 00:28:19.347 ], 00:28:19.347 "core_count": 1 00:28:19.347 } 00:28:19.347 12:05:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:19.347 12:05:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:19.347 12:05:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:19.347 12:05:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:19.347 | select(.opcode=="crc32c") 00:28:19.347 | "\(.module_name) \(.executed)"' 00:28:19.347 12:05:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:19.347 12:05:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:19.347 12:05:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:19.347 12:05:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:19.347 12:05:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:19.347 12:05:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2097767 00:28:19.347 12:05:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2097767 ']' 00:28:19.347 12:05:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2097767 00:28:19.347 12:05:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:19.347 12:05:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:19.347 12:05:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2097767 00:28:19.347 12:05:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:19.347 12:05:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:28:19.347 12:05:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2097767' 00:28:19.347 killing process with pid 2097767 00:28:19.347 12:05:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2097767 00:28:19.347 Received shutdown signal, test time was about 2.000000 seconds 00:28:19.347 00:28:19.347 Latency(us) 00:28:19.347 [2024-10-11T10:05:22.050Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:19.347 [2024-10-11T10:05:22.050Z] =================================================================================================================== 00:28:19.347 [2024-10-11T10:05:22.050Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:19.347 12:05:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2097767 00:28:19.608 12:05:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:19.608 12:05:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:19.608 12:05:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:19.608 12:05:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:19.608 12:05:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:19.608 12:05:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:19.608 12:05:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:19.608 12:05:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2098466 00:28:19.608 12:05:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2098466 /var/tmp/bperf.sock 00:28:19.608 12:05:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2098466 ']' 00:28:19.608 12:05:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:19.608 12:05:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:19.608 12:05:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:19.608 12:05:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:19.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:19.608 12:05:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:19.608 12:05:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:19.608 [2024-10-11 12:05:22.119330] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:28:19.608 [2024-10-11 12:05:22.119385] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2098466 ] 00:28:19.608 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:19.608 Zero copy mechanism will not be used. 00:28:19.608 [2024-10-11 12:05:22.198160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:19.608 [2024-10-11 12:05:22.228228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:20.550 12:05:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:20.550 12:05:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:20.550 12:05:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:20.550 12:05:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:20.550 12:05:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:20.550 12:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:20.550 12:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:21.121 nvme0n1 00:28:21.121 12:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:21.121 12:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:21.121 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:21.121 Zero copy mechanism will not be used. 00:28:21.121 Running I/O for 2 seconds... 
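The MiB/s column in these bdevperf tables is simply IOPS multiplied by the I/O size (4096 or 131072 bytes in this test), so the two columns can be cross-checked directly; for example:

    19003.11 IOPS x   4096 B ≈  77.8 MB/s ≈  74.23 MiB/s   (4 KiB randread run above)
     7261.66 IOPS x 131072 B ≈ 951.8 MB/s ≈ 907.71 MiB/s   (128 KiB randread run below)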
00:28:23.003 7124.00 IOPS, 890.50 MiB/s [2024-10-11T10:05:25.706Z] 7262.50 IOPS, 907.81 MiB/s 00:28:23.003 Latency(us) 00:28:23.003 [2024-10-11T10:05:25.706Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:23.003 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:23.003 nvme0n1 : 2.00 7261.66 907.71 0.00 0.00 2200.97 610.99 10158.08 00:28:23.003 [2024-10-11T10:05:25.706Z] =================================================================================================================== 00:28:23.003 [2024-10-11T10:05:25.706Z] Total : 7261.66 907.71 0.00 0.00 2200.97 610.99 10158.08 00:28:23.003 { 00:28:23.003 "results": [ 00:28:23.003 { 00:28:23.003 "job": "nvme0n1", 00:28:23.003 "core_mask": "0x2", 00:28:23.003 "workload": "randread", 00:28:23.003 "status": "finished", 00:28:23.003 "queue_depth": 16, 00:28:23.003 "io_size": 131072, 00:28:23.003 "runtime": 2.002435, 00:28:23.003 "iops": 7261.658930252418, 00:28:23.003 "mibps": 907.7073662815523, 00:28:23.003 "io_failed": 0, 00:28:23.003 "io_timeout": 0, 00:28:23.003 "avg_latency_us": 2200.9657364234463, 00:28:23.003 "min_latency_us": 610.9866666666667, 00:28:23.003 "max_latency_us": 10158.08 00:28:23.003 } 00:28:23.003 ], 00:28:23.003 "core_count": 1 00:28:23.003 } 00:28:23.003 12:05:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:23.003 12:05:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:23.003 12:05:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:23.003 12:05:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:23.003 | select(.opcode=="crc32c") 00:28:23.003 | "\(.module_name) \(.executed)"' 00:28:23.003 12:05:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:23.265 12:05:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:23.265 12:05:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:23.265 12:05:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:23.265 12:05:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:23.265 12:05:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2098466 00:28:23.265 12:05:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2098466 ']' 00:28:23.265 12:05:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2098466 00:28:23.265 12:05:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:23.265 12:05:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:23.265 12:05:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2098466 00:28:23.265 12:05:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:23.265 12:05:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 
= sudo ']' 00:28:23.265 12:05:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2098466' 00:28:23.265 killing process with pid 2098466 00:28:23.265 12:05:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2098466 00:28:23.265 Received shutdown signal, test time was about 2.000000 seconds 00:28:23.265 00:28:23.265 Latency(us) 00:28:23.265 [2024-10-11T10:05:25.968Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:23.265 [2024-10-11T10:05:25.968Z] =================================================================================================================== 00:28:23.265 [2024-10-11T10:05:25.968Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:23.265 12:05:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2098466 00:28:23.525 12:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:23.525 12:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:23.525 12:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:23.525 12:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:23.525 12:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:23.525 12:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:23.525 12:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:23.525 12:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2099324 00:28:23.525 12:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2099324 /var/tmp/bperf.sock 00:28:23.525 12:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2099324 ']' 00:28:23.525 12:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:23.525 12:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:23.525 12:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:23.525 12:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:23.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:23.525 12:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:23.525 12:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:23.525 [2024-10-11 12:05:26.053585] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:28:23.525 [2024-10-11 12:05:26.053640] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2099324 ] 00:28:23.525 [2024-10-11 12:05:26.128251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:23.525 [2024-10-11 12:05:26.157728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:23.525 12:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:23.525 12:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:23.525 12:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:23.525 12:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:23.525 12:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:23.785 12:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:23.785 12:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:24.354 nvme0n1 00:28:24.354 12:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:24.354 12:05:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:24.354 Running I/O for 2 seconds... 
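Each run then verifies which accel module actually computed the digest CRCs: get_accel_stats reads accel_get_stats from the bperf socket and filters the crc32c opcode, and with DSA disabled throughout (scan_dsa=false) the expected module is software. The check reduces to the following, using the same RPC and jq filter as the trace:

    ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    # expected here: 'software <executed-count>', satisfying the [[ software == software ]] test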
00:28:26.238 30250.00 IOPS, 118.16 MiB/s [2024-10-11T10:05:28.941Z] 30399.50 IOPS, 118.75 MiB/s 00:28:26.238 Latency(us) 00:28:26.238 [2024-10-11T10:05:28.941Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:26.238 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:26.238 nvme0n1 : 2.00 30418.44 118.82 0.00 0.00 4203.57 2129.92 15837.87 00:28:26.238 [2024-10-11T10:05:28.941Z] =================================================================================================================== 00:28:26.238 [2024-10-11T10:05:28.941Z] Total : 30418.44 118.82 0.00 0.00 4203.57 2129.92 15837.87 00:28:26.238 { 00:28:26.238 "results": [ 00:28:26.238 { 00:28:26.238 "job": "nvme0n1", 00:28:26.238 "core_mask": "0x2", 00:28:26.238 "workload": "randwrite", 00:28:26.238 "status": "finished", 00:28:26.238 "queue_depth": 128, 00:28:26.238 "io_size": 4096, 00:28:26.238 "runtime": 2.002963, 00:28:26.238 "iops": 30418.43508841651, 00:28:26.238 "mibps": 118.822012064127, 00:28:26.238 "io_failed": 0, 00:28:26.238 "io_timeout": 0, 00:28:26.238 "avg_latency_us": 4203.566583397618, 00:28:26.238 "min_latency_us": 2129.92, 00:28:26.238 "max_latency_us": 15837.866666666667 00:28:26.238 } 00:28:26.238 ], 00:28:26.238 "core_count": 1 00:28:26.238 } 00:28:26.238 12:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:26.238 12:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:26.238 12:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:26.238 12:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:26.238 | select(.opcode=="crc32c") 00:28:26.238 | "\(.module_name) \(.executed)"' 00:28:26.238 12:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:26.499 12:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:26.499 12:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:26.499 12:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:26.499 12:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:26.499 12:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2099324 00:28:26.499 12:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2099324 ']' 00:28:26.499 12:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2099324 00:28:26.499 12:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:26.499 12:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:26.499 12:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2099324 00:28:26.499 12:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:26.499 12:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:28:26.499 12:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2099324' 00:28:26.499 killing process with pid 2099324 00:28:26.499 12:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2099324 00:28:26.499 Received shutdown signal, test time was about 2.000000 seconds 00:28:26.499 00:28:26.499 Latency(us) 00:28:26.499 [2024-10-11T10:05:29.202Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:26.499 [2024-10-11T10:05:29.203Z] =================================================================================================================== 00:28:26.500 [2024-10-11T10:05:29.203Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:26.500 12:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2099324 00:28:26.760 12:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:26.760 12:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:26.760 12:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:26.760 12:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:26.760 12:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:26.760 12:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:26.760 12:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:26.760 12:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2099874 00:28:26.760 12:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2099874 /var/tmp/bperf.sock 00:28:26.760 12:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2099874 ']' 00:28:26.760 12:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:26.760 12:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:26.760 12:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:26.760 12:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:26.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:26.761 12:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:26.761 12:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:26.761 [2024-10-11 12:05:29.298749] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:28:26.761 [2024-10-11 12:05:29.298805] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2099874 ] 00:28:26.761 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:26.761 Zero copy mechanism will not be used. 00:28:26.761 [2024-10-11 12:05:29.376255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:26.761 [2024-10-11 12:05:29.405593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:27.704 12:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:27.704 12:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:27.704 12:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:27.704 12:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:27.704 12:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:27.704 12:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:27.704 12:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:27.965 nvme0n1 00:28:27.965 12:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:27.965 12:05:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:28.225 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:28.225 Zero copy mechanism will not be used. 00:28:28.225 Running I/O for 2 seconds... 
00:28:30.109 7112.00 IOPS, 889.00 MiB/s [2024-10-11T10:05:32.812Z] 7192.50 IOPS, 899.06 MiB/s 00:28:30.109 Latency(us) 00:28:30.109 [2024-10-11T10:05:32.812Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:30.109 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:30.109 nvme0n1 : 2.00 7190.15 898.77 0.00 0.00 2221.98 1338.03 10431.15 00:28:30.109 [2024-10-11T10:05:32.812Z] =================================================================================================================== 00:28:30.109 [2024-10-11T10:05:32.812Z] Total : 7190.15 898.77 0.00 0.00 2221.98 1338.03 10431.15 00:28:30.109 { 00:28:30.109 "results": [ 00:28:30.109 { 00:28:30.109 "job": "nvme0n1", 00:28:30.109 "core_mask": "0x2", 00:28:30.109 "workload": "randwrite", 00:28:30.109 "status": "finished", 00:28:30.109 "queue_depth": 16, 00:28:30.109 "io_size": 131072, 00:28:30.109 "runtime": 2.003018, 00:28:30.109 "iops": 7190.150063554097, 00:28:30.109 "mibps": 898.7687579442621, 00:28:30.109 "io_failed": 0, 00:28:30.109 "io_timeout": 0, 00:28:30.109 "avg_latency_us": 2221.983539323242, 00:28:30.109 "min_latency_us": 1338.0266666666666, 00:28:30.109 "max_latency_us": 10431.146666666667 00:28:30.109 } 00:28:30.109 ], 00:28:30.109 "core_count": 1 00:28:30.109 } 00:28:30.109 12:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:30.109 12:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:30.109 12:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:30.109 12:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:30.109 | select(.opcode=="crc32c") 00:28:30.109 | "\(.module_name) \(.executed)"' 00:28:30.109 12:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:30.370 12:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:30.370 12:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:30.370 12:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:30.370 12:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:30.370 12:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2099874 00:28:30.370 12:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2099874 ']' 00:28:30.370 12:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2099874 00:28:30.370 12:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:30.370 12:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:30.370 12:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2099874 00:28:30.370 12:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:30.370 12:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # 
'[' reactor_1 = sudo ']' 00:28:30.370 12:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2099874' 00:28:30.370 killing process with pid 2099874 00:28:30.370 12:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2099874 00:28:30.370 Received shutdown signal, test time was about 2.000000 seconds 00:28:30.370 00:28:30.370 Latency(us) 00:28:30.370 [2024-10-11T10:05:33.073Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:30.370 [2024-10-11T10:05:33.073Z] =================================================================================================================== 00:28:30.370 [2024-10-11T10:05:33.073Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:30.370 12:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2099874 00:28:30.370 12:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2097569 00:28:30.370 12:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2097569 ']' 00:28:30.370 12:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2097569 00:28:30.370 12:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:30.370 12:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:30.370 12:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2097569 00:28:30.632 12:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:30.632 12:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:30.632 12:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2097569' 00:28:30.632 killing process with pid 2097569 00:28:30.632 12:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2097569 00:28:30.632 12:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2097569 00:28:30.632 00:28:30.632 real 0m16.265s 00:28:30.632 user 0m31.996s 00:28:30.632 sys 0m3.676s 00:28:30.632 12:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:30.632 12:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:30.632 ************************************ 00:28:30.632 END TEST nvmf_digest_clean 00:28:30.632 ************************************ 00:28:30.632 12:05:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:30.632 12:05:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:30.632 12:05:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:30.632 12:05:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:30.632 ************************************ 00:28:30.632 START TEST nvmf_digest_error 00:28:30.632 ************************************ 00:28:30.632 12:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # 
run_digest_error 00:28:30.632 12:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:30.632 12:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:30.632 12:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:30.632 12:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:30.632 12:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # nvmfpid=2100843 00:28:30.632 12:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # waitforlisten 2100843 00:28:30.632 12:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:30.632 12:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2100843 ']' 00:28:30.632 12:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:30.632 12:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:30.632 12:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:30.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:30.632 12:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:30.632 12:05:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:30.893 [2024-10-11 12:05:33.372617] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:28:30.893 [2024-10-11 12:05:33.372677] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:30.893 [2024-10-11 12:05:33.460335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:30.893 [2024-10-11 12:05:33.494332] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:30.893 [2024-10-11 12:05:33.494363] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:30.893 [2024-10-11 12:05:33.494369] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:30.893 [2024-10-11 12:05:33.494374] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:30.893 [2024-10-11 12:05:33.494378] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
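Unlike the clean variant above, nvmf_digest_error routes the crc32c opcode through the accel "error" module on the target and then injects failures, so digest mismatches are exercised end-to-end; the knobs appear in the RPCs traced below. A condensed sketch against the target's default RPC socket, with the same RPC names and arguments as the trace (the precise meaning of -i 256 is as used by digest.sh and is not asserted here):

    ./scripts/rpc.py accel_assign_opc -o crc32c -m error                    # crc32c handled by the error module
    ./scripts/rpc.py accel_error_inject_error -o crc32c -t disable          # start with injection off
    ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256   # then arm crc32c corruption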
00:28:30.893 [2024-10-11 12:05:33.494906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:31.463 12:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:31.463 12:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:28:31.463 12:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:31.463 12:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:31.463 12:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:31.724 12:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:31.724 12:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:31.724 12:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.724 12:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:31.724 [2024-10-11 12:05:34.204856] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:31.724 12:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.724 12:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:31.724 12:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:31.724 12:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.724 12:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:31.724 null0 00:28:31.724 [2024-10-11 12:05:34.282874] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:31.724 [2024-10-11 12:05:34.307059] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:31.724 12:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.724 12:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:31.724 12:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:31.724 12:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:31.724 12:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:31.724 12:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:31.724 12:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2100878 00:28:31.724 12:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2100878 /var/tmp/bperf.sock 00:28:31.724 12:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2100878 ']' 00:28:31.724 12:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:31.724 12:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
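Judging from the notices above (crc32c routed to the 'error' accel module, a null0 bdev, TCP transport init, a listener on 10.0.0.2:4420), the target-side configuration amounts to roughly the calls below. The rpc_cmd heredoc itself is not echoed in the log, so the exact sequence and parameters are an assumption using standard SPDK RPC names; the subsystem NQN is taken from the attach_controller call further down, and the bdev size/block size are illustrative.

# Assumed equivalent of the un-echoed rpc_cmd heredoc in host/digest.sh.
RPC='./scripts/rpc.py -s /var/tmp/spdk.sock'

$RPC accel_assign_opc -o crc32c -m error        # route crc32c through the error-injection module
$RPC framework_start_init                        # leave the --wait-for-rpc state
$RPC bdev_null_create null0 100 4096             # size_mb / block_size are illustrative
$RPC nvmf_create_transport -t tcp
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420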
host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:28:31.724 12:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:31.724 12:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:31.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:31.724 12:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:31.724 12:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:31.724 [2024-10-11 12:05:34.362550] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:28:31.724 [2024-10-11 12:05:34.362598] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2100878 ] 00:28:31.986 [2024-10-11 12:05:34.440521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:31.986 [2024-10-11 12:05:34.470752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:32.559 12:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:32.559 12:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:28:32.559 12:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:32.559 12:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:32.819 12:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:32.819 12:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.819 12:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:32.819 12:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.819 12:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:32.819 12:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:33.080 nvme0n1 00:28:33.080 12:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:33.080 12:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.080 12:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # 
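On the host side, bdevperf is started with -z so it idles until it is configured over /var/tmp/bperf.sock; the traced commands then enable unlimited retries and attach the controller with the TCP data digest (--ddgst) turned on. Once the target's accel error injection starts corrupting crc32c results, every read completes with the data digest errors that fill the rest of this run. A condensed replay of those calls, with paths shortened, is shown below; it mirrors the commands visible in the trace.

# Host-side configuration, condensed from the traced commands.
BPERF='./scripts/rpc.py -s /var/tmp/bperf.sock'

$BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
$BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
       -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Target side: have the accel 'error' module corrupt 256 crc32c results,
# so the host's data digest check fails and each I/O is retried.
./scripts/rpc.py -s /var/tmp/spdk.sock accel_error_inject_error -o crc32c -t corrupt -i 256

# Kick off the 2-second randread run inside the already-running bdevperf.
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests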
set +x 00:28:33.080 12:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.080 12:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:33.080 12:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:33.080 Running I/O for 2 seconds... 00:28:33.080 [2024-10-11 12:05:35.710761] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.080 [2024-10-11 12:05:35.710792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:19389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.080 [2024-10-11 12:05:35.710801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.080 [2024-10-11 12:05:35.721326] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.080 [2024-10-11 12:05:35.721347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:20729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.080 [2024-10-11 12:05:35.721354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.080 [2024-10-11 12:05:35.731757] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.080 [2024-10-11 12:05:35.731777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.080 [2024-10-11 12:05:35.731784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.080 [2024-10-11 12:05:35.739261] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.080 [2024-10-11 12:05:35.739280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.080 [2024-10-11 12:05:35.739292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.080 [2024-10-11 12:05:35.749222] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.080 [2024-10-11 12:05:35.749241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.080 [2024-10-11 12:05:35.749248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.080 [2024-10-11 12:05:35.758366] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.080 [2024-10-11 12:05:35.758383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:14904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.080 [2024-10-11 12:05:35.758390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.080 [2024-10-11 12:05:35.766671] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.080 [2024-10-11 12:05:35.766689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:14961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.080 [2024-10-11 12:05:35.766695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.080 [2024-10-11 12:05:35.774641] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.080 [2024-10-11 12:05:35.774660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:18660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.080 [2024-10-11 12:05:35.774666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.080 [2024-10-11 12:05:35.783500] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.080 [2024-10-11 12:05:35.783518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:16223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.080 [2024-10-11 12:05:35.783525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.340 [2024-10-11 12:05:35.792765] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.340 [2024-10-11 12:05:35.792784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:8443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.340 [2024-10-11 12:05:35.792790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.340 [2024-10-11 12:05:35.801906] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.340 [2024-10-11 12:05:35.801924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:18154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.340 [2024-10-11 12:05:35.801930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.340 [2024-10-11 12:05:35.810604] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.340 [2024-10-11 12:05:35.810621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:9750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.340 [2024-10-11 12:05:35.810628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.340 [2024-10-11 12:05:35.818940] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.340 [2024-10-11 12:05:35.818965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:13019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.340 [2024-10-11 12:05:35.818972] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.340 [2024-10-11 12:05:35.828440] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.340 [2024-10-11 12:05:35.828458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.340 [2024-10-11 12:05:35.828464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.340 [2024-10-11 12:05:35.837416] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.340 [2024-10-11 12:05:35.837434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.340 [2024-10-11 12:05:35.837440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.340 [2024-10-11 12:05:35.845725] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.340 [2024-10-11 12:05:35.845743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:14098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.340 [2024-10-11 12:05:35.845749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.340 [2024-10-11 12:05:35.854391] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.340 [2024-10-11 12:05:35.854408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:21636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.340 [2024-10-11 12:05:35.854415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.340 [2024-10-11 12:05:35.863816] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.340 [2024-10-11 12:05:35.863834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:3269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.340 [2024-10-11 12:05:35.863842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.340 [2024-10-11 12:05:35.872295] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.340 [2024-10-11 12:05:35.872313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.340 [2024-10-11 12:05:35.872320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.340 [2024-10-11 12:05:35.880487] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.340 [2024-10-11 12:05:35.880505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:33.340 [2024-10-11 12:05:35.880511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.340 [2024-10-11 12:05:35.889593] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.340 [2024-10-11 12:05:35.889610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:5824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.340 [2024-10-11 12:05:35.889617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.340 [2024-10-11 12:05:35.898540] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.340 [2024-10-11 12:05:35.898558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:11173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.340 [2024-10-11 12:05:35.898565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.340 [2024-10-11 12:05:35.907717] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.340 [2024-10-11 12:05:35.907735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:10022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.340 [2024-10-11 12:05:35.907741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.340 [2024-10-11 12:05:35.917125] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.340 [2024-10-11 12:05:35.917143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.340 [2024-10-11 12:05:35.917150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.340 [2024-10-11 12:05:35.924850] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.340 [2024-10-11 12:05:35.924870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:7080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.340 [2024-10-11 12:05:35.924877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.340 [2024-10-11 12:05:35.934164] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.340 [2024-10-11 12:05:35.934181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:10165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.340 [2024-10-11 12:05:35.934188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.340 [2024-10-11 12:05:35.943181] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.340 [2024-10-11 12:05:35.943198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 
lba:295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.340 [2024-10-11 12:05:35.943205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.340 [2024-10-11 12:05:35.952145] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.340 [2024-10-11 12:05:35.952162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:1381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.340 [2024-10-11 12:05:35.952169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.340 [2024-10-11 12:05:35.961238] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.340 [2024-10-11 12:05:35.961255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.340 [2024-10-11 12:05:35.961262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.340 [2024-10-11 12:05:35.969698] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.340 [2024-10-11 12:05:35.969719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.340 [2024-10-11 12:05:35.969725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.340 [2024-10-11 12:05:35.979191] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.340 [2024-10-11 12:05:35.979209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:4865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.340 [2024-10-11 12:05:35.979215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.340 [2024-10-11 12:05:35.987564] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.340 [2024-10-11 12:05:35.987581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:13594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.340 [2024-10-11 12:05:35.987588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.340 [2024-10-11 12:05:35.996863] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.340 [2024-10-11 12:05:35.996881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:13383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.340 [2024-10-11 12:05:35.996888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.340 [2024-10-11 12:05:36.005697] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.340 [2024-10-11 12:05:36.005715] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.340 [2024-10-11 12:05:36.005722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.340 [2024-10-11 12:05:36.014005] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.340 [2024-10-11 12:05:36.014023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:22850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.340 [2024-10-11 12:05:36.014029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.340 [2024-10-11 12:05:36.023645] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.340 [2024-10-11 12:05:36.023663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.340 [2024-10-11 12:05:36.023670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.340 [2024-10-11 12:05:36.032589] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.341 [2024-10-11 12:05:36.032607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.341 [2024-10-11 12:05:36.032613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.341 [2024-10-11 12:05:36.039726] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.341 [2024-10-11 12:05:36.039744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.341 [2024-10-11 12:05:36.039751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.657 [2024-10-11 12:05:36.049410] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.657 [2024-10-11 12:05:36.049428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:17707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.657 [2024-10-11 12:05:36.049434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.657 [2024-10-11 12:05:36.059824] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.657 [2024-10-11 12:05:36.059842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.657 [2024-10-11 12:05:36.059849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.657 [2024-10-11 12:05:36.068654] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.657 
[2024-10-11 12:05:36.068671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.657 [2024-10-11 12:05:36.068678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.657 [2024-10-11 12:05:36.076967] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.657 [2024-10-11 12:05:36.076985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:15217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.657 [2024-10-11 12:05:36.076991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.657 [2024-10-11 12:05:36.086188] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.657 [2024-10-11 12:05:36.086205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.657 [2024-10-11 12:05:36.086211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.657 [2024-10-11 12:05:36.094652] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.657 [2024-10-11 12:05:36.094670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:10737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.657 [2024-10-11 12:05:36.094677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.657 [2024-10-11 12:05:36.103729] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.657 [2024-10-11 12:05:36.103747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:10125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.657 [2024-10-11 12:05:36.103753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.657 [2024-10-11 12:05:36.112986] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.657 [2024-10-11 12:05:36.113004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.657 [2024-10-11 12:05:36.113015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.657 [2024-10-11 12:05:36.121837] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.657 [2024-10-11 12:05:36.121855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:22115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.657 [2024-10-11 12:05:36.121865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.657 [2024-10-11 12:05:36.131546] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x14fc5f0) 00:28:33.657 [2024-10-11 12:05:36.131564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:25370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.657 [2024-10-11 12:05:36.131570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.657 [2024-10-11 12:05:36.140088] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.657 [2024-10-11 12:05:36.140106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:20199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.657 [2024-10-11 12:05:36.140112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.657 [2024-10-11 12:05:36.148813] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.657 [2024-10-11 12:05:36.148831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:20983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.657 [2024-10-11 12:05:36.148838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.657 [2024-10-11 12:05:36.158261] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.657 [2024-10-11 12:05:36.158280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.657 [2024-10-11 12:05:36.158286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.657 [2024-10-11 12:05:36.167049] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.657 [2024-10-11 12:05:36.167071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.657 [2024-10-11 12:05:36.167077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.657 [2024-10-11 12:05:36.174865] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.657 [2024-10-11 12:05:36.174883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:25295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.657 [2024-10-11 12:05:36.174889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.657 [2024-10-11 12:05:36.184877] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.657 [2024-10-11 12:05:36.184895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:8638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.657 [2024-10-11 12:05:36.184901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.657 [2024-10-11 12:05:36.194218] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.657 [2024-10-11 12:05:36.194236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:12898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.657 [2024-10-11 12:05:36.194242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.657 [2024-10-11 12:05:36.203441] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.658 [2024-10-11 12:05:36.203462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.658 [2024-10-11 12:05:36.203469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.658 [2024-10-11 12:05:36.212435] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.658 [2024-10-11 12:05:36.212453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:17119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.658 [2024-10-11 12:05:36.212460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.658 [2024-10-11 12:05:36.221696] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.658 [2024-10-11 12:05:36.221714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.658 [2024-10-11 12:05:36.221720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.658 [2024-10-11 12:05:36.230185] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.658 [2024-10-11 12:05:36.230202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:19802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.658 [2024-10-11 12:05:36.230209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.658 [2024-10-11 12:05:36.239813] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.658 [2024-10-11 12:05:36.239831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.658 [2024-10-11 12:05:36.239838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.658 [2024-10-11 12:05:36.248454] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.658 [2024-10-11 12:05:36.248472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:4190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.658 [2024-10-11 12:05:36.248479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:28:33.658 [2024-10-11 12:05:36.256840] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.658 [2024-10-11 12:05:36.256858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:25172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.658 [2024-10-11 12:05:36.256865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.658 [2024-10-11 12:05:36.266009] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.658 [2024-10-11 12:05:36.266028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:2021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.658 [2024-10-11 12:05:36.266034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.658 [2024-10-11 12:05:36.277020] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.658 [2024-10-11 12:05:36.277036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.658 [2024-10-11 12:05:36.277043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.658 [2024-10-11 12:05:36.284990] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.658 [2024-10-11 12:05:36.285008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:21572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.658 [2024-10-11 12:05:36.285015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.658 [2024-10-11 12:05:36.294157] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.658 [2024-10-11 12:05:36.294175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:7452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.658 [2024-10-11 12:05:36.294181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.658 [2024-10-11 12:05:36.304047] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.658 [2024-10-11 12:05:36.304070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:16406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.658 [2024-10-11 12:05:36.304077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.658 [2024-10-11 12:05:36.312215] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.658 [2024-10-11 12:05:36.312232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.658 [2024-10-11 12:05:36.312239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.658 [2024-10-11 12:05:36.321055] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.658 [2024-10-11 12:05:36.321079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.658 [2024-10-11 12:05:36.321085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.658 [2024-10-11 12:05:36.331754] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.658 [2024-10-11 12:05:36.331772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:18475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.658 [2024-10-11 12:05:36.331778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.658 [2024-10-11 12:05:36.340023] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.658 [2024-10-11 12:05:36.340040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.658 [2024-10-11 12:05:36.340047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.658 [2024-10-11 12:05:36.348339] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.658 [2024-10-11 12:05:36.348357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.658 [2024-10-11 12:05:36.348364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.658 [2024-10-11 12:05:36.357770] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.658 [2024-10-11 12:05:36.357789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:19854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.658 [2024-10-11 12:05:36.357798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.918 [2024-10-11 12:05:36.366238] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.918 [2024-10-11 12:05:36.366256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:2420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.918 [2024-10-11 12:05:36.366263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.918 [2024-10-11 12:05:36.375716] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.918 [2024-10-11 12:05:36.375734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:6928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.918 [2024-10-11 12:05:36.375741] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.918 [2024-10-11 12:05:36.385717] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.918 [2024-10-11 12:05:36.385734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.918 [2024-10-11 12:05:36.385741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.918 [2024-10-11 12:05:36.394385] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.918 [2024-10-11 12:05:36.394403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.918 [2024-10-11 12:05:36.394409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.918 [2024-10-11 12:05:36.402636] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.918 [2024-10-11 12:05:36.402654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.918 [2024-10-11 12:05:36.402661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.918 [2024-10-11 12:05:36.412377] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.918 [2024-10-11 12:05:36.412395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:4458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.918 [2024-10-11 12:05:36.412401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.918 [2024-10-11 12:05:36.420581] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.918 [2024-10-11 12:05:36.420599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:9854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.918 [2024-10-11 12:05:36.420606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.918 [2024-10-11 12:05:36.429395] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.918 [2024-10-11 12:05:36.429413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.918 [2024-10-11 12:05:36.429420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.918 [2024-10-11 12:05:36.439524] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.918 [2024-10-11 12:05:36.439542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:6259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:33.918 [2024-10-11 12:05:36.439549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.918 [2024-10-11 12:05:36.448577] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.918 [2024-10-11 12:05:36.448595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.918 [2024-10-11 12:05:36.448601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.918 [2024-10-11 12:05:36.457270] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.918 [2024-10-11 12:05:36.457287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:25059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.918 [2024-10-11 12:05:36.457293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.918 [2024-10-11 12:05:36.465025] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.918 [2024-10-11 12:05:36.465043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:21426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.918 [2024-10-11 12:05:36.465049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.918 [2024-10-11 12:05:36.474361] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.918 [2024-10-11 12:05:36.474379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.918 [2024-10-11 12:05:36.474386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.918 [2024-10-11 12:05:36.483851] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.918 [2024-10-11 12:05:36.483870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:19938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.918 [2024-10-11 12:05:36.483878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.918 [2024-10-11 12:05:36.493136] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.918 [2024-10-11 12:05:36.493154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.918 [2024-10-11 12:05:36.493160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.918 [2024-10-11 12:05:36.503303] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.918 [2024-10-11 12:05:36.503321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 
lba:23755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.918 [2024-10-11 12:05:36.503328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.918 [2024-10-11 12:05:36.511460] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.918 [2024-10-11 12:05:36.511478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.918 [2024-10-11 12:05:36.511488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.918 [2024-10-11 12:05:36.520450] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.918 [2024-10-11 12:05:36.520468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:10672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.918 [2024-10-11 12:05:36.520474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.918 [2024-10-11 12:05:36.529372] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.918 [2024-10-11 12:05:36.529390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:3024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.918 [2024-10-11 12:05:36.529396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.918 [2024-10-11 12:05:36.537809] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.918 [2024-10-11 12:05:36.537827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:25166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.918 [2024-10-11 12:05:36.537833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.918 [2024-10-11 12:05:36.548846] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.918 [2024-10-11 12:05:36.548864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:8379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.918 [2024-10-11 12:05:36.548870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.918 [2024-10-11 12:05:36.558024] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.918 [2024-10-11 12:05:36.558042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:13518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.918 [2024-10-11 12:05:36.558048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.918 [2024-10-11 12:05:36.565614] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.918 [2024-10-11 12:05:36.565632] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:13730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.918 [2024-10-11 12:05:36.565638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.918 [2024-10-11 12:05:36.575940] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.918 [2024-10-11 12:05:36.575957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:5812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.918 [2024-10-11 12:05:36.575964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.919 [2024-10-11 12:05:36.585691] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.919 [2024-10-11 12:05:36.585709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:17118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.919 [2024-10-11 12:05:36.585715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.919 [2024-10-11 12:05:36.594542] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.919 [2024-10-11 12:05:36.594563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:5837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.919 [2024-10-11 12:05:36.594569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.919 [2024-10-11 12:05:36.605104] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.919 [2024-10-11 12:05:36.605122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.919 [2024-10-11 12:05:36.605129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.919 [2024-10-11 12:05:36.613093] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:33.919 [2024-10-11 12:05:36.613112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:14883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.919 [2024-10-11 12:05:36.613118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.180 [2024-10-11 12:05:36.623047] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.180 [2024-10-11 12:05:36.623069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:2596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.180 [2024-10-11 12:05:36.623076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.180 [2024-10-11 12:05:36.633069] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 
00:28:34.180 [2024-10-11 12:05:36.633087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.180 [2024-10-11 12:05:36.633093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.180 [2024-10-11 12:05:36.641107] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.180 [2024-10-11 12:05:36.641125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.180 [2024-10-11 12:05:36.641132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.180 [2024-10-11 12:05:36.649965] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.180 [2024-10-11 12:05:36.649983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:18547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.180 [2024-10-11 12:05:36.649989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.180 [2024-10-11 12:05:36.658998] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.180 [2024-10-11 12:05:36.659015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:14849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.180 [2024-10-11 12:05:36.659022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.180 [2024-10-11 12:05:36.668456] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.180 [2024-10-11 12:05:36.668474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:2839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.180 [2024-10-11 12:05:36.668481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.180 [2024-10-11 12:05:36.676028] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.180 [2024-10-11 12:05:36.676045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:7704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.180 [2024-10-11 12:05:36.676052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.180 [2024-10-11 12:05:36.685485] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.180 [2024-10-11 12:05:36.685503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.180 [2024-10-11 12:05:36.685510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.180 [2024-10-11 12:05:36.694414] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.180 [2024-10-11 12:05:36.694432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.180 [2024-10-11 12:05:36.694438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.180 28089.00 IOPS, 109.72 MiB/s [2024-10-11T10:05:36.883Z] [2024-10-11 12:05:36.703578] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.180 [2024-10-11 12:05:36.703595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:11252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.180 [2024-10-11 12:05:36.703602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.180 [2024-10-11 12:05:36.712634] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.180 [2024-10-11 12:05:36.712652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:4157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.180 [2024-10-11 12:05:36.712658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.180 [2024-10-11 12:05:36.720916] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.180 [2024-10-11 12:05:36.720933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.180 [2024-10-11 12:05:36.720940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.180 [2024-10-11 12:05:36.729799] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.181 [2024-10-11 12:05:36.729817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:16187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.181 [2024-10-11 12:05:36.729823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.181 [2024-10-11 12:05:36.738619] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.181 [2024-10-11 12:05:36.738636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:19660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.181 [2024-10-11 12:05:36.738643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.181 [2024-10-11 12:05:36.747823] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.181 [2024-10-11 12:05:36.747844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:22954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.181 [2024-10-11 12:05:36.747851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.181 [2024-10-11 12:05:36.757679] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.181 [2024-10-11 12:05:36.757697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:8296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.181 [2024-10-11 12:05:36.757704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.181 [2024-10-11 12:05:36.766587] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.181 [2024-10-11 12:05:36.766605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.181 [2024-10-11 12:05:36.766612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.181 [2024-10-11 12:05:36.774615] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.181 [2024-10-11 12:05:36.774632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:16548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.181 [2024-10-11 12:05:36.774639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.181 [2024-10-11 12:05:36.783815] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.181 [2024-10-11 12:05:36.783833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:18493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.181 [2024-10-11 12:05:36.783840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.181 [2024-10-11 12:05:36.792429] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.181 [2024-10-11 12:05:36.792447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.181 [2024-10-11 12:05:36.792453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.181 [2024-10-11 12:05:36.800463] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.181 [2024-10-11 12:05:36.800481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.181 [2024-10-11 12:05:36.800488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.181 [2024-10-11 12:05:36.809740] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.181 [2024-10-11 12:05:36.809758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:17705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.181 [2024-10-11 12:05:36.809764] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.181 [2024-10-11 12:05:36.818113] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.181 [2024-10-11 12:05:36.818131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:14847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.181 [2024-10-11 12:05:36.818138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.181 [2024-10-11 12:05:36.827743] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.181 [2024-10-11 12:05:36.827761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:4049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.181 [2024-10-11 12:05:36.827767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.181 [2024-10-11 12:05:36.836611] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.181 [2024-10-11 12:05:36.836629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.181 [2024-10-11 12:05:36.836635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.181 [2024-10-11 12:05:36.846632] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.181 [2024-10-11 12:05:36.846650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.181 [2024-10-11 12:05:36.846656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.181 [2024-10-11 12:05:36.854541] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.181 [2024-10-11 12:05:36.854558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:23342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.181 [2024-10-11 12:05:36.854564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.181 [2024-10-11 12:05:36.864423] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.181 [2024-10-11 12:05:36.864440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:16022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.181 [2024-10-11 12:05:36.864446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.181 [2024-10-11 12:05:36.873526] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.181 [2024-10-11 12:05:36.873543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:14444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:34.181 [2024-10-11 12:05:36.873550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.181 [2024-10-11 12:05:36.882591] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.181 [2024-10-11 12:05:36.882607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:24678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.181 [2024-10-11 12:05:36.882614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.443 [2024-10-11 12:05:36.890619] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.443 [2024-10-11 12:05:36.890637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.443 [2024-10-11 12:05:36.890643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.443 [2024-10-11 12:05:36.899414] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.443 [2024-10-11 12:05:36.899431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:11874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.443 [2024-10-11 12:05:36.899441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.443 [2024-10-11 12:05:36.908877] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.443 [2024-10-11 12:05:36.908894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.443 [2024-10-11 12:05:36.908901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.443 [2024-10-11 12:05:36.917486] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.443 [2024-10-11 12:05:36.917504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.443 [2024-10-11 12:05:36.917511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.443 [2024-10-11 12:05:36.926631] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.443 [2024-10-11 12:05:36.926649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:12205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.443 [2024-10-11 12:05:36.926656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.443 [2024-10-11 12:05:36.935577] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.443 [2024-10-11 12:05:36.935594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1163 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.443 [2024-10-11 12:05:36.935600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.443 [2024-10-11 12:05:36.945159] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.443 [2024-10-11 12:05:36.945176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.443 [2024-10-11 12:05:36.945184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.443 [2024-10-11 12:05:36.953286] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.443 [2024-10-11 12:05:36.953305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.443 [2024-10-11 12:05:36.953311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.443 [2024-10-11 12:05:36.962429] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.443 [2024-10-11 12:05:36.962446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:14437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.443 [2024-10-11 12:05:36.962452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.443 [2024-10-11 12:05:36.971466] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.443 [2024-10-11 12:05:36.971484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.443 [2024-10-11 12:05:36.971490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.443 [2024-10-11 12:05:36.981258] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.443 [2024-10-11 12:05:36.981282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:9854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.443 [2024-10-11 12:05:36.981289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.443 [2024-10-11 12:05:36.989143] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.443 [2024-10-11 12:05:36.989160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.443 [2024-10-11 12:05:36.989167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.443 [2024-10-11 12:05:36.998471] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.443 [2024-10-11 12:05:36.998488] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:85 nsid:1 lba:3926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.443 [2024-10-11 12:05:36.998494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.443 [2024-10-11 12:05:37.007317] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.443 [2024-10-11 12:05:37.007335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:9155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.443 [2024-10-11 12:05:37.007341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.443 [2024-10-11 12:05:37.016558] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.443 [2024-10-11 12:05:37.016575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:4430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.443 [2024-10-11 12:05:37.016582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.443 [2024-10-11 12:05:37.025415] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.443 [2024-10-11 12:05:37.025433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:11128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.443 [2024-10-11 12:05:37.025439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.443 [2024-10-11 12:05:37.033619] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.443 [2024-10-11 12:05:37.033637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:20909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.443 [2024-10-11 12:05:37.033643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.443 [2024-10-11 12:05:37.043015] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.443 [2024-10-11 12:05:37.043033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.443 [2024-10-11 12:05:37.043039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.443 [2024-10-11 12:05:37.052305] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.443 [2024-10-11 12:05:37.052323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:9483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.443 [2024-10-11 12:05:37.052329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.443 [2024-10-11 12:05:37.061057] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.443 [2024-10-11 12:05:37.061078] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.443 [2024-10-11 12:05:37.061086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.443 [2024-10-11 12:05:37.070650] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.443 [2024-10-11 12:05:37.070667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:10949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.443 [2024-10-11 12:05:37.070673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.443 [2024-10-11 12:05:37.079168] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.443 [2024-10-11 12:05:37.079186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:19642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.443 [2024-10-11 12:05:37.079192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.443 [2024-10-11 12:05:37.089993] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.443 [2024-10-11 12:05:37.090011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.443 [2024-10-11 12:05:37.090018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.443 [2024-10-11 12:05:37.097615] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.443 [2024-10-11 12:05:37.097632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:15044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.443 [2024-10-11 12:05:37.097638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.443 [2024-10-11 12:05:37.108720] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.443 [2024-10-11 12:05:37.108738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:7721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.443 [2024-10-11 12:05:37.108744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.443 [2024-10-11 12:05:37.118605] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.443 [2024-10-11 12:05:37.118622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:22501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.443 [2024-10-11 12:05:37.118629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.444 [2024-10-11 12:05:37.127668] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x14fc5f0) 00:28:34.444 [2024-10-11 12:05:37.127685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:9547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.444 [2024-10-11 12:05:37.127691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.444 [2024-10-11 12:05:37.136230] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.444 [2024-10-11 12:05:37.136248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.444 [2024-10-11 12:05:37.136258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.444 [2024-10-11 12:05:37.145532] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.444 [2024-10-11 12:05:37.145550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.444 [2024-10-11 12:05:37.145556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.706 [2024-10-11 12:05:37.155158] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.706 [2024-10-11 12:05:37.155177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:24301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.706 [2024-10-11 12:05:37.155183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.706 [2024-10-11 12:05:37.163106] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.706 [2024-10-11 12:05:37.163123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.706 [2024-10-11 12:05:37.163129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.706 [2024-10-11 12:05:37.173183] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.706 [2024-10-11 12:05:37.173200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:7139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.706 [2024-10-11 12:05:37.173207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.706 [2024-10-11 12:05:37.183961] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.706 [2024-10-11 12:05:37.183978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:6901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.706 [2024-10-11 12:05:37.183984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.706 [2024-10-11 12:05:37.194374] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.706 [2024-10-11 12:05:37.194391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.706 [2024-10-11 12:05:37.194398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.706 [2024-10-11 12:05:37.202605] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.706 [2024-10-11 12:05:37.202622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:11033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.706 [2024-10-11 12:05:37.202628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.706 [2024-10-11 12:05:37.212110] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.706 [2024-10-11 12:05:37.212128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:2485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.706 [2024-10-11 12:05:37.212134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.706 [2024-10-11 12:05:37.220992] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.706 [2024-10-11 12:05:37.221010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:19043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.706 [2024-10-11 12:05:37.221016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.706 [2024-10-11 12:05:37.229152] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.706 [2024-10-11 12:05:37.229170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:2261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.706 [2024-10-11 12:05:37.229176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.706 [2024-10-11 12:05:37.238238] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.706 [2024-10-11 12:05:37.238256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:4795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.706 [2024-10-11 12:05:37.238262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.706 [2024-10-11 12:05:37.247878] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.706 [2024-10-11 12:05:37.247895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:7695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.706 [2024-10-11 12:05:37.247902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:28:34.706 [2024-10-11 12:05:37.256601] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.706 [2024-10-11 12:05:37.256618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:13356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.706 [2024-10-11 12:05:37.256625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.706 [2024-10-11 12:05:37.264660] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.706 [2024-10-11 12:05:37.264677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:25368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.706 [2024-10-11 12:05:37.264684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.706 [2024-10-11 12:05:37.273518] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.706 [2024-10-11 12:05:37.273535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:11601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.706 [2024-10-11 12:05:37.273542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.706 [2024-10-11 12:05:37.282037] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.706 [2024-10-11 12:05:37.282055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.706 [2024-10-11 12:05:37.282066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.707 [2024-10-11 12:05:37.291668] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.707 [2024-10-11 12:05:37.291684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:4811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.707 [2024-10-11 12:05:37.291694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.707 [2024-10-11 12:05:37.300287] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.707 [2024-10-11 12:05:37.300305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:4082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.707 [2024-10-11 12:05:37.300311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.707 [2024-10-11 12:05:37.309240] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.707 [2024-10-11 12:05:37.309257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:16812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.707 [2024-10-11 12:05:37.309264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.707 [2024-10-11 12:05:37.317738] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.707 [2024-10-11 12:05:37.317755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.707 [2024-10-11 12:05:37.317762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.707 [2024-10-11 12:05:37.327447] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.707 [2024-10-11 12:05:37.327465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:6379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.707 [2024-10-11 12:05:37.327472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.707 [2024-10-11 12:05:37.335259] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.707 [2024-10-11 12:05:37.335277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:12620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.707 [2024-10-11 12:05:37.335283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.707 [2024-10-11 12:05:37.345039] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.707 [2024-10-11 12:05:37.345056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:15824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.707 [2024-10-11 12:05:37.345067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.707 [2024-10-11 12:05:37.354008] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.707 [2024-10-11 12:05:37.354025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:4082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.707 [2024-10-11 12:05:37.354031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.707 [2024-10-11 12:05:37.363605] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.707 [2024-10-11 12:05:37.363623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:7016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.707 [2024-10-11 12:05:37.363629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.707 [2024-10-11 12:05:37.371664] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.707 [2024-10-11 12:05:37.371684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:15550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.707 [2024-10-11 12:05:37.371690] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.707 [2024-10-11 12:05:37.381581] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.707 [2024-10-11 12:05:37.381598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:20721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.707 [2024-10-11 12:05:37.381604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.707 [2024-10-11 12:05:37.390187] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.707 [2024-10-11 12:05:37.390205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.707 [2024-10-11 12:05:37.390211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.707 [2024-10-11 12:05:37.398795] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.707 [2024-10-11 12:05:37.398812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.707 [2024-10-11 12:05:37.398818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.707 [2024-10-11 12:05:37.407396] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.707 [2024-10-11 12:05:37.407413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.707 [2024-10-11 12:05:37.407420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.968 [2024-10-11 12:05:37.419555] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.968 [2024-10-11 12:05:37.419574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:5491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.968 [2024-10-11 12:05:37.419580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.968 [2024-10-11 12:05:37.429434] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.968 [2024-10-11 12:05:37.429452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:13721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.968 [2024-10-11 12:05:37.429458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.968 [2024-10-11 12:05:37.437841] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.968 [2024-10-11 12:05:37.437858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:24535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:34.968 [2024-10-11 12:05:37.437865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.968 [2024-10-11 12:05:37.446838] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.968 [2024-10-11 12:05:37.446855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:10933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.968 [2024-10-11 12:05:37.446862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.968 [2024-10-11 12:05:37.455727] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.968 [2024-10-11 12:05:37.455745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:1767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.968 [2024-10-11 12:05:37.455751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.968 [2024-10-11 12:05:37.464050] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.968 [2024-10-11 12:05:37.464072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.968 [2024-10-11 12:05:37.464078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.968 [2024-10-11 12:05:37.473356] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.968 [2024-10-11 12:05:37.473375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:9328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.968 [2024-10-11 12:05:37.473382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.968 [2024-10-11 12:05:37.483120] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.968 [2024-10-11 12:05:37.483138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:17625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.968 [2024-10-11 12:05:37.483144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.968 [2024-10-11 12:05:37.491942] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.968 [2024-10-11 12:05:37.491960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:15379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.968 [2024-10-11 12:05:37.491967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.968 [2024-10-11 12:05:37.501465] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.968 [2024-10-11 12:05:37.501482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 
lba:11840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.968 [2024-10-11 12:05:37.501489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.968 [2024-10-11 12:05:37.510201] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.968 [2024-10-11 12:05:37.510219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:2740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.968 [2024-10-11 12:05:37.510226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.969 [2024-10-11 12:05:37.519841] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.969 [2024-10-11 12:05:37.519858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:9300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.969 [2024-10-11 12:05:37.519865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.969 [2024-10-11 12:05:37.527692] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.969 [2024-10-11 12:05:37.527709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:7401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.969 [2024-10-11 12:05:37.527718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.969 [2024-10-11 12:05:37.536576] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.969 [2024-10-11 12:05:37.536593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:9983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.969 [2024-10-11 12:05:37.536599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.969 [2024-10-11 12:05:37.545675] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.969 [2024-10-11 12:05:37.545692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:12399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.969 [2024-10-11 12:05:37.545698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.969 [2024-10-11 12:05:37.554542] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.969 [2024-10-11 12:05:37.554560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:11050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.969 [2024-10-11 12:05:37.554566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.969 [2024-10-11 12:05:37.562204] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.969 [2024-10-11 12:05:37.562221] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:5263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.969 [2024-10-11 12:05:37.562228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.969 [2024-10-11 12:05:37.572198] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.969 [2024-10-11 12:05:37.572216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:2723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.969 [2024-10-11 12:05:37.572222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.969 [2024-10-11 12:05:37.580685] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.969 [2024-10-11 12:05:37.580702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.969 [2024-10-11 12:05:37.580709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.969 [2024-10-11 12:05:37.589172] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.969 [2024-10-11 12:05:37.589190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:18197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.969 [2024-10-11 12:05:37.589197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.969 [2024-10-11 12:05:37.599264] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.969 [2024-10-11 12:05:37.599281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:18262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.969 [2024-10-11 12:05:37.599288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.969 [2024-10-11 12:05:37.608232] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.969 [2024-10-11 12:05:37.608252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:8183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.969 [2024-10-11 12:05:37.608259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.969 [2024-10-11 12:05:37.615780] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.969 [2024-10-11 12:05:37.615796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:18020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.969 [2024-10-11 12:05:37.615803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.969 [2024-10-11 12:05:37.626142] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 
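The repeated nvme_tcp_accel_seq_recv_compute_crc32_done "data digest error" entries above are the host side of the NVMe/TCP data digest check: with data digest enabled, the initiator recomputes a CRC-32C over each received data payload and compares it with the digest carried on the wire, and every mismatch logged here is followed by a completion with COMMAND TRANSIENT TRANSPORT ERROR (00/22). These mismatches are expected in this test; the script counts them further down via get_transient_errcount. The block below is a minimal, self-contained Python sketch of that digest comparison, assuming a simplified payload-plus-digest layout; it is illustrative only and not SPDK's implementation (SPDK performs the CRC-32C in C, optionally through its accel framework, as the function name in the log suggests).

```python
# Illustrative sketch of an NVMe/TCP-style data digest (CRC-32C) check.
# Not SPDK code: the function names and the simplified "payload + 4-byte
# digest" layout are assumptions made for illustration only.
import struct

def _crc32c_table():
    poly = 0x82F63B78          # reflected CRC-32C (Castagnoli) polynomial
    table = []
    for i in range(256):
        crc = i
        for _ in range(8):
            crc = (crc >> 1) ^ poly if crc & 1 else crc >> 1
        table.append(crc)
    return table

_TABLE = _crc32c_table()

def crc32c(data: bytes) -> int:
    """Compute CRC-32C over data (table-driven, reflected)."""
    crc = 0xFFFFFFFF
    for b in data:
        crc = _TABLE[(crc ^ b) & 0xFF] ^ (crc >> 8)
    return crc ^ 0xFFFFFFFF

def check_data_digest(payload: bytes, received_digest: int) -> bool:
    """Return True if the digest received with the data matches the payload."""
    return crc32c(payload) == received_digest

# Known-answer test for CRC-32C, then a deliberate mismatch of the kind
# this test run provokes ("data digest error").
assert crc32c(b"123456789") == 0xE3069283
good = struct.unpack("<I", struct.pack("<I", crc32c(b"some 4KiB block")))[0]
assert check_data_digest(b"some 4KiB block", good)
assert not check_data_digest(b"some 4KiB block", 0xDEADBEEF)
```

When the comparison fails, the command is not silently dropped; as the completions above show, it is finished with a transient transport error status that the iostat counters pick up.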
00:28:34.969 [2024-10-11 12:05:37.626159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.969 [2024-10-11 12:05:37.626166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.969 [2024-10-11 12:05:37.636169] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.969 [2024-10-11 12:05:37.636186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:24769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.969 [2024-10-11 12:05:37.636193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.969 [2024-10-11 12:05:37.643222] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.969 [2024-10-11 12:05:37.643240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:12925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.969 [2024-10-11 12:05:37.643246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.969 [2024-10-11 12:05:37.653372] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.969 [2024-10-11 12:05:37.653390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:3366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.969 [2024-10-11 12:05:37.653396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.969 [2024-10-11 12:05:37.664400] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:34.969 [2024-10-11 12:05:37.664418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:19317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.969 [2024-10-11 12:05:37.664424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.231 [2024-10-11 12:05:37.673303] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:35.231 [2024-10-11 12:05:37.673321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.231 [2024-10-11 12:05:37.673328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.231 [2024-10-11 12:05:37.682361] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0) 00:28:35.231 [2024-10-11 12:05:37.682379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:24677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.231 [2024-10-11 12:05:37.682388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.231 [2024-10-11 12:05:37.691481] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0)
00:28:35.231 [2024-10-11 12:05:37.691498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:3182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.231 [2024-10-11 12:05:37.691504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:35.231 28151.50 IOPS, 109.97 MiB/s [2024-10-11T10:05:37.934Z] [2024-10-11 12:05:37.700891] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fc5f0)
00:28:35.231 [2024-10-11 12:05:37.700908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:8137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.231 [2024-10-11 12:05:37.700914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:35.231
00:28:35.231 Latency(us)
00:28:35.231 [2024-10-11T10:05:37.934Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:35.231 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:28:35.231 nvme0n1 : 2.00 28164.78 110.02 0.00 0.00 4539.27 2184.53 13707.95
00:28:35.231 [2024-10-11T10:05:37.934Z] ===================================================================================================================
00:28:35.231 [2024-10-11T10:05:37.934Z] Total : 28164.78 110.02 0.00 0.00 4539.27 2184.53 13707.95
00:28:35.231 {
00:28:35.231 "results": [
00:28:35.231 {
00:28:35.231 "job": "nvme0n1",
00:28:35.231 "core_mask": "0x2",
00:28:35.231 "workload": "randread",
00:28:35.231 "status": "finished",
00:28:35.231 "queue_depth": 128,
00:28:35.231 "io_size": 4096,
00:28:35.231 "runtime": 2.003602,
00:28:35.231 "iops": 28164.775239793133,
00:28:35.231 "mibps": 110.01865328044192,
00:28:35.231 "io_failed": 0,
00:28:35.231 "io_timeout": 0,
00:28:35.231 "avg_latency_us": 4539.274480102544,
00:28:35.231 "min_latency_us": 2184.5333333333333,
00:28:35.231 "max_latency_us": 13707.946666666667
00:28:35.231 }
00:28:35.231 ],
00:28:35.231 "core_count": 1
00:28:35.231 }
00:28:35.231 12:05:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:35.231 12:05:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:35.231 | .driver_specific
00:28:35.231 | .nvme_error
00:28:35.231 | .status_code
00:28:35.231 | .command_transient_transport_error'
00:28:35.231 12:05:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:35.231 12:05:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:35.231 12:05:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 221 > 0 ))
00:28:35.231 12:05:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2100878
00:28:35.231 12:05:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2100878 ']'
00:28:35.231 12:05:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2100878
00:28:35.231 12:05:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error --
common/autotest_common.sh@955 -- # uname 00:28:35.231 12:05:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:35.231 12:05:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2100878 00:28:35.492 12:05:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:35.492 12:05:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:35.492 12:05:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2100878' 00:28:35.492 killing process with pid 2100878 00:28:35.492 12:05:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2100878 00:28:35.492 Received shutdown signal, test time was about 2.000000 seconds 00:28:35.492 00:28:35.492 Latency(us) 00:28:35.492 [2024-10-11T10:05:38.195Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:35.492 [2024-10-11T10:05:38.195Z] =================================================================================================================== 00:28:35.492 [2024-10-11T10:05:38.195Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:35.492 12:05:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2100878 00:28:35.492 12:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:28:35.492 12:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:35.492 12:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:35.492 12:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:35.492 12:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:35.492 12:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2101666 00:28:35.492 12:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2101666 /var/tmp/bperf.sock 00:28:35.492 12:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2101666 ']' 00:28:35.492 12:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:28:35.492 12:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:35.492 12:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:35.492 12:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:35.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
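Before the 131072-byte pass starts, note what the 4096-byte pass above just verified: get_transient_errcount reads the per-bdev NVMe error counters that bdev_get_iostat exposes (kept because bdev_nvme_set_options is run with --nvme-error-stat for each pass), keeps only the command_transient_transport_error count with the jq expression shown in the trace, and the pass succeeds only if that count is non-zero, here (( 221 > 0 )). A standalone sketch of that check, assuming the same rpc.py path, bperf.sock RPC socket, and nvme0n1 bdev name seen in the trace (not the digest.sh source itself):

# Sketch of the transient-error-count assertion traced above. Assumes bdevperf
# is serving RPCs on /var/tmp/bperf.sock and the attached bdev is nvme0n1,
# as in this run.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
  jq -r '.bdevs[0]
         | .driver_specific
         | .nvme_error
         | .status_code
         | .command_transient_transport_error')
# The pass fails unless at least one injected digest error was counted
# (this run reported 221).
(( errcount > 0 ))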
00:28:35.493 12:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:28:35.493 12:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:35.493 [2024-10-11 12:05:38.138563] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization...
00:28:35.493 [2024-10-11 12:05:38.138619] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2101666 ]
00:28:35.493 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:35.493 Zero copy mechanism will not be used.
00:28:35.753 [2024-10-11 12:05:38.214662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:35.753 [2024-10-11 12:05:38.244142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:36.325 12:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:28:36.325 12:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:28:36.325 12:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:36.325 12:05:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:36.586 12:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:36.586 12:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:36.586 12:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:36.586 12:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:36.586 12:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:36.586 12:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:36.847 nvme0n1
00:28:36.847 12:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:28:36.847 12:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:36.847 12:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:36.847 12:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:36.847 12:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:36.847 12:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
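The trace above is the entire setup for the 131072-byte error pass: bdev_nvme_set_options keeps per-status NVMe error counters (--nvme-error-stat) and retries failed commands indefinitely (--bdev-retry-count -1), CRC32C error injection is first cleared, the target is attached over TCP with --ddgst so data digests are generated and verified, injection is then re-armed with an interval of 32 operations, and perform_tests starts the timed run. A condensed sketch of the same sequence, using the paths and addresses visible in the trace; the accel_error_inject_error calls go through rpc_cmd, which is assumed here to talk to the SPDK application's default RPC socket rather than to bperf.sock:

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
bperf_sock=/var/tmp/bperf.sock

# Keep NVMe error statistics and retry failed commands forever, so injected
# digest errors are retried and counted instead of failing the bdevperf job.
"$spdk/scripts/rpc.py" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Start with CRC32C error injection disabled (default RPC socket; see note above).
"$spdk/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable

# Attach the TCP target with data digest enabled so reads are CRC32C-checked.
"$spdk/scripts/rpc.py" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt CRC32C results at an interval of 32 operations, producing the data
# digest errors and TRANSIENT TRANSPORT ERROR completions that follow.
"$spdk/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32

# Kick off the timed I/O pass over the bdevperf RPC socket.
"$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$bperf_sock" perform_tests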
00:28:36.847 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:36.847 Zero copy mechanism will not be used. 00:28:36.847 Running I/O for 2 seconds... 00:28:36.847 [2024-10-11 12:05:39.527822] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:36.847 [2024-10-11 12:05:39.527859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.847 [2024-10-11 12:05:39.527868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:36.847 [2024-10-11 12:05:39.537001] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:36.847 [2024-10-11 12:05:39.537024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.847 [2024-10-11 12:05:39.537031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:36.847 [2024-10-11 12:05:39.548755] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:36.847 [2024-10-11 12:05:39.548777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.847 [2024-10-11 12:05:39.548784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.109 [2024-10-11 12:05:39.556481] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.109 [2024-10-11 12:05:39.556502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.109 [2024-10-11 12:05:39.556509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.109 [2024-10-11 12:05:39.567783] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.109 [2024-10-11 12:05:39.567803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.109 [2024-10-11 12:05:39.567810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.109 [2024-10-11 12:05:39.579687] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.109 [2024-10-11 12:05:39.579712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.109 [2024-10-11 12:05:39.579718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.109 [2024-10-11 12:05:39.590098] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.109 [2024-10-11 12:05:39.590117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.109 [2024-10-11 12:05:39.590124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.109 [2024-10-11 12:05:39.598930] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.109 [2024-10-11 12:05:39.598949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.109 [2024-10-11 12:05:39.598956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.109 [2024-10-11 12:05:39.610165] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.109 [2024-10-11 12:05:39.610185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.109 [2024-10-11 12:05:39.610192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.110 [2024-10-11 12:05:39.619434] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.110 [2024-10-11 12:05:39.619453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.110 [2024-10-11 12:05:39.619459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.110 [2024-10-11 12:05:39.631152] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.110 [2024-10-11 12:05:39.631170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.110 [2024-10-11 12:05:39.631177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.110 [2024-10-11 12:05:39.640255] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.110 [2024-10-11 12:05:39.640277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.110 [2024-10-11 12:05:39.640284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.110 [2024-10-11 12:05:39.652071] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.110 [2024-10-11 12:05:39.652091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.110 [2024-10-11 12:05:39.652098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.110 [2024-10-11 12:05:39.660070] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.110 [2024-10-11 12:05:39.660089] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.110 [2024-10-11 12:05:39.660096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.110 [2024-10-11 12:05:39.672099] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.110 [2024-10-11 12:05:39.672120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.110 [2024-10-11 12:05:39.672126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.110 [2024-10-11 12:05:39.683937] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.110 [2024-10-11 12:05:39.683957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.110 [2024-10-11 12:05:39.683963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.110 [2024-10-11 12:05:39.696440] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.110 [2024-10-11 12:05:39.696459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.110 [2024-10-11 12:05:39.696466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.110 [2024-10-11 12:05:39.708153] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.110 [2024-10-11 12:05:39.708172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.110 [2024-10-11 12:05:39.708179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.110 [2024-10-11 12:05:39.719483] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.110 [2024-10-11 12:05:39.719503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.110 [2024-10-11 12:05:39.719510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.110 [2024-10-11 12:05:39.731793] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.110 [2024-10-11 12:05:39.731812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.110 [2024-10-11 12:05:39.731818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.110 [2024-10-11 12:05:39.743974] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 
00:28:37.110 [2024-10-11 12:05:39.743994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.110 [2024-10-11 12:05:39.744000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.110 [2024-10-11 12:05:39.756768] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.110 [2024-10-11 12:05:39.756789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.110 [2024-10-11 12:05:39.756795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.110 [2024-10-11 12:05:39.768631] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.110 [2024-10-11 12:05:39.768651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.110 [2024-10-11 12:05:39.768661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.110 [2024-10-11 12:05:39.781229] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.110 [2024-10-11 12:05:39.781249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.110 [2024-10-11 12:05:39.781255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.110 [2024-10-11 12:05:39.793219] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.110 [2024-10-11 12:05:39.793239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.110 [2024-10-11 12:05:39.793245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.110 [2024-10-11 12:05:39.804894] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.110 [2024-10-11 12:05:39.804915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.110 [2024-10-11 12:05:39.804921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.373 [2024-10-11 12:05:39.816436] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.373 [2024-10-11 12:05:39.816457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.373 [2024-10-11 12:05:39.816463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.373 [2024-10-11 12:05:39.828289] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.373 [2024-10-11 12:05:39.828309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.373 [2024-10-11 12:05:39.828316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.373 [2024-10-11 12:05:39.838499] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.373 [2024-10-11 12:05:39.838518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.373 [2024-10-11 12:05:39.838525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.373 [2024-10-11 12:05:39.849577] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.373 [2024-10-11 12:05:39.849596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.373 [2024-10-11 12:05:39.849603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.373 [2024-10-11 12:05:39.860578] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.373 [2024-10-11 12:05:39.860598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.373 [2024-10-11 12:05:39.860604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.373 [2024-10-11 12:05:39.869342] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.374 [2024-10-11 12:05:39.869361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.374 [2024-10-11 12:05:39.869367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.374 [2024-10-11 12:05:39.879180] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.374 [2024-10-11 12:05:39.879200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.374 [2024-10-11 12:05:39.879206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.374 [2024-10-11 12:05:39.889575] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.374 [2024-10-11 12:05:39.889594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.374 [2024-10-11 12:05:39.889601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.374 [2024-10-11 12:05:39.900171] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.374 [2024-10-11 12:05:39.900190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.374 [2024-10-11 12:05:39.900197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.374 [2024-10-11 12:05:39.908677] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.374 [2024-10-11 12:05:39.908697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.374 [2024-10-11 12:05:39.908703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.374 [2024-10-11 12:05:39.920613] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.374 [2024-10-11 12:05:39.920633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.374 [2024-10-11 12:05:39.920639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.374 [2024-10-11 12:05:39.930842] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.374 [2024-10-11 12:05:39.930862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.374 [2024-10-11 12:05:39.930868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.374 [2024-10-11 12:05:39.942468] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.374 [2024-10-11 12:05:39.942487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.374 [2024-10-11 12:05:39.942493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.374 [2024-10-11 12:05:39.952208] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.374 [2024-10-11 12:05:39.952227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.374 [2024-10-11 12:05:39.952240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.374 [2024-10-11 12:05:39.961363] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.374 [2024-10-11 12:05:39.961382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.374 [2024-10-11 12:05:39.961388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
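Every injected failure in this pass produces the same three-line group seen throughout the run: the data digest error reported by nvme_tcp.c, the offending READ (len:32 blocks for the 131072-byte I/O size), and its completion with generic status 0x22, printed as COMMAND TRANSIENT TRANSPORT ERROR (00/22). As a purely illustrative cross-check against a saved console log (the harness counts these through bdev_get_iostat, not by scanning the log; the file name below is hypothetical):

# Tally transient-transport-error completions in a captured console log.
grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bperf-console.log

# The same records grouped by the qpair pointer reported with each digest error.
grep -o 'data digest error on tqpair=([^)]*)' bperf-console.log | sort | uniq -c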
00:28:37.374 [2024-10-11 12:05:39.971387] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.374 [2024-10-11 12:05:39.971406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.374 [2024-10-11 12:05:39.971413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.374 [2024-10-11 12:05:39.982205] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.374 [2024-10-11 12:05:39.982224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.374 [2024-10-11 12:05:39.982231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.374 [2024-10-11 12:05:39.992807] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.374 [2024-10-11 12:05:39.992825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.374 [2024-10-11 12:05:39.992832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.374 [2024-10-11 12:05:40.004480] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.374 [2024-10-11 12:05:40.004500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.374 [2024-10-11 12:05:40.004507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.374 [2024-10-11 12:05:40.009152] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.374 [2024-10-11 12:05:40.009173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.374 [2024-10-11 12:05:40.009179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.374 [2024-10-11 12:05:40.013996] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.374 [2024-10-11 12:05:40.014015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.374 [2024-10-11 12:05:40.014022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.374 [2024-10-11 12:05:40.018478] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.374 [2024-10-11 12:05:40.018498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.374 [2024-10-11 12:05:40.018505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.374 [2024-10-11 12:05:40.023053] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.374 [2024-10-11 12:05:40.023079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.374 [2024-10-11 12:05:40.023086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.374 [2024-10-11 12:05:40.027869] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.374 [2024-10-11 12:05:40.027887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.374 [2024-10-11 12:05:40.027894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.374 [2024-10-11 12:05:40.032716] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.374 [2024-10-11 12:05:40.032735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.374 [2024-10-11 12:05:40.032741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.374 [2024-10-11 12:05:40.041812] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.374 [2024-10-11 12:05:40.041830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.374 [2024-10-11 12:05:40.041837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.374 [2024-10-11 12:05:40.052364] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.374 [2024-10-11 12:05:40.052385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.374 [2024-10-11 12:05:40.052391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.374 [2024-10-11 12:05:40.062053] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.374 [2024-10-11 12:05:40.062081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.374 [2024-10-11 12:05:40.062089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.374 [2024-10-11 12:05:40.072878] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.374 [2024-10-11 12:05:40.072899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.374 [2024-10-11 12:05:40.072906] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.638 [2024-10-11 12:05:40.084615] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.638 [2024-10-11 12:05:40.084637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.638 [2024-10-11 12:05:40.084644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.638 [2024-10-11 12:05:40.093598] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.638 [2024-10-11 12:05:40.093618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.638 [2024-10-11 12:05:40.093625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.638 [2024-10-11 12:05:40.102706] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.638 [2024-10-11 12:05:40.102726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.638 [2024-10-11 12:05:40.102733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.638 [2024-10-11 12:05:40.112285] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.638 [2024-10-11 12:05:40.112310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.638 [2024-10-11 12:05:40.112319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.638 [2024-10-11 12:05:40.122631] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.638 [2024-10-11 12:05:40.122650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.638 [2024-10-11 12:05:40.122657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.638 [2024-10-11 12:05:40.132550] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.638 [2024-10-11 12:05:40.132569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.638 [2024-10-11 12:05:40.132576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.638 [2024-10-11 12:05:40.138988] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.638 [2024-10-11 12:05:40.139007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.638 [2024-10-11 12:05:40.139014] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.638 [2024-10-11 12:05:40.144365] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.638 [2024-10-11 12:05:40.144384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.638 [2024-10-11 12:05:40.144391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.638 [2024-10-11 12:05:40.149870] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.638 [2024-10-11 12:05:40.149889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.638 [2024-10-11 12:05:40.149896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.638 [2024-10-11 12:05:40.155853] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.638 [2024-10-11 12:05:40.155873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.638 [2024-10-11 12:05:40.155879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.638 [2024-10-11 12:05:40.164958] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.638 [2024-10-11 12:05:40.164978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.638 [2024-10-11 12:05:40.164988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.638 [2024-10-11 12:05:40.169866] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.638 [2024-10-11 12:05:40.169885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.638 [2024-10-11 12:05:40.169892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.638 [2024-10-11 12:05:40.180775] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.638 [2024-10-11 12:05:40.180795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.638 [2024-10-11 12:05:40.180802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.638 [2024-10-11 12:05:40.189102] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.638 [2024-10-11 12:05:40.189122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:37.638 [2024-10-11 12:05:40.189128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.638 [2024-10-11 12:05:40.199057] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.638 [2024-10-11 12:05:40.199081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.638 [2024-10-11 12:05:40.199087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.638 [2024-10-11 12:05:40.210180] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.638 [2024-10-11 12:05:40.210199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.638 [2024-10-11 12:05:40.210206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.638 [2024-10-11 12:05:40.219985] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.638 [2024-10-11 12:05:40.220004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.638 [2024-10-11 12:05:40.220011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.638 [2024-10-11 12:05:40.230443] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.638 [2024-10-11 12:05:40.230463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.638 [2024-10-11 12:05:40.230469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.638 [2024-10-11 12:05:40.241256] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.638 [2024-10-11 12:05:40.241276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.638 [2024-10-11 12:05:40.241282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.638 [2024-10-11 12:05:40.248517] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.638 [2024-10-11 12:05:40.248540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.638 [2024-10-11 12:05:40.248547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.638 [2024-10-11 12:05:40.254136] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.638 [2024-10-11 12:05:40.254155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23616 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.638 [2024-10-11 12:05:40.254162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.638 [2024-10-11 12:05:40.265411] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.638 [2024-10-11 12:05:40.265431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.638 [2024-10-11 12:05:40.265437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.638 [2024-10-11 12:05:40.276128] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.638 [2024-10-11 12:05:40.276147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.638 [2024-10-11 12:05:40.276153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.638 [2024-10-11 12:05:40.286140] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.638 [2024-10-11 12:05:40.286159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.639 [2024-10-11 12:05:40.286165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.639 [2024-10-11 12:05:40.296943] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.639 [2024-10-11 12:05:40.296962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.639 [2024-10-11 12:05:40.296968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.639 [2024-10-11 12:05:40.307086] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.639 [2024-10-11 12:05:40.307106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.639 [2024-10-11 12:05:40.307112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.639 [2024-10-11 12:05:40.317636] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.639 [2024-10-11 12:05:40.317656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.639 [2024-10-11 12:05:40.317662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.639 [2024-10-11 12:05:40.326291] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.639 [2024-10-11 12:05:40.326310] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.639 [2024-10-11 12:05:40.326317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.639 [2024-10-11 12:05:40.335410] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.639 [2024-10-11 12:05:40.335429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.639 [2024-10-11 12:05:40.335435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.906 [2024-10-11 12:05:40.345913] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.906 [2024-10-11 12:05:40.345933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.906 [2024-10-11 12:05:40.345939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.906 [2024-10-11 12:05:40.355329] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.906 [2024-10-11 12:05:40.355347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.906 [2024-10-11 12:05:40.355353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.906 [2024-10-11 12:05:40.363385] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.906 [2024-10-11 12:05:40.363404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.906 [2024-10-11 12:05:40.363411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.906 [2024-10-11 12:05:40.371571] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.906 [2024-10-11 12:05:40.371589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.906 [2024-10-11 12:05:40.371596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.906 [2024-10-11 12:05:40.381521] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.906 [2024-10-11 12:05:40.381539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.906 [2024-10-11 12:05:40.381546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.906 [2024-10-11 12:05:40.392936] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.906 [2024-10-11 12:05:40.392954] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.906 [2024-10-11 12:05:40.392960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.906 [2024-10-11 12:05:40.402509] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.906 [2024-10-11 12:05:40.402527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.906 [2024-10-11 12:05:40.402534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.906 [2024-10-11 12:05:40.412525] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.906 [2024-10-11 12:05:40.412544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.906 [2024-10-11 12:05:40.412553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.906 [2024-10-11 12:05:40.423323] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.906 [2024-10-11 12:05:40.423343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.906 [2024-10-11 12:05:40.423349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.906 [2024-10-11 12:05:40.433153] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.906 [2024-10-11 12:05:40.433172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.906 [2024-10-11 12:05:40.433178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.906 [2024-10-11 12:05:40.441921] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.906 [2024-10-11 12:05:40.441940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.906 [2024-10-11 12:05:40.441946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.906 [2024-10-11 12:05:40.450773] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.906 [2024-10-11 12:05:40.450792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.906 [2024-10-11 12:05:40.450798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.906 [2024-10-11 12:05:40.461955] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 
00:28:37.906 [2024-10-11 12:05:40.461974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.906 [2024-10-11 12:05:40.461980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.906 [2024-10-11 12:05:40.470309] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.906 [2024-10-11 12:05:40.470327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.906 [2024-10-11 12:05:40.470333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.906 [2024-10-11 12:05:40.481382] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.906 [2024-10-11 12:05:40.481401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.906 [2024-10-11 12:05:40.481407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.906 [2024-10-11 12:05:40.492107] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.906 [2024-10-11 12:05:40.492126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.906 [2024-10-11 12:05:40.492132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.906 [2024-10-11 12:05:40.503018] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.906 [2024-10-11 12:05:40.503037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.906 [2024-10-11 12:05:40.503044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.906 3160.00 IOPS, 395.00 MiB/s [2024-10-11T10:05:40.609Z] [2024-10-11 12:05:40.516118] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.906 [2024-10-11 12:05:40.516134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.906 [2024-10-11 12:05:40.516140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.906 [2024-10-11 12:05:40.526349] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.906 [2024-10-11 12:05:40.526367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.906 [2024-10-11 12:05:40.526374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.906 [2024-10-11 12:05:40.536020] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.906 [2024-10-11 12:05:40.536039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.906 [2024-10-11 12:05:40.536046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.906 [2024-10-11 12:05:40.545229] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.906 [2024-10-11 12:05:40.545247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.906 [2024-10-11 12:05:40.545254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.906 [2024-10-11 12:05:40.555627] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.906 [2024-10-11 12:05:40.555646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.906 [2024-10-11 12:05:40.555653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.906 [2024-10-11 12:05:40.565648] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.906 [2024-10-11 12:05:40.565666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.906 [2024-10-11 12:05:40.565672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.906 [2024-10-11 12:05:40.575584] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.906 [2024-10-11 12:05:40.575603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.906 [2024-10-11 12:05:40.575609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.906 [2024-10-11 12:05:40.586162] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.906 [2024-10-11 12:05:40.586180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.906 [2024-10-11 12:05:40.586190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.906 [2024-10-11 12:05:40.596436] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.906 [2024-10-11 12:05:40.596455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.906 [2024-10-11 12:05:40.596461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:28:37.906 [2024-10-11 12:05:40.603116] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:37.906 [2024-10-11 12:05:40.603134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.906 [2024-10-11 12:05:40.603140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.202 [2024-10-11 12:05:40.608857] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.203 [2024-10-11 12:05:40.608876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.203 [2024-10-11 12:05:40.608882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.203 [2024-10-11 12:05:40.612099] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.203 [2024-10-11 12:05:40.612118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.203 [2024-10-11 12:05:40.612124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.203 [2024-10-11 12:05:40.619149] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.203 [2024-10-11 12:05:40.619168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.203 [2024-10-11 12:05:40.619174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.203 [2024-10-11 12:05:40.627213] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.203 [2024-10-11 12:05:40.627231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.203 [2024-10-11 12:05:40.627238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.203 [2024-10-11 12:05:40.633054] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.203 [2024-10-11 12:05:40.633077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.203 [2024-10-11 12:05:40.633084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.203 [2024-10-11 12:05:40.642414] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.203 [2024-10-11 12:05:40.642432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.203 [2024-10-11 12:05:40.642439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.203 [2024-10-11 12:05:40.648288] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.203 [2024-10-11 12:05:40.648309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.203 [2024-10-11 12:05:40.648316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.203 [2024-10-11 12:05:40.656712] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.203 [2024-10-11 12:05:40.656730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.203 [2024-10-11 12:05:40.656737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.203 [2024-10-11 12:05:40.665748] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.203 [2024-10-11 12:05:40.665766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.203 [2024-10-11 12:05:40.665773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.203 [2024-10-11 12:05:40.676002] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.203 [2024-10-11 12:05:40.676020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.203 [2024-10-11 12:05:40.676027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.203 [2024-10-11 12:05:40.684213] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.203 [2024-10-11 12:05:40.684232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.203 [2024-10-11 12:05:40.684238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.203 [2024-10-11 12:05:40.693035] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.203 [2024-10-11 12:05:40.693054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.203 [2024-10-11 12:05:40.693060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.203 [2024-10-11 12:05:40.702790] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.203 [2024-10-11 12:05:40.702808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.203 [2024-10-11 12:05:40.702815] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.203 [2024-10-11 12:05:40.713435] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.203 [2024-10-11 12:05:40.713454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.203 [2024-10-11 12:05:40.713460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.203 [2024-10-11 12:05:40.721953] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.203 [2024-10-11 12:05:40.721971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.203 [2024-10-11 12:05:40.721977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.203 [2024-10-11 12:05:40.731695] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.203 [2024-10-11 12:05:40.731715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.203 [2024-10-11 12:05:40.731721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.203 [2024-10-11 12:05:40.742669] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.203 [2024-10-11 12:05:40.742688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.203 [2024-10-11 12:05:40.742695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.203 [2024-10-11 12:05:40.751385] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.203 [2024-10-11 12:05:40.751404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.203 [2024-10-11 12:05:40.751410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.203 [2024-10-11 12:05:40.761343] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.203 [2024-10-11 12:05:40.761362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.203 [2024-10-11 12:05:40.761368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.203 [2024-10-11 12:05:40.771093] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.203 [2024-10-11 12:05:40.771112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.203 [2024-10-11 12:05:40.771119] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.203 [2024-10-11 12:05:40.782451] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.203 [2024-10-11 12:05:40.782470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.203 [2024-10-11 12:05:40.782476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.203 [2024-10-11 12:05:40.791403] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.203 [2024-10-11 12:05:40.791422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.203 [2024-10-11 12:05:40.791428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.203 [2024-10-11 12:05:40.801853] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.203 [2024-10-11 12:05:40.801872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.204 [2024-10-11 12:05:40.801878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.204 [2024-10-11 12:05:40.810656] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.204 [2024-10-11 12:05:40.810675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.204 [2024-10-11 12:05:40.810685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.204 [2024-10-11 12:05:40.820843] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.204 [2024-10-11 12:05:40.820863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.204 [2024-10-11 12:05:40.820869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.204 [2024-10-11 12:05:40.830714] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.204 [2024-10-11 12:05:40.830733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.204 [2024-10-11 12:05:40.830740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.204 [2024-10-11 12:05:40.837455] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.204 [2024-10-11 12:05:40.837474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:38.204 [2024-10-11 12:05:40.837480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.204 [2024-10-11 12:05:40.846806] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.204 [2024-10-11 12:05:40.846825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.204 [2024-10-11 12:05:40.846832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.204 [2024-10-11 12:05:40.855912] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.204 [2024-10-11 12:05:40.855931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.204 [2024-10-11 12:05:40.855937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.204 [2024-10-11 12:05:40.866644] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.204 [2024-10-11 12:05:40.866663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.204 [2024-10-11 12:05:40.866670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.204 [2024-10-11 12:05:40.878000] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.204 [2024-10-11 12:05:40.878019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.204 [2024-10-11 12:05:40.878025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.204 [2024-10-11 12:05:40.887658] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.204 [2024-10-11 12:05:40.887677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.204 [2024-10-11 12:05:40.887683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.487 [2024-10-11 12:05:40.899338] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.487 [2024-10-11 12:05:40.899361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.487 [2024-10-11 12:05:40.899367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.487 [2024-10-11 12:05:40.907742] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.487 [2024-10-11 12:05:40.907761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23968 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.487 [2024-10-11 12:05:40.907767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.487 [2024-10-11 12:05:40.918184] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.487 [2024-10-11 12:05:40.918203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.487 [2024-10-11 12:05:40.918209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.487 [2024-10-11 12:05:40.928916] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.487 [2024-10-11 12:05:40.928934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.487 [2024-10-11 12:05:40.928941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.487 [2024-10-11 12:05:40.939392] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.487 [2024-10-11 12:05:40.939411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.487 [2024-10-11 12:05:40.939417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.487 [2024-10-11 12:05:40.949743] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.487 [2024-10-11 12:05:40.949762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.487 [2024-10-11 12:05:40.949768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.487 [2024-10-11 12:05:40.959381] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.487 [2024-10-11 12:05:40.959400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.487 [2024-10-11 12:05:40.959406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.487 [2024-10-11 12:05:40.967513] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.487 [2024-10-11 12:05:40.967532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.487 [2024-10-11 12:05:40.967538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.487 [2024-10-11 12:05:40.976597] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.487 [2024-10-11 12:05:40.976615] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.487 [2024-10-11 12:05:40.976624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.487 [2024-10-11 12:05:40.986240] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.487 [2024-10-11 12:05:40.986260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.487 [2024-10-11 12:05:40.986266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.487 [2024-10-11 12:05:40.995600] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.487 [2024-10-11 12:05:40.995619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.487 [2024-10-11 12:05:40.995625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.487 [2024-10-11 12:05:41.006614] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.487 [2024-10-11 12:05:41.006633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.487 [2024-10-11 12:05:41.006639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.494 [2024-10-11 12:05:41.016680] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.494 [2024-10-11 12:05:41.016699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.494 [2024-10-11 12:05:41.016705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.494 [2024-10-11 12:05:41.028092] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.494 [2024-10-11 12:05:41.028111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.494 [2024-10-11 12:05:41.028117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.494 [2024-10-11 12:05:41.039396] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.494 [2024-10-11 12:05:41.039415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.494 [2024-10-11 12:05:41.039421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.494 [2024-10-11 12:05:41.050621] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.495 [2024-10-11 12:05:41.050641] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.495 [2024-10-11 12:05:41.050647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.495 [2024-10-11 12:05:41.055200] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.495 [2024-10-11 12:05:41.055218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.495 [2024-10-11 12:05:41.055225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.495 [2024-10-11 12:05:41.063492] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.495 [2024-10-11 12:05:41.063515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.495 [2024-10-11 12:05:41.063521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.495 [2024-10-11 12:05:41.072509] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.495 [2024-10-11 12:05:41.072528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.495 [2024-10-11 12:05:41.072534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.495 [2024-10-11 12:05:41.083280] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.495 [2024-10-11 12:05:41.083299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.495 [2024-10-11 12:05:41.083305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.495 [2024-10-11 12:05:41.090797] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.495 [2024-10-11 12:05:41.090816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.495 [2024-10-11 12:05:41.090822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.495 [2024-10-11 12:05:41.098376] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.495 [2024-10-11 12:05:41.098395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.495 [2024-10-11 12:05:41.098401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.495 [2024-10-11 12:05:41.105336] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 
00:28:38.495 [2024-10-11 12:05:41.105355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.495 [2024-10-11 12:05:41.105361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.495 [2024-10-11 12:05:41.114960] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.495 [2024-10-11 12:05:41.114978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.495 [2024-10-11 12:05:41.114987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.495 [2024-10-11 12:05:41.121706] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.495 [2024-10-11 12:05:41.121723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.495 [2024-10-11 12:05:41.121730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.495 [2024-10-11 12:05:41.129475] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.495 [2024-10-11 12:05:41.129493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.495 [2024-10-11 12:05:41.129499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.495 [2024-10-11 12:05:41.139663] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.495 [2024-10-11 12:05:41.139682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.495 [2024-10-11 12:05:41.139688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.495 [2024-10-11 12:05:41.148628] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.495 [2024-10-11 12:05:41.148647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.495 [2024-10-11 12:05:41.148654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.495 [2024-10-11 12:05:41.156577] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.495 [2024-10-11 12:05:41.156595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.495 [2024-10-11 12:05:41.156601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.495 [2024-10-11 12:05:41.162332] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1c49d30) 00:28:38.495 [2024-10-11 12:05:41.162350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.495 [2024-10-11 12:05:41.162356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.495 [2024-10-11 12:05:41.172178] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.495 [2024-10-11 12:05:41.172197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.495 [2024-10-11 12:05:41.172203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.791 [2024-10-11 12:05:41.179214] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.791 [2024-10-11 12:05:41.179233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.791 [2024-10-11 12:05:41.179239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.791 [2024-10-11 12:05:41.188153] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.791 [2024-10-11 12:05:41.188172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.791 [2024-10-11 12:05:41.188178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.791 [2024-10-11 12:05:41.196914] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.791 [2024-10-11 12:05:41.196933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.791 [2024-10-11 12:05:41.196939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.791 [2024-10-11 12:05:41.205271] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.791 [2024-10-11 12:05:41.205291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.791 [2024-10-11 12:05:41.205300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.791 [2024-10-11 12:05:41.212276] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.791 [2024-10-11 12:05:41.212295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.791 [2024-10-11 12:05:41.212301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.791 [2024-10-11 12:05:41.218291] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.791 [2024-10-11 12:05:41.218309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.791 [2024-10-11 12:05:41.218316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.791 [2024-10-11 12:05:41.223217] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.791 [2024-10-11 12:05:41.223236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.791 [2024-10-11 12:05:41.223243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.791 [2024-10-11 12:05:41.227843] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.791 [2024-10-11 12:05:41.227862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.791 [2024-10-11 12:05:41.227868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.791 [2024-10-11 12:05:41.232289] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.791 [2024-10-11 12:05:41.232307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.791 [2024-10-11 12:05:41.232313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.791 [2024-10-11 12:05:41.238732] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.791 [2024-10-11 12:05:41.238750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.791 [2024-10-11 12:05:41.238757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.791 [2024-10-11 12:05:41.246282] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.791 [2024-10-11 12:05:41.246300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.791 [2024-10-11 12:05:41.246306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.791 [2024-10-11 12:05:41.253748] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.791 [2024-10-11 12:05:41.253767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.791 [2024-10-11 12:05:41.253773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:28:38.791 [2024-10-11 12:05:41.260825] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.791 [2024-10-11 12:05:41.260847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.791 [2024-10-11 12:05:41.260853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.791 [2024-10-11 12:05:41.267386] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.791 [2024-10-11 12:05:41.267405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.791 [2024-10-11 12:05:41.267411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.791 [2024-10-11 12:05:41.274487] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.791 [2024-10-11 12:05:41.274506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.791 [2024-10-11 12:05:41.274513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.791 [2024-10-11 12:05:41.280962] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.791 [2024-10-11 12:05:41.280981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.791 [2024-10-11 12:05:41.280987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.791 [2024-10-11 12:05:41.287402] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.791 [2024-10-11 12:05:41.287421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.791 [2024-10-11 12:05:41.287427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.791 [2024-10-11 12:05:41.296244] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.791 [2024-10-11 12:05:41.296263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.791 [2024-10-11 12:05:41.296269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.791 [2024-10-11 12:05:41.302416] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.791 [2024-10-11 12:05:41.302435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.791 [2024-10-11 12:05:41.302441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.791 [2024-10-11 12:05:41.307218] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.791 [2024-10-11 12:05:41.307237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.791 [2024-10-11 12:05:41.307243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.791 [2024-10-11 12:05:41.314612] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.791 [2024-10-11 12:05:41.314631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.791 [2024-10-11 12:05:41.314637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.791 [2024-10-11 12:05:41.320857] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.791 [2024-10-11 12:05:41.320876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.791 [2024-10-11 12:05:41.320882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.791 [2024-10-11 12:05:41.330486] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.791 [2024-10-11 12:05:41.330504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.791 [2024-10-11 12:05:41.330510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.791 [2024-10-11 12:05:41.336308] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.791 [2024-10-11 12:05:41.336327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.791 [2024-10-11 12:05:41.336333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.791 [2024-10-11 12:05:41.342806] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.791 [2024-10-11 12:05:41.342826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.791 [2024-10-11 12:05:41.342832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.791 [2024-10-11 12:05:41.349753] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.791 [2024-10-11 12:05:41.349772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.791 [2024-10-11 12:05:41.349779] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.792 [2024-10-11 12:05:41.356146] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.792 [2024-10-11 12:05:41.356166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.792 [2024-10-11 12:05:41.356174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.792 [2024-10-11 12:05:41.362289] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.792 [2024-10-11 12:05:41.362308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.792 [2024-10-11 12:05:41.362315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.792 [2024-10-11 12:05:41.366894] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.792 [2024-10-11 12:05:41.366912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.792 [2024-10-11 12:05:41.366918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.792 [2024-10-11 12:05:41.372479] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.792 [2024-10-11 12:05:41.372498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.792 [2024-10-11 12:05:41.372508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.792 [2024-10-11 12:05:41.380521] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.792 [2024-10-11 12:05:41.380540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.792 [2024-10-11 12:05:41.380546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.792 [2024-10-11 12:05:41.384400] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.792 [2024-10-11 12:05:41.384419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.792 [2024-10-11 12:05:41.384425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.792 [2024-10-11 12:05:41.387706] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.792 [2024-10-11 12:05:41.387724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.792 
[2024-10-11 12:05:41.387730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.792 [2024-10-11 12:05:41.398757] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.792 [2024-10-11 12:05:41.398776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.792 [2024-10-11 12:05:41.398782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.792 [2024-10-11 12:05:41.410325] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.792 [2024-10-11 12:05:41.410345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.792 [2024-10-11 12:05:41.410351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.792 [2024-10-11 12:05:41.422261] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.792 [2024-10-11 12:05:41.422281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.792 [2024-10-11 12:05:41.422287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.792 [2024-10-11 12:05:41.433964] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.792 [2024-10-11 12:05:41.433983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.792 [2024-10-11 12:05:41.433990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.792 [2024-10-11 12:05:41.439300] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.792 [2024-10-11 12:05:41.439319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.792 [2024-10-11 12:05:41.439326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.792 [2024-10-11 12:05:41.444246] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.792 [2024-10-11 12:05:41.444265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.792 [2024-10-11 12:05:41.444271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.792 [2024-10-11 12:05:41.449586] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.792 [2024-10-11 12:05:41.449606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14656 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.792 [2024-10-11 12:05:41.449612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.792 [2024-10-11 12:05:41.456121] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.792 [2024-10-11 12:05:41.456139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.792 [2024-10-11 12:05:41.456145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.792 [2024-10-11 12:05:41.464960] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.792 [2024-10-11 12:05:41.464979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.792 [2024-10-11 12:05:41.464985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.792 [2024-10-11 12:05:41.472444] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.792 [2024-10-11 12:05:41.472464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.792 [2024-10-11 12:05:41.472472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.792 [2024-10-11 12:05:41.477587] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.792 [2024-10-11 12:05:41.477606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.792 [2024-10-11 12:05:41.477612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.792 [2024-10-11 12:05:41.484553] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.792 [2024-10-11 12:05:41.484572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.792 [2024-10-11 12:05:41.484578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.792 [2024-10-11 12:05:41.490262] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:38.792 [2024-10-11 12:05:41.490281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.792 [2024-10-11 12:05:41.490288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.054 [2024-10-11 12:05:41.498999] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:39.054 [2024-10-11 12:05:41.499018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:11 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.054 [2024-10-11 12:05:41.499028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.054 [2024-10-11 12:05:41.506903] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:39.054 [2024-10-11 12:05:41.506922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.054 [2024-10-11 12:05:41.506928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.054 [2024-10-11 12:05:41.511117] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:39.054 [2024-10-11 12:05:41.511136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.054 [2024-10-11 12:05:41.511143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.054 [2024-10-11 12:05:41.517143] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c49d30) 00:28:39.054 [2024-10-11 12:05:41.517162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.054 [2024-10-11 12:05:41.517169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.054 3475.00 IOPS, 434.38 MiB/s 00:28:39.054 Latency(us) 00:28:39.054 [2024-10-11T10:05:41.757Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:39.054 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:39.054 nvme0n1 : 2.00 3476.95 434.62 0.00 0.00 4598.62 853.33 13544.11 00:28:39.054 [2024-10-11T10:05:41.757Z] =================================================================================================================== 00:28:39.054 [2024-10-11T10:05:41.757Z] Total : 3476.95 434.62 0.00 0.00 4598.62 853.33 13544.11 00:28:39.054 { 00:28:39.054 "results": [ 00:28:39.054 { 00:28:39.054 "job": "nvme0n1", 00:28:39.054 "core_mask": "0x2", 00:28:39.054 "workload": "randread", 00:28:39.054 "status": "finished", 00:28:39.054 "queue_depth": 16, 00:28:39.054 "io_size": 131072, 00:28:39.054 "runtime": 2.003767, 00:28:39.054 "iops": 3476.9511624854586, 00:28:39.054 "mibps": 434.6188953106823, 00:28:39.054 "io_failed": 0, 00:28:39.054 "io_timeout": 0, 00:28:39.054 "avg_latency_us": 4598.622274532318, 00:28:39.054 "min_latency_us": 853.3333333333334, 00:28:39.054 "max_latency_us": 13544.106666666667 00:28:39.054 } 00:28:39.054 ], 00:28:39.054 "core_count": 1 00:28:39.054 } 00:28:39.054 12:05:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:39.054 12:05:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:39.054 12:05:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:39.054 | .driver_specific 00:28:39.054 | .nvme_error 00:28:39.054 | .status_code 00:28:39.054 | 
.command_transient_transport_error' 00:28:39.054 12:05:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:39.054 12:05:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 224 > 0 )) 00:28:39.054 12:05:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2101666 00:28:39.054 12:05:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2101666 ']' 00:28:39.054 12:05:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2101666 00:28:39.054 12:05:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:28:39.054 12:05:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:39.054 12:05:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2101666 00:28:39.316 12:05:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:39.316 12:05:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:39.316 12:05:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2101666' 00:28:39.316 killing process with pid 2101666 00:28:39.316 12:05:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2101666 00:28:39.316 Received shutdown signal, test time was about 2.000000 seconds 00:28:39.316 00:28:39.316 Latency(us) 00:28:39.316 [2024-10-11T10:05:42.019Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:39.316 [2024-10-11T10:05:42.019Z] =================================================================================================================== 00:28:39.316 [2024-10-11T10:05:42.019Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:39.316 12:05:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2101666 00:28:39.316 12:05:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:28:39.316 12:05:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:39.316 12:05:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:39.316 12:05:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:39.316 12:05:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:39.316 12:05:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2102477 00:28:39.316 12:05:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2102477 /var/tmp/bperf.sock 00:28:39.316 12:05:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2102477 ']' 00:28:39.316 12:05:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:28:39.316 12:05:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:39.317 12:05:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:39.317 12:05:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:39.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:39.317 12:05:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:39.317 12:05:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:39.317 [2024-10-11 12:05:41.939014] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:28:39.317 [2024-10-11 12:05:41.939076] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2102477 ] 00:28:39.317 [2024-10-11 12:05:42.016189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:39.578 [2024-10-11 12:05:42.045991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:39.578 12:05:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:39.578 12:05:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:28:39.578 12:05:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:39.578 12:05:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:39.838 12:05:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:39.838 12:05:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.838 12:05:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:39.838 12:05:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.838 12:05:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:39.838 12:05:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:40.100 nvme0n1 00:28:40.100 12:05:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:40.100 12:05:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.100 12:05:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:40.100 12:05:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.100 12:05:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:40.100 12:05:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:40.100 Running I/O for 2 seconds... 00:28:40.100 [2024-10-11 12:05:42.689567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166f57b0 00:28:40.100 [2024-10-11 12:05:42.690405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.100 [2024-10-11 12:05:42.690435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:40.100 [2024-10-11 12:05:42.698422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e4578 00:28:40.100 [2024-10-11 12:05:42.699132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.100 [2024-10-11 12:05:42.699152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:40.100 [2024-10-11 12:05:42.706934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166ff3c8 00:28:40.100 [2024-10-11 12:05:42.707550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:6384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.100 [2024-10-11 12:05:42.707567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:40.100 [2024-10-11 12:05:42.715426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166f1ca0 00:28:40.100 [2024-10-11 12:05:42.716167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.100 [2024-10-11 12:05:42.716184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:40.100 [2024-10-11 12:05:42.723918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e8088 00:28:40.100 [2024-10-11 12:05:42.724672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.100 [2024-10-11 12:05:42.724693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:40.100 [2024-10-11 12:05:42.732416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166f57b0 00:28:40.100 [2024-10-11 12:05:42.733129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:6911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.100 [2024-10-11 12:05:42.733146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:40.100 [2024-10-11 12:05:42.740908] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e4578 00:28:40.100 [2024-10-11 12:05:42.741656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:6830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.100 [2024-10-11 12:05:42.741673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:40.100 [2024-10-11 12:05:42.749374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166ff3c8 00:28:40.100 [2024-10-11 12:05:42.750119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:23800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.100 [2024-10-11 12:05:42.750136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:40.100 [2024-10-11 12:05:42.757860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166f1ca0 00:28:40.100 [2024-10-11 12:05:42.758609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:11388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.100 [2024-10-11 12:05:42.758625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:40.100 [2024-10-11 12:05:42.766403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e8088 00:28:40.100 [2024-10-11 12:05:42.767142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:17681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.100 [2024-10-11 12:05:42.767159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:40.100 [2024-10-11 12:05:42.774890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166f57b0 00:28:40.100 [2024-10-11 12:05:42.775639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:23137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.100 [2024-10-11 12:05:42.775656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:40.100 [2024-10-11 12:05:42.783385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e4578 00:28:40.100 [2024-10-11 12:05:42.784118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:3394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.100 [2024-10-11 12:05:42.784135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:40.100 [2024-10-11 12:05:42.791851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166ff3c8 00:28:40.100 [2024-10-11 12:05:42.792599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:19623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.100 [2024-10-11 12:05:42.792616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:40.100 
[2024-10-11 12:05:42.800313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166f1ca0 00:28:40.100 [2024-10-11 12:05:42.801089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:5651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.100 [2024-10-11 12:05:42.801106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:40.362 [2024-10-11 12:05:42.808785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e8088 00:28:40.362 [2024-10-11 12:05:42.809530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.362 [2024-10-11 12:05:42.809547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:40.362 [2024-10-11 12:05:42.817247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166f57b0 00:28:40.362 [2024-10-11 12:05:42.818001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.362 [2024-10-11 12:05:42.818018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:40.362 [2024-10-11 12:05:42.825732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e4578 00:28:40.363 [2024-10-11 12:05:42.826495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.363 [2024-10-11 12:05:42.826513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:40.363 [2024-10-11 12:05:42.834183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166ff3c8 00:28:40.363 [2024-10-11 12:05:42.834931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.363 [2024-10-11 12:05:42.834948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:40.363 [2024-10-11 12:05:42.842649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166f1ca0 00:28:40.363 [2024-10-11 12:05:42.843397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.363 [2024-10-11 12:05:42.843414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:40.363 [2024-10-11 12:05:42.851114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e8088 00:28:40.363 [2024-10-11 12:05:42.851857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.363 [2024-10-11 12:05:42.851874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005e p:0 m:0 dnr:0 
00:28:40.363 [2024-10-11 12:05:42.859577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166f57b0 00:28:40.363 [2024-10-11 12:05:42.860315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.363 [2024-10-11 12:05:42.860332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:40.363 [2024-10-11 12:05:42.868029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e4578 00:28:40.363 [2024-10-11 12:05:42.868789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.363 [2024-10-11 12:05:42.868806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:40.363 [2024-10-11 12:05:42.876530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166ff3c8 00:28:40.363 [2024-10-11 12:05:42.877284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.363 [2024-10-11 12:05:42.877301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:40.363 [2024-10-11 12:05:42.886129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166f1ca0 00:28:40.363 [2024-10-11 12:05:42.887397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.363 [2024-10-11 12:05:42.887413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:40.363 [2024-10-11 12:05:42.894321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166f4298 00:28:40.363 [2024-10-11 12:05:42.895376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:15632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.363 [2024-10-11 12:05:42.895392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:40.363 [2024-10-11 12:05:42.902679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166fda78 00:28:40.363 [2024-10-11 12:05:42.903732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.363 [2024-10-11 12:05:42.903748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:40.363 [2024-10-11 12:05:42.911150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166f9b30 00:28:40.363 [2024-10-11 12:05:42.912188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.363 [2024-10-11 12:05:42.912205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 
sqhd:006e p:0 m:0 dnr:0 00:28:40.363 [2024-10-11 12:05:42.919614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166f4298 00:28:40.363 [2024-10-11 12:05:42.920630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:8120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.363 [2024-10-11 12:05:42.920648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:40.363 [2024-10-11 12:05:42.928228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166ff3c8 00:28:40.363 [2024-10-11 12:05:42.929247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:21869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.363 [2024-10-11 12:05:42.929264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:40.363 [2024-10-11 12:05:42.936739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166eea00 00:28:40.363 [2024-10-11 12:05:42.937776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:17284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.363 [2024-10-11 12:05:42.937793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:40.363 [2024-10-11 12:05:42.945236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166fc998 00:28:40.363 [2024-10-11 12:05:42.946231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.363 [2024-10-11 12:05:42.946250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:40.363 [2024-10-11 12:05:42.953727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166f6020 00:28:40.363 [2024-10-11 12:05:42.954752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.363 [2024-10-11 12:05:42.954769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:40.363 [2024-10-11 12:05:42.962232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166fbcf0 00:28:40.363 [2024-10-11 12:05:42.963277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:9120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.363 [2024-10-11 12:05:42.963295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:40.363 [2024-10-11 12:05:42.970752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e2c28 00:28:40.363 [2024-10-11 12:05:42.971789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.363 [2024-10-11 12:05:42.971806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:40.363 [2024-10-11 12:05:42.979247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166f4298 00:28:40.363 [2024-10-11 12:05:42.980286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:8982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.363 [2024-10-11 12:05:42.980303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:40.363 [2024-10-11 12:05:42.987747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166ff3c8 00:28:40.363 [2024-10-11 12:05:42.988787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:7952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.363 [2024-10-11 12:05:42.988804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:40.363 [2024-10-11 12:05:42.996245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166eea00 00:28:40.363 [2024-10-11 12:05:42.997286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.363 [2024-10-11 12:05:42.997303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:40.363 [2024-10-11 12:05:43.004716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166fc998 00:28:40.363 [2024-10-11 12:05:43.005737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.363 [2024-10-11 12:05:43.005754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:40.363 [2024-10-11 12:05:43.013208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166f6020 00:28:40.363 [2024-10-11 12:05:43.014242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.363 [2024-10-11 12:05:43.014260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:40.363 [2024-10-11 12:05:43.021733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166fbcf0 00:28:40.363 [2024-10-11 12:05:43.022768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:9101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.363 [2024-10-11 12:05:43.022786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:40.363 [2024-10-11 12:05:43.030215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e2c28 00:28:40.363 [2024-10-11 12:05:43.031236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:6896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.363 [2024-10-11 12:05:43.031254] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:40.363 [2024-10-11 12:05:43.038713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166f4298 00:28:40.363 [2024-10-11 12:05:43.039753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.363 [2024-10-11 12:05:43.039770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:40.363 [2024-10-11 12:05:43.047392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166ff3c8 00:28:40.364 [2024-10-11 12:05:43.048433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:21126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.364 [2024-10-11 12:05:43.048450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:40.364 [2024-10-11 12:05:43.055876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166eea00 00:28:40.364 [2024-10-11 12:05:43.056902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.364 [2024-10-11 12:05:43.056919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:40.364 [2024-10-11 12:05:43.064381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166fc998 00:28:40.364 [2024-10-11 12:05:43.065397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:15447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.364 [2024-10-11 12:05:43.065413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:40.625 [2024-10-11 12:05:43.072873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166f6020 00:28:40.625 [2024-10-11 12:05:43.073896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.625 [2024-10-11 12:05:43.073913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:40.625 [2024-10-11 12:05:43.081375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166fbcf0 00:28:40.625 [2024-10-11 12:05:43.082396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.625 [2024-10-11 12:05:43.082413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:40.626 [2024-10-11 12:05:43.089862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e2c28 00:28:40.626 [2024-10-11 12:05:43.090891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:11493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.626 [2024-10-11 12:05:43.090907] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:40.626 [2024-10-11 12:05:43.098336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166f4298 00:28:40.626 [2024-10-11 12:05:43.099372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:43 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.626 [2024-10-11 12:05:43.099388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:40.626 [2024-10-11 12:05:43.106585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166dece0 00:28:40.626 [2024-10-11 12:05:43.107512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:13713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.626 [2024-10-11 12:05:43.107530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:40.626 [2024-10-11 12:05:43.115635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166f0788 00:28:40.626 [2024-10-11 12:05:43.116757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:18755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.626 [2024-10-11 12:05:43.116774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:40.626 [2024-10-11 12:05:43.124103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166fb480 00:28:40.626 [2024-10-11 12:05:43.125178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.626 [2024-10-11 12:05:43.125194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:40.626 [2024-10-11 12:05:43.132560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166fe2e8 00:28:40.626 [2024-10-11 12:05:43.133679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:9000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.626 [2024-10-11 12:05:43.133696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:40.626 [2024-10-11 12:05:43.141005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166fa3a0 00:28:40.626 [2024-10-11 12:05:43.142115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.626 [2024-10-11 12:05:43.142132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:40.626 [2024-10-11 12:05:43.149501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166f2510 00:28:40.626 [2024-10-11 12:05:43.150566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.626 
[2024-10-11 12:05:43.150582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:40.626 [2024-10-11 12:05:43.157965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e3060 00:28:40.626 [2024-10-11 12:05:43.159089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.626 [2024-10-11 12:05:43.159106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:40.626 [2024-10-11 12:05:43.166733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e3d08 00:28:40.626 [2024-10-11 12:05:43.167953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:3963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.626 [2024-10-11 12:05:43.167973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:40.626 [2024-10-11 12:05:43.173755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166f92c0 00:28:40.626 [2024-10-11 12:05:43.174518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:5307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.626 [2024-10-11 12:05:43.174534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:40.626 [2024-10-11 12:05:43.182165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166f0bc0 00:28:40.626 [2024-10-11 12:05:43.182912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.626 [2024-10-11 12:05:43.182928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:40.626 [2024-10-11 12:05:43.190611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166eea00 00:28:40.626 [2024-10-11 12:05:43.191249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:15893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.626 [2024-10-11 12:05:43.191265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:40.626 [2024-10-11 12:05:43.199070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166efae0 00:28:40.626 [2024-10-11 12:05:43.199775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.626 [2024-10-11 12:05:43.199792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:40.626 [2024-10-11 12:05:43.207537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166de8a8 00:28:40.626 [2024-10-11 12:05:43.208260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:9693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:40.626 [2024-10-11 12:05:43.208276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:40.626 [2024-10-11 12:05:43.216003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166f3e60 00:28:40.626 [2024-10-11 12:05:43.216745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.626 [2024-10-11 12:05:43.216761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:40.626 [2024-10-11 12:05:43.224459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166ebb98 00:28:40.626 [2024-10-11 12:05:43.225202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:23935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.626 [2024-10-11 12:05:43.225219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:40.626 [2024-10-11 12:05:43.232900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166ecc78 00:28:40.626 [2024-10-11 12:05:43.233650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:14352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.626 [2024-10-11 12:05:43.233667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:40.626 [2024-10-11 12:05:43.241349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166fc998 00:28:40.626 [2024-10-11 12:05:43.242091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:7386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.626 [2024-10-11 12:05:43.242110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:40.626 [2024-10-11 12:05:43.249804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e6b70 00:28:40.626 [2024-10-11 12:05:43.250572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:1881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.626 [2024-10-11 12:05:43.250589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:40.626 [2024-10-11 12:05:43.260578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e0ea0 00:28:40.626 [2024-10-11 12:05:43.262023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.626 [2024-10-11 12:05:43.262040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:40.626 [2024-10-11 12:05:43.268069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e0630 00:28:40.626 [2024-10-11 12:05:43.268942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17882 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:40.626 [2024-10-11 12:05:43.268958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.626 [2024-10-11 12:05:43.276886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166de8a8 00:28:40.626 [2024-10-11 12:05:43.277958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.626 [2024-10-11 12:05:43.277974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.626 [2024-10-11 12:05:43.285288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166f0350 00:28:40.626 [2024-10-11 12:05:43.286360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:11658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.626 [2024-10-11 12:05:43.286376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.626 [2024-10-11 12:05:43.293766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e3060 00:28:40.626 [2024-10-11 12:05:43.294862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.626 [2024-10-11 12:05:43.294878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.626 [2024-10-11 12:05:43.302229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e8088 00:28:40.626 [2024-10-11 12:05:43.303334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.626 [2024-10-11 12:05:43.303351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.627 [2024-10-11 12:05:43.310678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e95a0 00:28:40.627 [2024-10-11 12:05:43.311755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:20603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.627 [2024-10-11 12:05:43.311772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.627 [2024-10-11 12:05:43.319123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166f7da8 00:28:40.627 [2024-10-11 12:05:43.320194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:14217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.627 [2024-10-11 12:05:43.320211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.627 [2024-10-11 12:05:43.327558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166ee190 00:28:40.627 [2024-10-11 12:05:43.328659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:20156 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.627 [2024-10-11 12:05:43.328676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.889 [2024-10-11 12:05:43.336021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e4578 00:28:40.889 [2024-10-11 12:05:43.337141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.889 [2024-10-11 12:05:43.337159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.889 [2024-10-11 12:05:43.344487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e5658 00:28:40.889 [2024-10-11 12:05:43.345580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.889 [2024-10-11 12:05:43.345598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.889 [2024-10-11 12:05:43.352951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166eb328 00:28:40.889 [2024-10-11 12:05:43.354060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.889 [2024-10-11 12:05:43.354081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.889 [2024-10-11 12:05:43.361396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166ec408 00:28:40.889 [2024-10-11 12:05:43.362491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.889 [2024-10-11 12:05:43.362508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.889 [2024-10-11 12:05:43.369837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166ed4e8 00:28:40.889 [2024-10-11 12:05:43.370943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.889 [2024-10-11 12:05:43.370960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.889 [2024-10-11 12:05:43.378390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166f9b30 00:28:40.889 [2024-10-11 12:05:43.379484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.889 [2024-10-11 12:05:43.379500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.889 [2024-10-11 12:05:43.386849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e6300 00:28:40.889 [2024-10-11 12:05:43.387964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:10 nsid:1 lba:5302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.889 [2024-10-11 12:05:43.387982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.889 [2024-10-11 12:05:43.395324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e23b8 00:28:40.889 [2024-10-11 12:05:43.396397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.889 [2024-10-11 12:05:43.396414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.889 [2024-10-11 12:05:43.403762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166ef6a8 00:28:40.890 [2024-10-11 12:05:43.404877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.890 [2024-10-11 12:05:43.404894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.890 [2024-10-11 12:05:43.412220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166fb480 00:28:40.890 [2024-10-11 12:05:43.413330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.890 [2024-10-11 12:05:43.413347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.890 [2024-10-11 12:05:43.420672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166f3e60 00:28:40.890 [2024-10-11 12:05:43.421789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.890 [2024-10-11 12:05:43.421805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.890 [2024-10-11 12:05:43.429145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166f96f8 00:28:40.890 [2024-10-11 12:05:43.430251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.890 [2024-10-11 12:05:43.430268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.890 [2024-10-11 12:05:43.437608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e2c28 00:28:40.890 [2024-10-11 12:05:43.438712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.890 [2024-10-11 12:05:43.438728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.890 [2024-10-11 12:05:43.446076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e88f8 00:28:40.890 [2024-10-11 12:05:43.447171] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.890 [2024-10-11 12:05:43.447187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.890 [2024-10-11 12:05:43.454516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166f7100 00:28:40.890 [2024-10-11 12:05:43.455587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.890 [2024-10-11 12:05:43.455605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.890 [2024-10-11 12:05:43.462962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166f81e0 00:28:40.890 [2024-10-11 12:05:43.464071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.890 [2024-10-11 12:05:43.464091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.890 [2024-10-11 12:05:43.471412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e4140 00:28:40.890 [2024-10-11 12:05:43.472505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.890 [2024-10-11 12:05:43.472522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.890 [2024-10-11 12:05:43.479882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e5220 00:28:40.890 [2024-10-11 12:05:43.480991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.890 [2024-10-11 12:05:43.481008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.890 [2024-10-11 12:05:43.488356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166f8a50 00:28:40.890 [2024-10-11 12:05:43.489460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:14485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.890 [2024-10-11 12:05:43.489477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.890 [2024-10-11 12:05:43.496812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166ebfd0 00:28:40.890 [2024-10-11 12:05:43.497936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.890 [2024-10-11 12:05:43.497953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.890 [2024-10-11 12:05:43.505256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166ed0b0 00:28:40.890 [2024-10-11 12:05:43.506344] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.890 [2024-10-11 12:05:43.506361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.890 [2024-10-11 12:05:43.513703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166f8e88 00:28:40.890 [2024-10-11 12:05:43.514823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.890 [2024-10-11 12:05:43.514840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.890 [2024-10-11 12:05:43.522176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e6738 00:28:40.890 [2024-10-11 12:05:43.523274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.890 [2024-10-11 12:05:43.523291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.890 [2024-10-11 12:05:43.530644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166f0bc0 00:28:40.890 [2024-10-11 12:05:43.531713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.890 [2024-10-11 12:05:43.531729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.890 [2024-10-11 12:05:43.539096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166eea00 00:28:40.890 [2024-10-11 12:05:43.540195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.890 [2024-10-11 12:05:43.540212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.890 [2024-10-11 12:05:43.547557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166efae0 00:28:40.890 [2024-10-11 12:05:43.548660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.890 [2024-10-11 12:05:43.548677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.890 [2024-10-11 12:05:43.555998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166de8a8 00:28:40.890 [2024-10-11 12:05:43.557114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.890 [2024-10-11 12:05:43.557130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.890 [2024-10-11 12:05:43.564469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166f0350 00:28:40.890 [2024-10-11 
12:05:43.565557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.890 [2024-10-11 12:05:43.565574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.890 [2024-10-11 12:05:43.572926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e3060 00:28:40.890 [2024-10-11 12:05:43.574026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.890 [2024-10-11 12:05:43.574043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.890 [2024-10-11 12:05:43.581400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e8088 00:28:40.890 [2024-10-11 12:05:43.582457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.890 [2024-10-11 12:05:43.582473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.890 [2024-10-11 12:05:43.589843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e95a0 00:28:40.890 [2024-10-11 12:05:43.590947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.890 [2024-10-11 12:05:43.590964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.152 [2024-10-11 12:05:43.598289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166f7da8 00:28:41.152 [2024-10-11 12:05:43.599398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.152 [2024-10-11 12:05:43.599415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.152 [2024-10-11 12:05:43.606747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166ee190 00:28:41.152 [2024-10-11 12:05:43.607856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.152 [2024-10-11 12:05:43.607873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.152 [2024-10-11 12:05:43.615234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e4578 00:28:41.152 [2024-10-11 12:05:43.616336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:20902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.152 [2024-10-11 12:05:43.616353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.152 [2024-10-11 12:05:43.623699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e5658 
00:28:41.152 [2024-10-11 12:05:43.624802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.152 [2024-10-11 12:05:43.624819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.152 [2024-10-11 12:05:43.632157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166eb328 00:28:41.152 [2024-10-11 12:05:43.633268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.152 [2024-10-11 12:05:43.633285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.152 [2024-10-11 12:05:43.640609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166ec408 00:28:41.152 [2024-10-11 12:05:43.641706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.152 [2024-10-11 12:05:43.641723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.152 [2024-10-11 12:05:43.649051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166ed4e8 00:28:41.152 [2024-10-11 12:05:43.650126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.152 [2024-10-11 12:05:43.650144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.152 [2024-10-11 12:05:43.657506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166f9b30 00:28:41.152 [2024-10-11 12:05:43.658612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.152 [2024-10-11 12:05:43.658629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.152 [2024-10-11 12:05:43.665969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e6300 00:28:41.152 [2024-10-11 12:05:43.667060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.152 [2024-10-11 12:05:43.667080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.152 [2024-10-11 12:05:43.674424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e23b8 00:28:41.152 29770.00 IOPS, 116.29 MiB/s [2024-10-11T10:05:43.855Z] [2024-10-11 12:05:43.675508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.152 [2024-10-11 12:05:43.675524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.152 [2024-10-11 12:05:43.682893] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166ef270 00:28:41.152 [2024-10-11 12:05:43.683992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.152 [2024-10-11 12:05:43.684012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.152 [2024-10-11 12:05:43.691360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166f3a28 00:28:41.152 [2024-10-11 12:05:43.692457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.152 [2024-10-11 12:05:43.692474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.153 [2024-10-11 12:05:43.699813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e27f0 00:28:41.153 [2024-10-11 12:05:43.700880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.153 [2024-10-11 12:05:43.700896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.153 [2024-10-11 12:05:43.708280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166f7538 00:28:41.153 [2024-10-11 12:05:43.709345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.153 [2024-10-11 12:05:43.709363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.153 [2024-10-11 12:05:43.716735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e3d08 00:28:41.153 [2024-10-11 12:05:43.717823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.153 [2024-10-11 12:05:43.717839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.153 [2024-10-11 12:05:43.725221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166ee5c8 00:28:41.153 [2024-10-11 12:05:43.726308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.153 [2024-10-11 12:05:43.726325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.153 [2024-10-11 12:05:43.733664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166ecc78 00:28:41.153 [2024-10-11 12:05:43.734758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.153 [2024-10-11 12:05:43.734775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.153 [2024-10-11 12:05:43.742119] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e6b70 00:28:41.153 [2024-10-11 12:05:43.743202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.153 [2024-10-11 12:05:43.743219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.153 [2024-10-11 12:05:43.750579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166eea00 00:28:41.153 [2024-10-11 12:05:43.751660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.153 [2024-10-11 12:05:43.751676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.153 [2024-10-11 12:05:43.759054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166de8a8 00:28:41.153 [2024-10-11 12:05:43.760137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.153 [2024-10-11 12:05:43.760153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.153 [2024-10-11 12:05:43.767535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e3060 00:28:41.153 [2024-10-11 12:05:43.768624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.153 [2024-10-11 12:05:43.768641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.153 [2024-10-11 12:05:43.775998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e95a0 00:28:41.153 [2024-10-11 12:05:43.777069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.153 [2024-10-11 12:05:43.777086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.153 [2024-10-11 12:05:43.784444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166ee190 00:28:41.153 [2024-10-11 12:05:43.785544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.153 [2024-10-11 12:05:43.785561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.153 [2024-10-11 12:05:43.792911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e5658 00:28:41.153 [2024-10-11 12:05:43.793991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:9789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.153 [2024-10-11 12:05:43.794008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.153 [2024-10-11 
12:05:43.801365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166ec408 00:28:41.153 [2024-10-11 12:05:43.802469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.153 [2024-10-11 12:05:43.802486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.153 [2024-10-11 12:05:43.809841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166f9b30 00:28:41.153 [2024-10-11 12:05:43.810922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.153 [2024-10-11 12:05:43.810939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.153 [2024-10-11 12:05:43.818288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e23b8 00:28:41.153 [2024-10-11 12:05:43.819390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:20 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.153 [2024-10-11 12:05:43.819406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.153 [2024-10-11 12:05:43.826721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166ef270 00:28:41.153 [2024-10-11 12:05:43.827807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.153 [2024-10-11 12:05:43.827824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.153 [2024-10-11 12:05:43.835170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166f3a28 00:28:41.153 [2024-10-11 12:05:43.836263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.153 [2024-10-11 12:05:43.836280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.153 [2024-10-11 12:05:43.843625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e27f0 00:28:41.153 [2024-10-11 12:05:43.844712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.153 [2024-10-11 12:05:43.844729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.153 [2024-10-11 12:05:43.852077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166f7538 00:28:41.153 [2024-10-11 12:05:43.853144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.153 [2024-10-11 12:05:43.853161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:28:41.414 [2024-10-11 12:05:43.860536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e3d08 00:28:41.414 [2024-10-11 12:05:43.861630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.414 [2024-10-11 12:05:43.861646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.414 [2024-10-11 12:05:43.868968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166ee5c8 00:28:41.415 [2024-10-11 12:05:43.870066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.415 [2024-10-11 12:05:43.870083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.415 [2024-10-11 12:05:43.877406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166ecc78 00:28:41.415 [2024-10-11 12:05:43.878494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.415 [2024-10-11 12:05:43.878511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.415 [2024-10-11 12:05:43.885853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e6b70 00:28:41.415 [2024-10-11 12:05:43.886953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.415 [2024-10-11 12:05:43.886969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.415 [2024-10-11 12:05:43.894315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166eea00 00:28:41.415 [2024-10-11 12:05:43.895358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.415 [2024-10-11 12:05:43.895375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.415 [2024-10-11 12:05:43.902763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166de8a8 00:28:41.415 [2024-10-11 12:05:43.903865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.415 [2024-10-11 12:05:43.903885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.415 [2024-10-11 12:05:43.911265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e3060 00:28:41.415 [2024-10-11 12:05:43.912362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.415 [2024-10-11 12:05:43.912379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:41.415 [2024-10-11 12:05:43.919732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e95a0 00:28:41.415 [2024-10-11 12:05:43.920829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.415 [2024-10-11 12:05:43.920845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.415 [2024-10-11 12:05:43.928187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166ee190 00:28:41.415 [2024-10-11 12:05:43.929270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.415 [2024-10-11 12:05:43.929287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.415 [2024-10-11 12:05:43.936639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e5658 00:28:41.415 [2024-10-11 12:05:43.937711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.415 [2024-10-11 12:05:43.937727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.415 [2024-10-11 12:05:43.945096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166ec408 00:28:41.415 [2024-10-11 12:05:43.946185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.415 [2024-10-11 12:05:43.946202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.415 [2024-10-11 12:05:43.953554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166f9b30 00:28:41.415 [2024-10-11 12:05:43.954656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.415 [2024-10-11 12:05:43.954673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.415 [2024-10-11 12:05:43.961990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e23b8 00:28:41.415 [2024-10-11 12:05:43.963052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:7927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.415 [2024-10-11 12:05:43.963072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.415 [2024-10-11 12:05:43.970437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166ef270 00:28:41.415 [2024-10-11 12:05:43.971487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:2135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.415 [2024-10-11 12:05:43.971503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.415 [2024-10-11 12:05:43.978959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166f3a28 00:28:41.415 [2024-10-11 12:05:43.980048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.415 [2024-10-11 12:05:43.980068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.415 [2024-10-11 12:05:43.987419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e27f0 00:28:41.415 [2024-10-11 12:05:43.988500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.415 [2024-10-11 12:05:43.988518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.415 [2024-10-11 12:05:43.995887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166f7538 00:28:41.415 [2024-10-11 12:05:43.996973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.415 [2024-10-11 12:05:43.996991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.415 [2024-10-11 12:05:44.004333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e3d08 00:28:41.415 [2024-10-11 12:05:44.005422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.415 [2024-10-11 12:05:44.005439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.415 [2024-10-11 12:05:44.012776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166ee5c8 00:28:41.415 [2024-10-11 12:05:44.013882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.415 [2024-10-11 12:05:44.013899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.415 [2024-10-11 12:05:44.021229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166ecc78 00:28:41.415 [2024-10-11 12:05:44.022329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.415 [2024-10-11 12:05:44.022346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.415 [2024-10-11 12:05:44.029702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e6b70 00:28:41.415 [2024-10-11 12:05:44.030801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.415 [2024-10-11 12:05:44.030818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.415 [2024-10-11 12:05:44.038193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166eea00 00:28:41.415 [2024-10-11 12:05:44.039341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.415 [2024-10-11 12:05:44.039358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.415 [2024-10-11 12:05:44.046843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166de8a8 00:28:41.415 [2024-10-11 12:05:44.047945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.415 [2024-10-11 12:05:44.047962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.415 [2024-10-11 12:05:44.055305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e3060 00:28:41.415 [2024-10-11 12:05:44.056412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.415 [2024-10-11 12:05:44.056429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.415 [2024-10-11 12:05:44.063761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e95a0 00:28:41.415 [2024-10-11 12:05:44.064857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.415 [2024-10-11 12:05:44.064873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.415 [2024-10-11 12:05:44.072229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166ee190 00:28:41.415 [2024-10-11 12:05:44.073309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.415 [2024-10-11 12:05:44.073325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.415 [2024-10-11 12:05:44.080711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e5658 00:28:41.415 [2024-10-11 12:05:44.081803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.416 [2024-10-11 12:05:44.081820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.416 [2024-10-11 12:05:44.089178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166ec408 00:28:41.416 [2024-10-11 12:05:44.090276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.416 [2024-10-11 12:05:44.090292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.416 [2024-10-11 12:05:44.097628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166f9b30 00:28:41.416 [2024-10-11 12:05:44.098715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.416 [2024-10-11 12:05:44.098732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.416 [2024-10-11 12:05:44.106084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e23b8 00:28:41.416 [2024-10-11 12:05:44.107128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.416 [2024-10-11 12:05:44.107146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.416 [2024-10-11 12:05:44.114541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166ef270 00:28:41.416 [2024-10-11 12:05:44.115625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.416 [2024-10-11 12:05:44.115642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.677 [2024-10-11 12:05:44.123021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166f3a28 00:28:41.677 [2024-10-11 12:05:44.124111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.677 [2024-10-11 12:05:44.124130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.677 [2024-10-11 12:05:44.131498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e27f0 00:28:41.677 [2024-10-11 12:05:44.132586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.677 [2024-10-11 12:05:44.132602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.677 [2024-10-11 12:05:44.139953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166f7538 00:28:41.677 [2024-10-11 12:05:44.141032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.677 [2024-10-11 12:05:44.141049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.677 [2024-10-11 12:05:44.148409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e3d08 00:28:41.677 [2024-10-11 12:05:44.149487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.677 [2024-10-11 12:05:44.149503] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.677 [2024-10-11 12:05:44.156848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166ee5c8 00:28:41.677 [2024-10-11 12:05:44.157945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:20416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.677 [2024-10-11 12:05:44.157962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.678 [2024-10-11 12:05:44.165312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166ecc78 00:28:41.678 [2024-10-11 12:05:44.166405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:22517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.678 [2024-10-11 12:05:44.166422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.678 [2024-10-11 12:05:44.173766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e6b70 00:28:41.678 [2024-10-11 12:05:44.174869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:25333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.678 [2024-10-11 12:05:44.174885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.678 [2024-10-11 12:05:44.182259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166eea00 00:28:41.678 [2024-10-11 12:05:44.183363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:11487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.678 [2024-10-11 12:05:44.183379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.678 [2024-10-11 12:05:44.190711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166de8a8 00:28:41.678 [2024-10-11 12:05:44.191801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.678 [2024-10-11 12:05:44.191818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.678 [2024-10-11 12:05:44.199160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e3060 00:28:41.678 [2024-10-11 12:05:44.200205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:19146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.678 [2024-10-11 12:05:44.200222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.678 [2024-10-11 12:05:44.207601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e95a0 00:28:41.678 [2024-10-11 12:05:44.208653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.678 [2024-10-11 12:05:44.208670] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.678 [2024-10-11 12:05:44.216057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166ee190 00:28:41.678 [2024-10-11 12:05:44.217142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:3564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.678 [2024-10-11 12:05:44.217159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.678 [2024-10-11 12:05:44.224521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e5658 00:28:41.678 [2024-10-11 12:05:44.225604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:24921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.678 [2024-10-11 12:05:44.225620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.678 [2024-10-11 12:05:44.232977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166ec408 00:28:41.678 [2024-10-11 12:05:44.234069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.678 [2024-10-11 12:05:44.234085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.678 [2024-10-11 12:05:44.241414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166f9b30 00:28:41.678 [2024-10-11 12:05:44.242510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.678 [2024-10-11 12:05:44.242526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.678 [2024-10-11 12:05:44.249852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e23b8 00:28:41.678 [2024-10-11 12:05:44.250945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.678 [2024-10-11 12:05:44.250962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.678 [2024-10-11 12:05:44.258310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166ef270 00:28:41.678 [2024-10-11 12:05:44.259392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.678 [2024-10-11 12:05:44.259409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.678 [2024-10-11 12:05:44.266790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166f3a28 00:28:41.678 [2024-10-11 12:05:44.267832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.678 [2024-10-11 12:05:44.267848] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.678 [2024-10-11 12:05:44.275251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e27f0 00:28:41.678 [2024-10-11 12:05:44.276329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.678 [2024-10-11 12:05:44.276346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.678 [2024-10-11 12:05:44.283704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166f7538 00:28:41.678 [2024-10-11 12:05:44.284796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:12727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.678 [2024-10-11 12:05:44.284813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.678 [2024-10-11 12:05:44.292142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e3d08 00:28:41.678 [2024-10-11 12:05:44.293235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:6915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.678 [2024-10-11 12:05:44.293252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.678 [2024-10-11 12:05:44.300616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166ee5c8 00:28:41.678 [2024-10-11 12:05:44.301694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:16030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.678 [2024-10-11 12:05:44.301711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.678 [2024-10-11 12:05:44.309085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166ecc78 00:28:41.678 [2024-10-11 12:05:44.310038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:21757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.678 [2024-10-11 12:05:44.310055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.678 [2024-10-11 12:05:44.317547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e6b70 00:28:41.678 [2024-10-11 12:05:44.318642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.678 [2024-10-11 12:05:44.318658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.678 [2024-10-11 12:05:44.325999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166eea00 00:28:41.678 [2024-10-11 12:05:44.327099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.678 [2024-10-11 
12:05:44.327115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.678 [2024-10-11 12:05:44.334453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166de8a8 00:28:41.678 [2024-10-11 12:05:44.335555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.678 [2024-10-11 12:05:44.335572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.678 [2024-10-11 12:05:44.342915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e3060 00:28:41.678 [2024-10-11 12:05:44.344001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.678 [2024-10-11 12:05:44.344018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.678 [2024-10-11 12:05:44.351381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e95a0 00:28:41.678 [2024-10-11 12:05:44.352475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.678 [2024-10-11 12:05:44.352492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.678 [2024-10-11 12:05:44.359849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166ee190 00:28:41.678 [2024-10-11 12:05:44.360898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.678 [2024-10-11 12:05:44.360915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.678 [2024-10-11 12:05:44.368308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e5658 00:28:41.679 [2024-10-11 12:05:44.369407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.679 [2024-10-11 12:05:44.369424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.679 [2024-10-11 12:05:44.376758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166ec408 00:28:41.679 [2024-10-11 12:05:44.377848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.679 [2024-10-11 12:05:44.377865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.941 [2024-10-11 12:05:44.385277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166f9b30 00:28:41.941 [2024-10-11 12:05:44.386331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.941 
[2024-10-11 12:05:44.386348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.941 [2024-10-11 12:05:44.393718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e23b8 00:28:41.941 [2024-10-11 12:05:44.394761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.941 [2024-10-11 12:05:44.394777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.941 [2024-10-11 12:05:44.402185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166ef270 00:28:41.941 [2024-10-11 12:05:44.403264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.941 [2024-10-11 12:05:44.403280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.941 [2024-10-11 12:05:44.410636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166f3a28 00:28:41.941 [2024-10-11 12:05:44.411720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.941 [2024-10-11 12:05:44.411737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.941 [2024-10-11 12:05:44.419091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e27f0 00:28:41.941 [2024-10-11 12:05:44.420179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.941 [2024-10-11 12:05:44.420199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.941 [2024-10-11 12:05:44.427543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166f7538 00:28:41.941 [2024-10-11 12:05:44.428625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.941 [2024-10-11 12:05:44.428642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.941 [2024-10-11 12:05:44.435997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e3d08 00:28:41.941 [2024-10-11 12:05:44.437103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.941 [2024-10-11 12:05:44.437119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.941 [2024-10-11 12:05:44.444463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166ee5c8 00:28:41.941 [2024-10-11 12:05:44.445548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:9442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:41.941 [2024-10-11 12:05:44.445565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.941 [2024-10-11 12:05:44.452923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166ecc78 00:28:41.941 [2024-10-11 12:05:44.454014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:13601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.941 [2024-10-11 12:05:44.454031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.941 [2024-10-11 12:05:44.461377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e6b70 00:28:41.941 [2024-10-11 12:05:44.462459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:15900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.941 [2024-10-11 12:05:44.462476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.941 [2024-10-11 12:05:44.469816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166eea00 00:28:41.941 [2024-10-11 12:05:44.470910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:24248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.941 [2024-10-11 12:05:44.470927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.941 [2024-10-11 12:05:44.478294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166de8a8 00:28:41.941 [2024-10-11 12:05:44.479366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.941 [2024-10-11 12:05:44.479383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.941 [2024-10-11 12:05:44.486748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e3060 00:28:41.941 [2024-10-11 12:05:44.487849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.941 [2024-10-11 12:05:44.487866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.941 [2024-10-11 12:05:44.495222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e95a0 00:28:41.941 [2024-10-11 12:05:44.496273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.941 [2024-10-11 12:05:44.496290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.942 [2024-10-11 12:05:44.503685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166ee190 00:28:41.942 [2024-10-11 12:05:44.504738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16394 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:41.942 [2024-10-11 12:05:44.504754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.942 [2024-10-11 12:05:44.512126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e5658 00:28:41.942 [2024-10-11 12:05:44.513218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.942 [2024-10-11 12:05:44.513235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.942 [2024-10-11 12:05:44.520574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166ec408 00:28:41.942 [2024-10-11 12:05:44.521670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.942 [2024-10-11 12:05:44.521686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.942 [2024-10-11 12:05:44.529024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166f9b30 00:28:41.942 [2024-10-11 12:05:44.530124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.942 [2024-10-11 12:05:44.530140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.942 [2024-10-11 12:05:44.537490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e23b8 00:28:41.942 [2024-10-11 12:05:44.538581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.942 [2024-10-11 12:05:44.538597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.942 [2024-10-11 12:05:44.545952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166ef270 00:28:41.942 [2024-10-11 12:05:44.547023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.942 [2024-10-11 12:05:44.547040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.942 [2024-10-11 12:05:44.554453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166f3a28 00:28:41.942 [2024-10-11 12:05:44.555554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.942 [2024-10-11 12:05:44.555570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.942 [2024-10-11 12:05:44.562906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e27f0 00:28:41.942 [2024-10-11 12:05:44.564003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3134 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:28:41.942 [2024-10-11 12:05:44.564019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.942 [2024-10-11 12:05:44.571366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166f7538 00:28:41.942 [2024-10-11 12:05:44.572444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.942 [2024-10-11 12:05:44.572460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.942 [2024-10-11 12:05:44.579840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e3d08 00:28:41.942 [2024-10-11 12:05:44.580965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:19718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.942 [2024-10-11 12:05:44.580981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.942 [2024-10-11 12:05:44.588315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166ee5c8 00:28:41.942 [2024-10-11 12:05:44.589417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:7741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.942 [2024-10-11 12:05:44.589434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.942 [2024-10-11 12:05:44.596782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166ecc78 00:28:41.942 [2024-10-11 12:05:44.597886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:18598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.942 [2024-10-11 12:05:44.597902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.942 [2024-10-11 12:05:44.605228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e6b70 00:28:41.942 [2024-10-11 12:05:44.606284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.942 [2024-10-11 12:05:44.606300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.942 [2024-10-11 12:05:44.613674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166eea00 00:28:41.942 [2024-10-11 12:05:44.614779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.942 [2024-10-11 12:05:44.614795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.942 [2024-10-11 12:05:44.622118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166de8a8 00:28:41.942 [2024-10-11 12:05:44.623200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23675 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.942 [2024-10-11 12:05:44.623216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.942 [2024-10-11 12:05:44.630574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e3060 00:28:41.942 [2024-10-11 12:05:44.631659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.942 [2024-10-11 12:05:44.631676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.942 [2024-10-11 12:05:44.639046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e95a0 00:28:41.942 [2024-10-11 12:05:44.640127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.942 [2024-10-11 12:05:44.640146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.203 [2024-10-11 12:05:44.647510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166ee190 00:28:42.203 [2024-10-11 12:05:44.648612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.203 [2024-10-11 12:05:44.648629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.203 [2024-10-11 12:05:44.655956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166e5658 00:28:42.203 [2024-10-11 12:05:44.657043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:20226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.203 [2024-10-11 12:05:44.657060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.203 [2024-10-11 12:05:44.664410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166ec408 00:28:42.203 [2024-10-11 12:05:44.665491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.203 [2024-10-11 12:05:44.665508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.203 [2024-10-11 12:05:44.672856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee3ec0) with pdu=0x2000166f9b30 00:28:42.203 [2024-10-11 12:05:44.673955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.203 [2024-10-11 12:05:44.673972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.203 29993.00 IOPS, 117.16 MiB/s 00:28:42.203 Latency(us) 00:28:42.203 [2024-10-11T10:05:44.906Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:42.203 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:28:42.203 nvme0n1 : 2.01 30010.47 117.23 0.00 0.00 4259.79 2116.27 16602.45 00:28:42.203 [2024-10-11T10:05:44.906Z] =================================================================================================================== 00:28:42.203 [2024-10-11T10:05:44.906Z] Total : 30010.47 117.23 0.00 0.00 4259.79 2116.27 16602.45 00:28:42.203 { 00:28:42.203 "results": [ 00:28:42.203 { 00:28:42.203 "job": "nvme0n1", 00:28:42.203 "core_mask": "0x2", 00:28:42.203 "workload": "randwrite", 00:28:42.203 "status": "finished", 00:28:42.203 "queue_depth": 128, 00:28:42.203 "io_size": 4096, 00:28:42.203 "runtime": 2.0053, 00:28:42.203 "iops": 30010.472248541366, 00:28:42.203 "mibps": 117.22840722086471, 00:28:42.203 "io_failed": 0, 00:28:42.203 "io_timeout": 0, 00:28:42.203 "avg_latency_us": 4259.793150326797, 00:28:42.203 "min_latency_us": 2116.266666666667, 00:28:42.203 "max_latency_us": 16602.453333333335 00:28:42.203 } 00:28:42.203 ], 00:28:42.203 "core_count": 1 00:28:42.203 } 00:28:42.203 12:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:42.203 12:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:42.203 12:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:42.203 12:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:42.203 | .driver_specific 00:28:42.203 | .nvme_error 00:28:42.203 | .status_code 00:28:42.203 | .command_transient_transport_error' 00:28:42.203 12:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 235 > 0 )) 00:28:42.203 12:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2102477 00:28:42.204 12:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2102477 ']' 00:28:42.204 12:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2102477 00:28:42.204 12:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:28:42.204 12:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:42.204 12:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2102477 00:28:42.464 12:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:42.464 12:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:42.464 12:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2102477' 00:28:42.464 killing process with pid 2102477 00:28:42.464 12:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2102477 00:28:42.464 Received shutdown signal, test time was about 2.000000 seconds 00:28:42.464 00:28:42.465 Latency(us) 00:28:42.465 [2024-10-11T10:05:45.168Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:42.465 [2024-10-11T10:05:45.168Z] =================================================================================================================== 
00:28:42.465 [2024-10-11T10:05:45.168Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:42.465 12:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2102477 00:28:42.465 12:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:28:42.465 12:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:42.465 12:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:42.465 12:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:42.465 12:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:42.465 12:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2103001 00:28:42.465 12:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2103001 /var/tmp/bperf.sock 00:28:42.465 12:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2103001 ']' 00:28:42.465 12:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:42.465 12:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:28:42.465 12:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:42.465 12:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:42.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:42.465 12:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:42.465 12:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:42.465 [2024-10-11 12:05:45.105642] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:28:42.465 [2024-10-11 12:05:45.105707] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2103001 ] 00:28:42.465 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:42.465 Zero copy mechanism will not be used. 
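A minimal sketch, not part of the captured output, of the verification step traced just above (the host/digest.sh get_transient_errcount helper): it reads bdev iostat over the bperf RPC socket and pulls the transient transport error counter with the same jq filter shown in the trace, then checks that the corrupt-CRC run produced a non-zero count. The rpc.py path, socket, bdev name and jq path are copied from the trace; only the shell variable names are invented for readability.

# Illustration only -- mirrors the get_transient_errcount step traced above.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
BPERF_SOCK=/var/tmp/bperf.sock

errcount=$("$RPC" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 |
  jq -r '.bdevs[0]
         | .driver_specific
         | .nvme_error
         | .status_code
         | .command_transient_transport_error')

# The test only passes when injected digest errors were actually counted
# (compare the "(( 235 > 0 ))" check in the trace above).
(( errcount > 0 )) && echo "observed $errcount transient transport errors"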
00:28:42.726 [2024-10-11 12:05:45.184974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:42.726 [2024-10-11 12:05:45.214777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:43.298 12:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:43.298 12:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:28:43.298 12:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:43.298 12:05:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:43.557 12:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:43.557 12:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.557 12:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:43.557 12:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.557 12:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:43.557 12:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:43.818 nvme0n1 00:28:43.818 12:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:43.818 12:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.818 12:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:43.818 12:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.818 12:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:43.818 12:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:44.079 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:44.079 Zero copy mechanism will not be used. 00:28:44.079 Running I/O for 2 seconds... 
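For readers following the flow rather than the raw xtrace, the sketch below condenses the setup sequence just traced for the 131072-byte, qd=16 randwrite pass. Every RPC name and flag is copied verbatim from the trace; the assumptions are the shell variable names and that rpc_cmd (used for the error-injection calls) addresses the nvmf target application's default RPC socket rather than the bperf socket.

# Illustration only -- condensed from the host/digest.sh trace above.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bperf.sock

# Launch bdevperf idle (-z) on its own RPC socket; the harness then waits
# for it to start listening on $BPERF_SOCK.
"$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randwrite -o 131072 -t 2 -q 16 -z &

# Keep per-controller NVMe error statistics and retry I/O indefinitely.
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Error injection is disabled while attaching with TCP data digest enabled,
# then re-armed to corrupt crc32c results (the -o/-t/-i arguments are exactly
# those from the trace; no -s means rpc.py's default target socket).
"$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst \
  -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
"$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32

# Run the 2-second workload; the injected digest errors then surface as the
# COMMAND TRANSIENT TRANSPORT ERROR completions logged below.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests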
00:28:44.079 [2024-10-11 12:05:46.569213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.079 [2024-10-11 12:05:46.569426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.079 [2024-10-11 12:05:46.569454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.079 [2024-10-11 12:05:46.579691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.079 [2024-10-11 12:05:46.579886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.079 [2024-10-11 12:05:46.579905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.079 [2024-10-11 12:05:46.584148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.079 [2024-10-11 12:05:46.584345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.079 [2024-10-11 12:05:46.584365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.079 [2024-10-11 12:05:46.587934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.079 [2024-10-11 12:05:46.588139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.079 [2024-10-11 12:05:46.588157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.079 [2024-10-11 12:05:46.594276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.079 [2024-10-11 12:05:46.594575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.079 [2024-10-11 12:05:46.594594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.079 [2024-10-11 12:05:46.598263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.079 [2024-10-11 12:05:46.598455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.079 [2024-10-11 12:05:46.598472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.079 [2024-10-11 12:05:46.601852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.079 [2024-10-11 12:05:46.602046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.079 [2024-10-11 12:05:46.602068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.079 [2024-10-11 12:05:46.605545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.079 [2024-10-11 12:05:46.605735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.079 [2024-10-11 12:05:46.605752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.079 [2024-10-11 12:05:46.609337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.079 [2024-10-11 12:05:46.609527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.079 [2024-10-11 12:05:46.609545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.079 [2024-10-11 12:05:46.613132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.079 [2024-10-11 12:05:46.613324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.079 [2024-10-11 12:05:46.613343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.079 [2024-10-11 12:05:46.616975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.079 [2024-10-11 12:05:46.617184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.079 [2024-10-11 12:05:46.617203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.079 [2024-10-11 12:05:46.620838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.079 [2024-10-11 12:05:46.621042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.079 [2024-10-11 12:05:46.621068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.079 [2024-10-11 12:05:46.624377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.079 [2024-10-11 12:05:46.624570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.079 [2024-10-11 12:05:46.624587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.079 [2024-10-11 12:05:46.630052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.079 [2024-10-11 12:05:46.630275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.079 [2024-10-11 12:05:46.630294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.079 [2024-10-11 12:05:46.639602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.079 [2024-10-11 12:05:46.639935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.079 [2024-10-11 12:05:46.639953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.079 [2024-10-11 12:05:46.645985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.079 [2024-10-11 12:05:46.646184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.079 [2024-10-11 12:05:46.646202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.079 [2024-10-11 12:05:46.654339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.079 [2024-10-11 12:05:46.654561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.079 [2024-10-11 12:05:46.654580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.079 [2024-10-11 12:05:46.663534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.079 [2024-10-11 12:05:46.663814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.079 [2024-10-11 12:05:46.663833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.079 [2024-10-11 12:05:46.672692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.079 [2024-10-11 12:05:46.672897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.079 [2024-10-11 12:05:46.672915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.079 [2024-10-11 12:05:46.678114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.079 [2024-10-11 12:05:46.678307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.079 [2024-10-11 12:05:46.678326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.079 [2024-10-11 12:05:46.686087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.079 [2024-10-11 12:05:46.686388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.079 [2024-10-11 12:05:46.686409] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.079 [2024-10-11 12:05:46.693626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.079 [2024-10-11 12:05:46.693819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.079 [2024-10-11 12:05:46.693837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.079 [2024-10-11 12:05:46.702602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.079 [2024-10-11 12:05:46.702903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.079 [2024-10-11 12:05:46.702921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.079 [2024-10-11 12:05:46.711763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.080 [2024-10-11 12:05:46.712060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.080 [2024-10-11 12:05:46.712082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.080 [2024-10-11 12:05:46.716306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.080 [2024-10-11 12:05:46.716499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.080 [2024-10-11 12:05:46.716517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.080 [2024-10-11 12:05:46.724043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.080 [2024-10-11 12:05:46.724296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.080 [2024-10-11 12:05:46.724313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.080 [2024-10-11 12:05:46.734620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.080 [2024-10-11 12:05:46.734918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.080 [2024-10-11 12:05:46.734937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.080 [2024-10-11 12:05:46.743657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.080 [2024-10-11 12:05:46.743849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.080 
[2024-10-11 12:05:46.743867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.080 [2024-10-11 12:05:46.751253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.080 [2024-10-11 12:05:46.751539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.080 [2024-10-11 12:05:46.751557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.080 [2024-10-11 12:05:46.760219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.080 [2024-10-11 12:05:46.760422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.080 [2024-10-11 12:05:46.760440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.080 [2024-10-11 12:05:46.768078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.080 [2024-10-11 12:05:46.768273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.080 [2024-10-11 12:05:46.768291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.080 [2024-10-11 12:05:46.775073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.080 [2024-10-11 12:05:46.775366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.080 [2024-10-11 12:05:46.775385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.342 [2024-10-11 12:05:46.782831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.342 [2024-10-11 12:05:46.783025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.342 [2024-10-11 12:05:46.783043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.342 [2024-10-11 12:05:46.790960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.342 [2024-10-11 12:05:46.791243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.342 [2024-10-11 12:05:46.791268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.342 [2024-10-11 12:05:46.799379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.342 [2024-10-11 12:05:46.799706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.342 [2024-10-11 12:05:46.799724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.342 [2024-10-11 12:05:46.806159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.342 [2024-10-11 12:05:46.806350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.342 [2024-10-11 12:05:46.806369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.342 [2024-10-11 12:05:46.814052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.342 [2024-10-11 12:05:46.814352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.342 [2024-10-11 12:05:46.814370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.342 [2024-10-11 12:05:46.822767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.342 [2024-10-11 12:05:46.822972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.342 [2024-10-11 12:05:46.822993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.342 [2024-10-11 12:05:46.830257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.342 [2024-10-11 12:05:46.830447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.342 [2024-10-11 12:05:46.830465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.342 [2024-10-11 12:05:46.837263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.342 [2024-10-11 12:05:46.837458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.342 [2024-10-11 12:05:46.837477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.343 [2024-10-11 12:05:46.846145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.343 [2024-10-11 12:05:46.846498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.343 [2024-10-11 12:05:46.846516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.343 [2024-10-11 12:05:46.855258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.343 [2024-10-11 12:05:46.855584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.343 [2024-10-11 12:05:46.855603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.343 [2024-10-11 12:05:46.861067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.343 [2024-10-11 12:05:46.861261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.343 [2024-10-11 12:05:46.861279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.343 [2024-10-11 12:05:46.866872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.343 [2024-10-11 12:05:46.867069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.343 [2024-10-11 12:05:46.867087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.343 [2024-10-11 12:05:46.876025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.343 [2024-10-11 12:05:46.876226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.343 [2024-10-11 12:05:46.876244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.343 [2024-10-11 12:05:46.885604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.343 [2024-10-11 12:05:46.885651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.343 [2024-10-11 12:05:46.885666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.343 [2024-10-11 12:05:46.893507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.343 [2024-10-11 12:05:46.893846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.343 [2024-10-11 12:05:46.893864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.343 [2024-10-11 12:05:46.902793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.343 [2024-10-11 12:05:46.903005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.343 [2024-10-11 12:05:46.903024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.343 [2024-10-11 12:05:46.911297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.343 [2024-10-11 12:05:46.911621] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.343 [2024-10-11 12:05:46.911639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.343 [2024-10-11 12:05:46.918539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.343 [2024-10-11 12:05:46.918733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.343 [2024-10-11 12:05:46.918751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.343 [2024-10-11 12:05:46.929047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.343 [2024-10-11 12:05:46.929396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.343 [2024-10-11 12:05:46.929415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.343 [2024-10-11 12:05:46.939109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.343 [2024-10-11 12:05:46.939465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.343 [2024-10-11 12:05:46.939484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.343 [2024-10-11 12:05:46.948745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.343 [2024-10-11 12:05:46.949038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.343 [2024-10-11 12:05:46.949056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.343 [2024-10-11 12:05:46.957397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.343 [2024-10-11 12:05:46.957589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.343 [2024-10-11 12:05:46.957607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.343 [2024-10-11 12:05:46.964221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.343 [2024-10-11 12:05:46.964598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.343 [2024-10-11 12:05:46.964617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.343 [2024-10-11 12:05:46.974520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.343 
[2024-10-11 12:05:46.974753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.343 [2024-10-11 12:05:46.974772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.343 [2024-10-11 12:05:46.985969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.343 [2024-10-11 12:05:46.986355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.343 [2024-10-11 12:05:46.986373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.343 [2024-10-11 12:05:46.997953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.343 [2024-10-11 12:05:46.998177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.343 [2024-10-11 12:05:46.998195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.343 [2024-10-11 12:05:47.009122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.343 [2024-10-11 12:05:47.009379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.343 [2024-10-11 12:05:47.009396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.343 [2024-10-11 12:05:47.021356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.343 [2024-10-11 12:05:47.021583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.343 [2024-10-11 12:05:47.021602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.343 [2024-10-11 12:05:47.032616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.343 [2024-10-11 12:05:47.033007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.343 [2024-10-11 12:05:47.033025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.343 [2024-10-11 12:05:47.044046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.343 [2024-10-11 12:05:47.044424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.343 [2024-10-11 12:05:47.044442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.606 [2024-10-11 12:05:47.056008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.606 [2024-10-11 12:05:47.056312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.606 [2024-10-11 12:05:47.056333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.606 [2024-10-11 12:05:47.067655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.606 [2024-10-11 12:05:47.067931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.606 [2024-10-11 12:05:47.067951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.606 [2024-10-11 12:05:47.079583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.606 [2024-10-11 12:05:47.079855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.606 [2024-10-11 12:05:47.079873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.606 [2024-10-11 12:05:47.091123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.606 [2024-10-11 12:05:47.091430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.606 [2024-10-11 12:05:47.091448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.606 [2024-10-11 12:05:47.102420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.606 [2024-10-11 12:05:47.102671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.606 [2024-10-11 12:05:47.102688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.606 [2024-10-11 12:05:47.114125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.606 [2024-10-11 12:05:47.114361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.606 [2024-10-11 12:05:47.114379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.606 [2024-10-11 12:05:47.125562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.606 [2024-10-11 12:05:47.125782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.606 [2024-10-11 12:05:47.125800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.606 [2024-10-11 12:05:47.137195] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.606 [2024-10-11 12:05:47.137410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.606 [2024-10-11 12:05:47.137428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.606 [2024-10-11 12:05:47.148143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.606 [2024-10-11 12:05:47.148455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.606 [2024-10-11 12:05:47.148473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.606 [2024-10-11 12:05:47.159870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.606 [2024-10-11 12:05:47.160155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.606 [2024-10-11 12:05:47.160173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.606 [2024-10-11 12:05:47.171158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.606 [2024-10-11 12:05:47.171394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.606 [2024-10-11 12:05:47.171411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.606 [2024-10-11 12:05:47.182327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.606 [2024-10-11 12:05:47.182566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.606 [2024-10-11 12:05:47.182584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.606 [2024-10-11 12:05:47.194246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.606 [2024-10-11 12:05:47.194616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.606 [2024-10-11 12:05:47.194634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.606 [2024-10-11 12:05:47.205727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.606 [2024-10-11 12:05:47.206025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.606 [2024-10-11 12:05:47.206043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:28:44.606 [2024-10-11 12:05:47.217559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.606 [2024-10-11 12:05:47.217788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.606 [2024-10-11 12:05:47.217805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.606 [2024-10-11 12:05:47.228903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.606 [2024-10-11 12:05:47.229109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.606 [2024-10-11 12:05:47.229127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.606 [2024-10-11 12:05:47.240299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.606 [2024-10-11 12:05:47.240537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.606 [2024-10-11 12:05:47.240556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.606 [2024-10-11 12:05:47.251572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.606 [2024-10-11 12:05:47.251784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.606 [2024-10-11 12:05:47.251802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.606 [2024-10-11 12:05:47.261312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.606 [2024-10-11 12:05:47.261514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.606 [2024-10-11 12:05:47.261532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.606 [2024-10-11 12:05:47.268685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.606 [2024-10-11 12:05:47.268878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.606 [2024-10-11 12:05:47.268896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.606 [2024-10-11 12:05:47.272736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.606 [2024-10-11 12:05:47.272926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.606 [2024-10-11 12:05:47.272945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.606 [2024-10-11 12:05:47.276767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.607 [2024-10-11 12:05:47.276969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.607 [2024-10-11 12:05:47.276987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.607 [2024-10-11 12:05:47.280907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.607 [2024-10-11 12:05:47.281093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.607 [2024-10-11 12:05:47.281111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.607 [2024-10-11 12:05:47.284415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.607 [2024-10-11 12:05:47.284594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.607 [2024-10-11 12:05:47.284612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.607 [2024-10-11 12:05:47.287867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.607 [2024-10-11 12:05:47.288046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.607 [2024-10-11 12:05:47.288070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.607 [2024-10-11 12:05:47.291269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.607 [2024-10-11 12:05:47.291451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.607 [2024-10-11 12:05:47.291469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.607 [2024-10-11 12:05:47.294806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.607 [2024-10-11 12:05:47.294985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.607 [2024-10-11 12:05:47.295003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.607 [2024-10-11 12:05:47.300524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.607 [2024-10-11 12:05:47.300704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.607 [2024-10-11 12:05:47.300725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.607 [2024-10-11 12:05:47.306120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.607 [2024-10-11 12:05:47.306302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.607 [2024-10-11 12:05:47.306320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.868 [2024-10-11 12:05:47.312158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.868 [2024-10-11 12:05:47.312339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.868 [2024-10-11 12:05:47.312358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.868 [2024-10-11 12:05:47.320550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.868 [2024-10-11 12:05:47.320818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.868 [2024-10-11 12:05:47.320836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.868 [2024-10-11 12:05:47.329011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.868 [2024-10-11 12:05:47.329193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.868 [2024-10-11 12:05:47.329210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.868 [2024-10-11 12:05:47.336965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.868 [2024-10-11 12:05:47.337026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.868 [2024-10-11 12:05:47.337042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.868 [2024-10-11 12:05:47.346371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.868 [2024-10-11 12:05:47.346432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.868 [2024-10-11 12:05:47.346449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.868 [2024-10-11 12:05:47.354715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.868 [2024-10-11 12:05:47.354987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.868 [2024-10-11 12:05:47.355004] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.868 [2024-10-11 12:05:47.362499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.868 [2024-10-11 12:05:47.362549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.868 [2024-10-11 12:05:47.362566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.868 [2024-10-11 12:05:47.370242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.868 [2024-10-11 12:05:47.370314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.868 [2024-10-11 12:05:47.370331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.868 [2024-10-11 12:05:47.379172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.868 [2024-10-11 12:05:47.379238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.868 [2024-10-11 12:05:47.379254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.868 [2024-10-11 12:05:47.384733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.868 [2024-10-11 12:05:47.384790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.868 [2024-10-11 12:05:47.384806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.868 [2024-10-11 12:05:47.393863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.868 [2024-10-11 12:05:47.393913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.868 [2024-10-11 12:05:47.393929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.868 [2024-10-11 12:05:47.402483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.868 [2024-10-11 12:05:47.402768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.868 [2024-10-11 12:05:47.402785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.868 [2024-10-11 12:05:47.408205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.868 [2024-10-11 12:05:47.408291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.868 
[2024-10-11 12:05:47.408308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.868 [2024-10-11 12:05:47.417823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.868 [2024-10-11 12:05:47.417886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.868 [2024-10-11 12:05:47.417902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.868 [2024-10-11 12:05:47.424176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.868 [2024-10-11 12:05:47.424225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.868 [2024-10-11 12:05:47.424241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.868 [2024-10-11 12:05:47.431615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.868 [2024-10-11 12:05:47.431915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.869 [2024-10-11 12:05:47.431932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.869 [2024-10-11 12:05:47.441499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.869 [2024-10-11 12:05:47.441550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.869 [2024-10-11 12:05:47.441567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.869 [2024-10-11 12:05:47.449620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.869 [2024-10-11 12:05:47.449684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.869 [2024-10-11 12:05:47.449700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.869 [2024-10-11 12:05:47.459405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.869 [2024-10-11 12:05:47.459604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.869 [2024-10-11 12:05:47.459621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.869 [2024-10-11 12:05:47.469028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.869 [2024-10-11 12:05:47.469081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:44.869 [2024-10-11 12:05:47.469097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.869 [2024-10-11 12:05:47.478652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.869 [2024-10-11 12:05:47.478714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.869 [2024-10-11 12:05:47.478730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.869 [2024-10-11 12:05:47.486494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.869 [2024-10-11 12:05:47.486556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.869 [2024-10-11 12:05:47.486573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.869 [2024-10-11 12:05:47.495071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.869 [2024-10-11 12:05:47.495214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.869 [2024-10-11 12:05:47.495230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.869 [2024-10-11 12:05:47.503506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.869 [2024-10-11 12:05:47.503793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.869 [2024-10-11 12:05:47.503810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.869 [2024-10-11 12:05:47.510654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.869 [2024-10-11 12:05:47.510713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.869 [2024-10-11 12:05:47.510731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.869 [2024-10-11 12:05:47.516853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.869 [2024-10-11 12:05:47.517079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.869 [2024-10-11 12:05:47.517096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.869 [2024-10-11 12:05:47.527473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.869 [2024-10-11 12:05:47.527699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.869 [2024-10-11 12:05:47.527716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.869 [2024-10-11 12:05:47.537184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.869 [2024-10-11 12:05:47.537469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.869 [2024-10-11 12:05:47.537486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.869 [2024-10-11 12:05:47.544336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.869 [2024-10-11 12:05:47.544416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.869 [2024-10-11 12:05:47.544433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.869 [2024-10-11 12:05:47.553462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.869 [2024-10-11 12:05:47.553527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.869 [2024-10-11 12:05:47.553544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.869 [2024-10-11 12:05:47.558144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.869 [2024-10-11 12:05:47.558191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.869 [2024-10-11 12:05:47.558207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.869 3784.00 IOPS, 473.00 MiB/s [2024-10-11T10:05:47.572Z] [2024-10-11 12:05:47.566638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:44.869 [2024-10-11 12:05:47.566787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.869 [2024-10-11 12:05:47.566804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.131 [2024-10-11 12:05:47.577182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.131 [2024-10-11 12:05:47.577271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.131 [2024-10-11 12:05:47.577288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.131 [2024-10-11 12:05:47.583076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.131 
[2024-10-11 12:05:47.583124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.131 [2024-10-11 12:05:47.583140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.131 [2024-10-11 12:05:47.591902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.131 [2024-10-11 12:05:47.591956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.131 [2024-10-11 12:05:47.591973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.131 [2024-10-11 12:05:47.599792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.131 [2024-10-11 12:05:47.599837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.131 [2024-10-11 12:05:47.599853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.131 [2024-10-11 12:05:47.606849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.131 [2024-10-11 12:05:47.607313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.131 [2024-10-11 12:05:47.607331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.131 [2024-10-11 12:05:47.614020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.131 [2024-10-11 12:05:47.614083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.131 [2024-10-11 12:05:47.614099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.131 [2024-10-11 12:05:47.620314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.131 [2024-10-11 12:05:47.620379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.131 [2024-10-11 12:05:47.620395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.131 [2024-10-11 12:05:47.627158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.131 [2024-10-11 12:05:47.627399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.131 [2024-10-11 12:05:47.627416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.131 [2024-10-11 12:05:47.632757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.131 [2024-10-11 12:05:47.632816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.131 [2024-10-11 12:05:47.632833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.131 [2024-10-11 12:05:47.638803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.131 [2024-10-11 12:05:47.638865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.131 [2024-10-11 12:05:47.638883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.131 [2024-10-11 12:05:47.647082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.131 [2024-10-11 12:05:47.647282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.131 [2024-10-11 12:05:47.647298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.131 [2024-10-11 12:05:47.653282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.131 [2024-10-11 12:05:47.653327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.131 [2024-10-11 12:05:47.653342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.131 [2024-10-11 12:05:47.660529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.131 [2024-10-11 12:05:47.660579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.131 [2024-10-11 12:05:47.660594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.131 [2024-10-11 12:05:47.669521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.131 [2024-10-11 12:05:47.669570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.131 [2024-10-11 12:05:47.669586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.131 [2024-10-11 12:05:47.679000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.131 [2024-10-11 12:05:47.679257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.131 [2024-10-11 12:05:47.679273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.131 [2024-10-11 12:05:47.688307] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.131 [2024-10-11 12:05:47.688696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.131 [2024-10-11 12:05:47.688713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.131 [2024-10-11 12:05:47.696942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.131 [2024-10-11 12:05:47.697007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.131 [2024-10-11 12:05:47.697023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.131 [2024-10-11 12:05:47.704382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.131 [2024-10-11 12:05:47.704428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.131 [2024-10-11 12:05:47.704445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.132 [2024-10-11 12:05:47.712164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.132 [2024-10-11 12:05:47.712217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.132 [2024-10-11 12:05:47.712233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.132 [2024-10-11 12:05:47.721246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.132 [2024-10-11 12:05:47.721290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.132 [2024-10-11 12:05:47.721306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.132 [2024-10-11 12:05:47.727314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.132 [2024-10-11 12:05:47.727364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.132 [2024-10-11 12:05:47.727380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.132 [2024-10-11 12:05:47.735069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.132 [2024-10-11 12:05:47.735126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.132 [2024-10-11 12:05:47.735142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:28:45.132 [2024-10-11 12:05:47.743649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.132 [2024-10-11 12:05:47.743706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.132 [2024-10-11 12:05:47.743722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.132 [2024-10-11 12:05:47.750135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.132 [2024-10-11 12:05:47.750396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.132 [2024-10-11 12:05:47.750413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.132 [2024-10-11 12:05:47.759667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.132 [2024-10-11 12:05:47.759931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.132 [2024-10-11 12:05:47.759949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.132 [2024-10-11 12:05:47.769327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.132 [2024-10-11 12:05:47.769393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.132 [2024-10-11 12:05:47.769409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.132 [2024-10-11 12:05:47.778167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.132 [2024-10-11 12:05:47.778236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.132 [2024-10-11 12:05:47.778252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.132 [2024-10-11 12:05:47.789109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.132 [2024-10-11 12:05:47.789165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.132 [2024-10-11 12:05:47.789181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.132 [2024-10-11 12:05:47.799569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.132 [2024-10-11 12:05:47.799618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.132 [2024-10-11 12:05:47.799634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.132 [2024-10-11 12:05:47.804784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.132 [2024-10-11 12:05:47.804827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.132 [2024-10-11 12:05:47.804844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.132 [2024-10-11 12:05:47.811272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.132 [2024-10-11 12:05:47.811331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.132 [2024-10-11 12:05:47.811347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.132 [2024-10-11 12:05:47.820159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.132 [2024-10-11 12:05:47.820252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.132 [2024-10-11 12:05:47.820270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.132 [2024-10-11 12:05:47.829183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.132 [2024-10-11 12:05:47.829230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.132 [2024-10-11 12:05:47.829245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.394 [2024-10-11 12:05:47.835740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.394 [2024-10-11 12:05:47.835795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.394 [2024-10-11 12:05:47.835811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.394 [2024-10-11 12:05:47.842578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.394 [2024-10-11 12:05:47.842630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.394 [2024-10-11 12:05:47.842645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.394 [2024-10-11 12:05:47.850244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.394 [2024-10-11 12:05:47.850295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.394 [2024-10-11 12:05:47.850313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.394 [2024-10-11 12:05:47.859256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.394 [2024-10-11 12:05:47.859503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.394 [2024-10-11 12:05:47.859520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.394 [2024-10-11 12:05:47.867951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.394 [2024-10-11 12:05:47.868219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.394 [2024-10-11 12:05:47.868237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.394 [2024-10-11 12:05:47.876133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.394 [2024-10-11 12:05:47.876188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.394 [2024-10-11 12:05:47.876204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.394 [2024-10-11 12:05:47.883407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.394 [2024-10-11 12:05:47.883454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.394 [2024-10-11 12:05:47.883470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.394 [2024-10-11 12:05:47.892174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.394 [2024-10-11 12:05:47.892255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.394 [2024-10-11 12:05:47.892272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.394 [2024-10-11 12:05:47.901191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.394 [2024-10-11 12:05:47.901261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.394 [2024-10-11 12:05:47.901277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.394 [2024-10-11 12:05:47.909878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.395 [2024-10-11 12:05:47.909930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.395 [2024-10-11 12:05:47.909946] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.395 [2024-10-11 12:05:47.918088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.395 [2024-10-11 12:05:47.918146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.395 [2024-10-11 12:05:47.918162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.395 [2024-10-11 12:05:47.925466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.395 [2024-10-11 12:05:47.925762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.395 [2024-10-11 12:05:47.925779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.395 [2024-10-11 12:05:47.936658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.395 [2024-10-11 12:05:47.936925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.395 [2024-10-11 12:05:47.936942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.395 [2024-10-11 12:05:47.948206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.395 [2024-10-11 12:05:47.948499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.395 [2024-10-11 12:05:47.948516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.395 [2024-10-11 12:05:47.959778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.395 [2024-10-11 12:05:47.960015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.395 [2024-10-11 12:05:47.960032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.395 [2024-10-11 12:05:47.971097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.395 [2024-10-11 12:05:47.971388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.395 [2024-10-11 12:05:47.971404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.395 [2024-10-11 12:05:47.978977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.395 [2024-10-11 12:05:47.979026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.395 
[2024-10-11 12:05:47.979042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.395 [2024-10-11 12:05:47.987349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.395 [2024-10-11 12:05:47.987411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.395 [2024-10-11 12:05:47.987427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.395 [2024-10-11 12:05:47.992873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.395 [2024-10-11 12:05:47.992917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.395 [2024-10-11 12:05:47.992933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.395 [2024-10-11 12:05:47.999863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.395 [2024-10-11 12:05:48.000130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.395 [2024-10-11 12:05:48.000146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.395 [2024-10-11 12:05:48.010157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.395 [2024-10-11 12:05:48.010224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.395 [2024-10-11 12:05:48.010239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.395 [2024-10-11 12:05:48.017261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.395 [2024-10-11 12:05:48.017309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.395 [2024-10-11 12:05:48.017325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.395 [2024-10-11 12:05:48.024476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.395 [2024-10-11 12:05:48.024521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.395 [2024-10-11 12:05:48.024537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.395 [2024-10-11 12:05:48.033306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.395 [2024-10-11 12:05:48.033360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.395 [2024-10-11 12:05:48.033375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.395 [2024-10-11 12:05:48.040638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.395 [2024-10-11 12:05:48.040714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.395 [2024-10-11 12:05:48.040731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.395 [2024-10-11 12:05:48.048413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.395 [2024-10-11 12:05:48.048481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.395 [2024-10-11 12:05:48.048498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.395 [2024-10-11 12:05:48.057040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.395 [2024-10-11 12:05:48.057321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.395 [2024-10-11 12:05:48.057338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.395 [2024-10-11 12:05:48.067889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.395 [2024-10-11 12:05:48.068116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.395 [2024-10-11 12:05:48.068135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.395 [2024-10-11 12:05:48.078758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.395 [2024-10-11 12:05:48.078891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.395 [2024-10-11 12:05:48.078911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.395 [2024-10-11 12:05:48.089340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.395 [2024-10-11 12:05:48.089621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.395 [2024-10-11 12:05:48.089641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.658 [2024-10-11 12:05:48.099167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.659 [2024-10-11 12:05:48.099378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.659 [2024-10-11 12:05:48.099395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.659 [2024-10-11 12:05:48.109073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.659 [2024-10-11 12:05:48.109177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.659 [2024-10-11 12:05:48.109194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.659 [2024-10-11 12:05:48.118551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.659 [2024-10-11 12:05:48.118796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.659 [2024-10-11 12:05:48.118814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.659 [2024-10-11 12:05:48.126565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.659 [2024-10-11 12:05:48.126649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.659 [2024-10-11 12:05:48.126666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.659 [2024-10-11 12:05:48.133742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.659 [2024-10-11 12:05:48.133973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.659 [2024-10-11 12:05:48.133991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.659 [2024-10-11 12:05:48.143883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.659 [2024-10-11 12:05:48.144168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.659 [2024-10-11 12:05:48.144186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.659 [2024-10-11 12:05:48.154519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.659 [2024-10-11 12:05:48.154683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.659 [2024-10-11 12:05:48.154701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.659 [2024-10-11 12:05:48.164792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.659 [2024-10-11 12:05:48.165069] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.659 [2024-10-11 12:05:48.165086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.659 [2024-10-11 12:05:48.175507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.659 [2024-10-11 12:05:48.175695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.659 [2024-10-11 12:05:48.175711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.659 [2024-10-11 12:05:48.185260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.659 [2024-10-11 12:05:48.185554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.659 [2024-10-11 12:05:48.185571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.659 [2024-10-11 12:05:48.195701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.659 [2024-10-11 12:05:48.195987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.659 [2024-10-11 12:05:48.196004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.659 [2024-10-11 12:05:48.206252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.659 [2024-10-11 12:05:48.206533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.659 [2024-10-11 12:05:48.206550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.659 [2024-10-11 12:05:48.217114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.659 [2024-10-11 12:05:48.217346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.659 [2024-10-11 12:05:48.217362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.659 [2024-10-11 12:05:48.227838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.659 [2024-10-11 12:05:48.228094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.659 [2024-10-11 12:05:48.228110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.659 [2024-10-11 12:05:48.238106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.659 
[2024-10-11 12:05:48.238384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.659 [2024-10-11 12:05:48.238401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.659 [2024-10-11 12:05:48.248719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.659 [2024-10-11 12:05:48.248964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.659 [2024-10-11 12:05:48.248983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.659 [2024-10-11 12:05:48.259221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.659 [2024-10-11 12:05:48.259459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.659 [2024-10-11 12:05:48.259475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.659 [2024-10-11 12:05:48.269676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.659 [2024-10-11 12:05:48.269771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.659 [2024-10-11 12:05:48.269788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.659 [2024-10-11 12:05:48.279499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.659 [2024-10-11 12:05:48.279774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.659 [2024-10-11 12:05:48.279791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.659 [2024-10-11 12:05:48.290272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.659 [2024-10-11 12:05:48.290582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.659 [2024-10-11 12:05:48.290599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.659 [2024-10-11 12:05:48.300956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.659 [2024-10-11 12:05:48.301206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.659 [2024-10-11 12:05:48.301222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.659 [2024-10-11 12:05:48.311212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.659 [2024-10-11 12:05:48.311476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.659 [2024-10-11 12:05:48.311494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.659 [2024-10-11 12:05:48.321611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.659 [2024-10-11 12:05:48.321920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.659 [2024-10-11 12:05:48.321938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.659 [2024-10-11 12:05:48.332311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.659 [2024-10-11 12:05:48.332537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.660 [2024-10-11 12:05:48.332553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.660 [2024-10-11 12:05:48.342199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.660 [2024-10-11 12:05:48.342475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.660 [2024-10-11 12:05:48.342492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.660 [2024-10-11 12:05:48.352370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.660 [2024-10-11 12:05:48.352632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.660 [2024-10-11 12:05:48.352648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.922 [2024-10-11 12:05:48.362198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.922 [2024-10-11 12:05:48.362465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.922 [2024-10-11 12:05:48.362481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.922 [2024-10-11 12:05:48.373479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.922 [2024-10-11 12:05:48.373691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.922 [2024-10-11 12:05:48.373707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.922 [2024-10-11 12:05:48.383685] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.922 [2024-10-11 12:05:48.383892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.922 [2024-10-11 12:05:48.383909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.922 [2024-10-11 12:05:48.393541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.922 [2024-10-11 12:05:48.393783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.922 [2024-10-11 12:05:48.393801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.922 [2024-10-11 12:05:48.403842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.922 [2024-10-11 12:05:48.404034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.922 [2024-10-11 12:05:48.404050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.922 [2024-10-11 12:05:48.414709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.922 [2024-10-11 12:05:48.414974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.922 [2024-10-11 12:05:48.414992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.922 [2024-10-11 12:05:48.425469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.922 [2024-10-11 12:05:48.425725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.922 [2024-10-11 12:05:48.425742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.922 [2024-10-11 12:05:48.435886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.922 [2024-10-11 12:05:48.436155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.922 [2024-10-11 12:05:48.436172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.922 [2024-10-11 12:05:48.446969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.922 [2024-10-11 12:05:48.447126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.922 [2024-10-11 12:05:48.447143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
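Editorial note: every record in this stretch repeats the same injected-failure pattern: tcp.c:2233:data_crc32_calc_done reports a data digest error on the qpair, the offending WRITE command is printed, and its completion comes back as COMMAND TRANSIENT TRANSPORT ERROR (00/22). For anyone reading a saved copy of this console output, the occurrences can be tallied with a one-liner (a convenience sketch only; the file name console.log is a placeholder, not something the test scripts write):

# count injected data-digest errors in a saved copy of this console output;
# grep -o counts every match even when several records share one wrapped line
grep -o 'data_crc32_calc_done: \*ERROR\*: Data digest error' console.log | wc -l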
00:28:45.922 [2024-10-11 12:05:48.457575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.923 [2024-10-11 12:05:48.457783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.923 [2024-10-11 12:05:48.457800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.923 [2024-10-11 12:05:48.467714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.923 [2024-10-11 12:05:48.467983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.923 [2024-10-11 12:05:48.468000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.923 [2024-10-11 12:05:48.478044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.923 [2024-10-11 12:05:48.478341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.923 [2024-10-11 12:05:48.478359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.923 [2024-10-11 12:05:48.486505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.923 [2024-10-11 12:05:48.486772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.923 [2024-10-11 12:05:48.486788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.923 [2024-10-11 12:05:48.496915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.923 [2024-10-11 12:05:48.497196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.923 [2024-10-11 12:05:48.497214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.923 [2024-10-11 12:05:48.506885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.923 [2024-10-11 12:05:48.507179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.923 [2024-10-11 12:05:48.507197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.923 [2024-10-11 12:05:48.517245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.923 [2024-10-11 12:05:48.517505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.923 [2024-10-11 12:05:48.517529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.923 [2024-10-11 12:05:48.527956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.923 [2024-10-11 12:05:48.528189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.923 [2024-10-11 12:05:48.528206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.923 [2024-10-11 12:05:48.538363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.923 [2024-10-11 12:05:48.538613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.923 [2024-10-11 12:05:48.538631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.923 [2024-10-11 12:05:48.548478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.923 [2024-10-11 12:05:48.548734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.923 [2024-10-11 12:05:48.548751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.923 [2024-10-11 12:05:48.559474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ee4200) with pdu=0x2000166fef90 00:28:45.923 [2024-10-11 12:05:48.559704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.923 [2024-10-11 12:05:48.559721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.923 3599.00 IOPS, 449.88 MiB/s 00:28:45.923 Latency(us) 00:28:45.923 [2024-10-11T10:05:48.626Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:45.923 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:45.923 nvme0n1 : 2.01 3597.19 449.65 0.00 0.00 4440.05 1597.44 12069.55 00:28:45.923 [2024-10-11T10:05:48.626Z] =================================================================================================================== 00:28:45.923 [2024-10-11T10:05:48.626Z] Total : 3597.19 449.65 0.00 0.00 4440.05 1597.44 12069.55 00:28:45.923 { 00:28:45.923 "results": [ 00:28:45.923 { 00:28:45.923 "job": "nvme0n1", 00:28:45.923 "core_mask": "0x2", 00:28:45.923 "workload": "randwrite", 00:28:45.923 "status": "finished", 00:28:45.923 "queue_depth": 16, 00:28:45.923 "io_size": 131072, 00:28:45.923 "runtime": 2.006289, 00:28:45.923 "iops": 3597.188640320512, 00:28:45.923 "mibps": 449.648580040064, 00:28:45.923 "io_failed": 0, 00:28:45.923 "io_timeout": 0, 00:28:45.923 "avg_latency_us": 4440.0457438455505, 00:28:45.923 "min_latency_us": 1597.44, 00:28:45.923 "max_latency_us": 12069.546666666667 00:28:45.923 } 00:28:45.923 ], 00:28:45.923 "core_count": 1 00:28:45.923 } 00:28:45.923 12:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:45.923 12:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:45.923 12:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:45.923 | .driver_specific 00:28:45.923 | .nvme_error 00:28:45.923 | .status_code 00:28:45.923 | .command_transient_transport_error' 00:28:45.923 12:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:46.184 12:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 232 > 0 )) 00:28:46.184 12:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2103001 00:28:46.184 12:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2103001 ']' 00:28:46.184 12:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2103001 00:28:46.184 12:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:28:46.184 12:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:46.184 12:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2103001 00:28:46.184 12:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:46.184 12:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:46.184 12:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2103001' 00:28:46.184 killing process with pid 2103001 00:28:46.184 12:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2103001 00:28:46.184 Received shutdown signal, test time was about 2.000000 seconds 00:28:46.184 00:28:46.184 Latency(us) 00:28:46.184 [2024-10-11T10:05:48.887Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:46.184 [2024-10-11T10:05:48.887Z] =================================================================================================================== 00:28:46.184 [2024-10-11T10:05:48.887Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:46.184 12:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2103001 00:28:46.446 12:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2100843 00:28:46.446 12:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2100843 ']' 00:28:46.446 12:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2100843 00:28:46.446 12:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:28:46.446 12:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:46.446 12:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2100843 00:28:46.446 12:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:46.446 12:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:46.446 12:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2100843' 00:28:46.446 killing process with pid 2100843 00:28:46.446 12:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2100843 00:28:46.446 12:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2100843 00:28:46.446 00:28:46.446 real 0m15.783s 00:28:46.446 user 0m31.174s 00:28:46.446 sys 0m3.532s 00:28:46.446 12:05:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:46.446 12:05:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:46.446 ************************************ 00:28:46.446 END TEST nvmf_digest_error 00:28:46.446 ************************************ 00:28:46.446 12:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:28:46.446 12:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:28:46.446 12:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:46.446 12:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:28:46.446 12:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:46.446 12:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:28:46.446 12:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:46.446 12:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:46.446 rmmod nvme_tcp 00:28:46.707 rmmod nvme_fabrics 00:28:46.707 rmmod nvme_keyring 00:28:46.707 12:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:46.707 12:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:28:46.707 12:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:28:46.707 12:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@515 -- # '[' -n 2100843 ']' 00:28:46.707 12:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # killprocess 2100843 00:28:46.707 12:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 2100843 ']' 00:28:46.707 12:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 2100843 00:28:46.707 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2100843) - No such process 00:28:46.707 12:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 2100843 is not found' 00:28:46.707 Process with pid 2100843 is not found 00:28:46.707 12:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:46.707 12:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:46.707 12:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:46.707 12:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:28:46.707 12:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-save 00:28:46.707 12:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:46.707 12:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-restore 00:28:46.707 12:05:49 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:46.707 12:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:46.707 12:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:46.707 12:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:46.707 12:05:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:48.621 12:05:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:48.621 00:28:48.621 real 0m42.282s 00:28:48.621 user 1m5.407s 00:28:48.621 sys 0m13.136s 00:28:48.621 12:05:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:48.621 12:05:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:48.621 ************************************ 00:28:48.621 END TEST nvmf_digest 00:28:48.621 ************************************ 00:28:48.882 12:05:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:28:48.882 12:05:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:28:48.882 12:05:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:28:48.882 12:05:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:48.882 12:05:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:48.882 12:05:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:48.882 12:05:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.882 ************************************ 00:28:48.882 START TEST nvmf_bdevperf 00:28:48.882 ************************************ 00:28:48.882 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:48.882 * Looking for test storage... 
00:28:48.882 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:48.882 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:48.882 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:28:48.882 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:48.882 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:48.882 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:48.882 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:48.882 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:48.882 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:28:48.882 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:28:48.882 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:28:48.882 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:28:48.882 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:28:48.882 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:28:48.882 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:28:48.882 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:48.882 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:28:48.882 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:28:48.882 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:48.882 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:48.882 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:28:48.882 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:28:48.882 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:48.882 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:28:48.882 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:28:48.882 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:28:48.882 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:28:48.882 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:48.882 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:28:48.882 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:28:48.882 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:48.882 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:48.882 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:28:48.882 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:48.882 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:48.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.882 --rc genhtml_branch_coverage=1 00:28:48.882 --rc genhtml_function_coverage=1 00:28:48.882 --rc genhtml_legend=1 00:28:48.882 --rc geninfo_all_blocks=1 00:28:48.882 --rc geninfo_unexecuted_blocks=1 00:28:48.882 00:28:48.882 ' 00:28:48.883 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:48.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.883 --rc genhtml_branch_coverage=1 00:28:48.883 --rc genhtml_function_coverage=1 00:28:48.883 --rc genhtml_legend=1 00:28:48.883 --rc geninfo_all_blocks=1 00:28:48.883 --rc geninfo_unexecuted_blocks=1 00:28:48.883 00:28:48.883 ' 00:28:48.883 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:48.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.883 --rc genhtml_branch_coverage=1 00:28:48.883 --rc genhtml_function_coverage=1 00:28:48.883 --rc genhtml_legend=1 00:28:48.883 --rc geninfo_all_blocks=1 00:28:48.883 --rc geninfo_unexecuted_blocks=1 00:28:48.883 00:28:48.883 ' 00:28:48.883 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:48.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.883 --rc genhtml_branch_coverage=1 00:28:48.883 --rc genhtml_function_coverage=1 00:28:48.883 --rc genhtml_legend=1 00:28:48.883 --rc geninfo_all_blocks=1 00:28:48.883 --rc geninfo_unexecuted_blocks=1 00:28:48.883 00:28:48.883 ' 00:28:48.883 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:48.883 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:28:48.883 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:48.883 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:48.883 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:48.883 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:48.883 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:48.883 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:48.883 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:48.883 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:48.883 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:48.883 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:49.144 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:49.144 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:49.144 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:49.144 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:49.144 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:49.144 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:49.144 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:49.144 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:28:49.144 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:49.144 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:49.144 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:49.145 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.145 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.145 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.145 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:28:49.145 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.145 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:28:49.145 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:49.145 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:49.145 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:49.145 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:49.145 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:49.145 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:49.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:49.145 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:49.145 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:49.145 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:49.145 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:49.145 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:49.145 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:49.145 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:49.145 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:49.145 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:49.145 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:49.145 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:49.145 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:49.145 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:49.145 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:49.145 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:49.145 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:49.145 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:28:49.145 12:05:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:57.293 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:57.293 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:28:57.293 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:57.293 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:57.293 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:57.293 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:57.293 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:57.293 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:28:57.293 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:57.293 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:28:57.293 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:28:57.293 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:28:57.293 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:28:57.293 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:28:57.293 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:28:57.293 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:57.293 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:57.293 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:57.294 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:57.294 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
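Editorial note: the pci_net_devs lookups being traced here reduce to a sysfs listing: each matched E810 function is checked for the kernel network interfaces it exposes. A standalone sketch of that check, assuming the two PCI addresses found above (illustrative only, not an excerpt of nvmf/common.sh):

# list the net devices the kernel created under each matched PCI function;
# on this host the output is cvl_0_0 and cvl_0_1, matching the
# "Found net devices under ..." lines later in the trace
for pci in 0000:31:00.0 0000:31:00.1; do
    echo "net devices under $pci:"
    ls "/sys/bus/pci/devices/$pci/net/"
done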
00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:57.294 Found net devices under 0000:31:00.0: cvl_0_0 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:57.294 Found net devices under 0000:31:00.1: cvl_0_1 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # is_hw=yes 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:57.294 12:05:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:57.294 12:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:57.294 12:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:57.294 12:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:57.294 12:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:57.294 12:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:57.294 12:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:57.294 12:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:57.294 12:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:57.294 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:57.294 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.532 ms 00:28:57.294 00:28:57.294 --- 10.0.0.2 ping statistics --- 00:28:57.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:57.294 rtt min/avg/max/mdev = 0.532/0.532/0.532/0.000 ms 00:28:57.294 12:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:57.294 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:57.294 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.254 ms 00:28:57.294 00:28:57.294 --- 10.0.0.1 ping statistics --- 00:28:57.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:57.294 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:28:57.294 12:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:57.294 12:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # return 0 00:28:57.294 12:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:57.294 12:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:57.294 12:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:57.294 12:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:57.294 12:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:57.294 12:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:57.294 12:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:57.294 12:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:28:57.294 12:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:57.294 12:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:57.294 12:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:57.294 12:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:57.294 12:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=2108061 00:28:57.294 12:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 2108061 00:28:57.294 12:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:57.294 12:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 2108061 ']' 00:28:57.294 12:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:57.294 12:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:57.294 12:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:57.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:57.294 12:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:57.294 12:05:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:57.294 [2024-10-11 12:05:59.408673] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
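Annotation: the nvmf_tcp_init sequence traced above can be reproduced by hand with plain iproute2/iptables commands. The sketch below only restates commands and names visible in the trace (interfaces cvl_0_0/cvl_0_1, namespace cvl_0_0_ns_spdk, addresses 10.0.0.1/10.0.0.2, port 4420, and the nvmf_tgt invocation); treat it as a rough manual equivalent, not the harness itself.

```bash
# Minimal manual equivalent of the nvmf_tcp_init sequence in the trace above.
# SPDK_DIR mirrors the workspace path shown in the log; adjust for your checkout.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add cvl_0_0_ns_spdk                    # target gets its own netns
ip link set cvl_0_0 netns cvl_0_0_ns_spdk       # move the target-side port into it

ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side stays in the root netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Let NVMe/TCP traffic in on the initiator-side interface (the harness also
# tags this rule with an SPDK_NVMF comment).
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                              # root netns -> target netns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# Start the target inside the namespace, as the harness does (core mask 0xE).
ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
```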
00:28:57.294 [2024-10-11 12:05:59.408739] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:57.294 [2024-10-11 12:05:59.498360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:57.294 [2024-10-11 12:05:59.551025] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:57.294 [2024-10-11 12:05:59.551085] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:57.295 [2024-10-11 12:05:59.551095] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:57.295 [2024-10-11 12:05:59.551102] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:57.295 [2024-10-11 12:05:59.551109] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:57.295 [2024-10-11 12:05:59.552956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:57.295 [2024-10-11 12:05:59.553128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:57.295 [2024-10-11 12:05:59.553154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:57.556 12:06:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:57.556 12:06:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:28:57.556 12:06:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:57.556 12:06:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:57.556 12:06:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:57.816 12:06:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:57.816 12:06:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:57.816 12:06:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:57.816 12:06:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:57.817 [2024-10-11 12:06:00.284144] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:57.817 12:06:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:57.817 12:06:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:57.817 12:06:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:57.817 12:06:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:57.817 Malloc0 00:28:57.817 12:06:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:57.817 12:06:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:57.817 12:06:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:57.817 12:06:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:57.817 12:06:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:28:57.817 12:06:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:57.817 12:06:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:57.817 12:06:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:57.817 12:06:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:57.817 12:06:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:57.817 12:06:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:57.817 12:06:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:57.817 [2024-10-11 12:06:00.367157] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:57.817 12:06:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:57.817 12:06:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:28:57.817 12:06:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:28:57.817 12:06:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:28:57.817 12:06:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:28:57.817 12:06:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:57.817 12:06:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:57.817 { 00:28:57.817 "params": { 00:28:57.817 "name": "Nvme$subsystem", 00:28:57.817 "trtype": "$TEST_TRANSPORT", 00:28:57.817 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:57.817 "adrfam": "ipv4", 00:28:57.817 "trsvcid": "$NVMF_PORT", 00:28:57.817 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:57.817 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:57.817 "hdgst": ${hdgst:-false}, 00:28:57.817 "ddgst": ${ddgst:-false} 00:28:57.817 }, 00:28:57.817 "method": "bdev_nvme_attach_controller" 00:28:57.817 } 00:28:57.817 EOF 00:28:57.817 )") 00:28:57.817 12:06:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:28:57.817 12:06:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 00:28:57.817 12:06:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:28:57.817 12:06:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:28:57.817 "params": { 00:28:57.817 "name": "Nvme1", 00:28:57.817 "trtype": "tcp", 00:28:57.817 "traddr": "10.0.0.2", 00:28:57.817 "adrfam": "ipv4", 00:28:57.817 "trsvcid": "4420", 00:28:57.817 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:57.817 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:57.817 "hdgst": false, 00:28:57.817 "ddgst": false 00:28:57.817 }, 00:28:57.817 "method": "bdev_nvme_attach_controller" 00:28:57.817 }' 00:28:57.817 [2024-10-11 12:06:00.427388] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
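Annotation: the rpc_cmd calls above (transport, malloc bdev, subsystem, namespace, listener) ultimately drive SPDK's scripts/rpc.py against the /var/tmp/spdk.sock socket that waitforlisten polled for. A hand-run equivalent would look roughly like this sketch, reusing the exact arguments from the trace; the SPDK_DIR variable is carried over from the previous sketch.

```bash
# Rough manual equivalent of the tgt_init RPC sequence above.
RPC="$SPDK_DIR/scripts/rpc.py"   # talks to the default /var/tmp/spdk.sock

"$RPC" nvmf_create_transport -t tcp -o -u 8192
"$RPC" bdev_malloc_create 64 512 -b Malloc0
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```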
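Annotation: bdevperf gets its NVMe controller through the --json /dev/fd/62 config rather than over RPC. The inner bdev_nvme_attach_controller object is printed verbatim in the trace; the "subsystems"/"bdev"/"config" envelope below is an assumption about what gen_nvmf_target_json wraps around it, and writing it to a temp file (instead of a process-substitution fd) is a simplification for manual use.

```bash
# Sketch of the JSON handed to bdevperf.  Inner object copied from the trace;
# the outer envelope and the temp-file path are assumptions for illustration.
cat <<'JSON' > /tmp/bdevperf_nvme.json
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
JSON

# One-second verify run, mirroring the command line in the trace.
"$SPDK_DIR/build/examples/bdevperf" --json /tmp/bdevperf_nvme.json -q 128 -o 4096 -w verify -t 1
```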
00:28:57.817 [2024-10-11 12:06:00.427472] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2108381 ] 00:28:57.817 [2024-10-11 12:06:00.512482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.078 [2024-10-11 12:06:00.565754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:58.338 Running I/O for 1 seconds... 00:28:59.280 8688.00 IOPS, 33.94 MiB/s 00:28:59.280 Latency(us) 00:28:59.280 [2024-10-11T10:06:01.983Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:59.280 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:59.280 Verification LBA range: start 0x0 length 0x4000 00:28:59.280 Nvme1n1 : 1.01 8720.57 34.06 0.00 0.00 14617.17 2498.56 14199.47 00:28:59.280 [2024-10-11T10:06:01.983Z] =================================================================================================================== 00:28:59.280 [2024-10-11T10:06:01.983Z] Total : 8720.57 34.06 0.00 0.00 14617.17 2498.56 14199.47 00:28:59.541 12:06:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2108807 00:28:59.541 12:06:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:28:59.541 12:06:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:28:59.541 12:06:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:28:59.541 12:06:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:28:59.541 12:06:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:28:59.541 12:06:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:59.541 12:06:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:59.541 { 00:28:59.541 "params": { 00:28:59.541 "name": "Nvme$subsystem", 00:28:59.541 "trtype": "$TEST_TRANSPORT", 00:28:59.541 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:59.541 "adrfam": "ipv4", 00:28:59.541 "trsvcid": "$NVMF_PORT", 00:28:59.541 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:59.541 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:59.541 "hdgst": ${hdgst:-false}, 00:28:59.541 "ddgst": ${ddgst:-false} 00:28:59.541 }, 00:28:59.541 "method": "bdev_nvme_attach_controller" 00:28:59.541 } 00:28:59.541 EOF 00:28:59.541 )") 00:28:59.541 12:06:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:28:59.541 12:06:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 
00:28:59.541 12:06:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:28:59.541 12:06:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:28:59.541 "params": { 00:28:59.541 "name": "Nvme1", 00:28:59.541 "trtype": "tcp", 00:28:59.541 "traddr": "10.0.0.2", 00:28:59.541 "adrfam": "ipv4", 00:28:59.541 "trsvcid": "4420", 00:28:59.542 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:59.542 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:59.542 "hdgst": false, 00:28:59.542 "ddgst": false 00:28:59.542 }, 00:28:59.542 "method": "bdev_nvme_attach_controller" 00:28:59.542 }' 00:28:59.542 [2024-10-11 12:06:02.081423] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:28:59.542 [2024-10-11 12:06:02.081481] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2108807 ] 00:28:59.542 [2024-10-11 12:06:02.161286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:59.542 [2024-10-11 12:06:02.196304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:59.801 Running I/O for 15 seconds... 00:29:02.131 9536.00 IOPS, 37.25 MiB/s [2024-10-11T10:06:05.097Z] 10360.50 IOPS, 40.47 MiB/s [2024-10-11T10:06:05.097Z] 12:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2108061 00:29:02.394 12:06:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:29:02.394 [2024-10-11 12:06:05.044279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:86592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.395 [2024-10-11 12:06:05.044322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.395 [2024-10-11 12:06:05.044343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:86600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.395 [2024-10-11 12:06:05.044354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.395 [2024-10-11 12:06:05.044364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:86608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.395 [2024-10-11 12:06:05.044372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.395 [2024-10-11 12:06:05.044382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:86616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.395 [2024-10-11 12:06:05.044543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.395 [2024-10-11 12:06:05.044559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:86624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.395 [2024-10-11 12:06:05.044568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.395 [2024-10-11 12:06:05.044578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:86632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.395 [2024-10-11 
12:06:05.044629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.395 [2024-10-11 12:06:05.044644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:86640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.395 [2024-10-11 12:06:05.044654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.395 [2024-10-11 12:06:05.044667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:86648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.395 [2024-10-11 12:06:05.044677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.395 [2024-10-11 12:06:05.044690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:86656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.395 [2024-10-11 12:06:05.044711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.395 [2024-10-11 12:06:05.044725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:86664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.395 [2024-10-11 12:06:05.044733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.395 [2024-10-11 12:06:05.044744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:86672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.395 [2024-10-11 12:06:05.044751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.395 [2024-10-11 12:06:05.044763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:86680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.395 [2024-10-11 12:06:05.044772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.395 [2024-10-11 12:06:05.044783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:86688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.395 [2024-10-11 12:06:05.044792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.395 [2024-10-11 12:06:05.044802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:86696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.395 [2024-10-11 12:06:05.044811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.395 [2024-10-11 12:06:05.044821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:86704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.395 [2024-10-11 12:06:05.044829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.395 [2024-10-11 12:06:05.044839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:86712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.395 [2024-10-11 12:06:05.044846] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.395 [2024-10-11 12:06:05.044856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:86720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.395 [2024-10-11 12:06:05.044864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.395 [2024-10-11 12:06:05.044874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:86728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.395 [2024-10-11 12:06:05.044882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.395 [2024-10-11 12:06:05.044892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:86736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.395 [2024-10-11 12:06:05.044899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.395 [2024-10-11 12:06:05.044909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:86744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.395 [2024-10-11 12:06:05.044916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.395 [2024-10-11 12:06:05.044926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:86752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.395 [2024-10-11 12:06:05.044933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.395 [2024-10-11 12:06:05.044943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:86760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.395 [2024-10-11 12:06:05.044953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.395 [2024-10-11 12:06:05.044962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:86768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.395 [2024-10-11 12:06:05.044970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.395 [2024-10-11 12:06:05.044979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:86776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.395 [2024-10-11 12:06:05.044987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.395 [2024-10-11 12:06:05.044996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:86784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.395 [2024-10-11 12:06:05.045004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.395 [2024-10-11 12:06:05.045013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:86792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.395 [2024-10-11 12:06:05.045020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.395 [2024-10-11 12:06:05.045030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:86800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.395 [2024-10-11 12:06:05.045037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.395 [2024-10-11 12:06:05.045047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:86808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.395 [2024-10-11 12:06:05.045055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.395 [2024-10-11 12:06:05.045069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.395 [2024-10-11 12:06:05.045077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.395 [2024-10-11 12:06:05.045087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:86824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.395 [2024-10-11 12:06:05.045094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.395 [2024-10-11 12:06:05.045104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.395 [2024-10-11 12:06:05.045111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.395 [2024-10-11 12:06:05.045121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:86840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.395 [2024-10-11 12:06:05.045128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.395 [2024-10-11 12:06:05.045138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:86848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.395 [2024-10-11 12:06:05.045145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.395 [2024-10-11 12:06:05.045155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:86856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.395 [2024-10-11 12:06:05.045165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.395 [2024-10-11 12:06:05.045177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:86864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.395 [2024-10-11 12:06:05.045185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.395 [2024-10-11 12:06:05.045194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:86872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.395 [2024-10-11 12:06:05.045201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.395 [2024-10-11 12:06:05.045211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:86880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.395 [2024-10-11 12:06:05.045218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.395 [2024-10-11 12:06:05.045229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:86888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.395 [2024-10-11 12:06:05.045237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.396 [2024-10-11 12:06:05.045248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:86896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.396 [2024-10-11 12:06:05.045255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.396 [2024-10-11 12:06:05.045265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:86904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.396 [2024-10-11 12:06:05.045273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.396 [2024-10-11 12:06:05.045283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:86912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.396 [2024-10-11 12:06:05.045291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.396 [2024-10-11 12:06:05.045300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:86920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.396 [2024-10-11 12:06:05.045308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.396 [2024-10-11 12:06:05.045317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:86928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.396 [2024-10-11 12:06:05.045324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.396 [2024-10-11 12:06:05.045335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:86936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.396 [2024-10-11 12:06:05.045343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.396 [2024-10-11 12:06:05.045353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:86944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.396 [2024-10-11 12:06:05.045360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.396 [2024-10-11 12:06:05.045370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:86952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.396 [2024-10-11 12:06:05.045377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:02.396 [2024-10-11 12:06:05.045386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:86960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.396 [2024-10-11 12:06:05.045395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.396 [2024-10-11 12:06:05.045405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:86968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.396 [2024-10-11 12:06:05.045412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.396 [2024-10-11 12:06:05.045422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:86976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.396 [2024-10-11 12:06:05.045429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.396 [2024-10-11 12:06:05.045438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:86984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.396 [2024-10-11 12:06:05.045446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.396 [2024-10-11 12:06:05.045456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:86992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.396 [2024-10-11 12:06:05.045464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.396 [2024-10-11 12:06:05.045474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:87000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.396 [2024-10-11 12:06:05.045481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.396 [2024-10-11 12:06:05.045490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:87008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.396 [2024-10-11 12:06:05.045498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.396 [2024-10-11 12:06:05.045508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:87016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.396 [2024-10-11 12:06:05.045516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.396 [2024-10-11 12:06:05.045525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:87024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.396 [2024-10-11 12:06:05.045533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.396 [2024-10-11 12:06:05.045542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.396 [2024-10-11 12:06:05.045549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.396 [2024-10-11 12:06:05.045559] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:87040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.396 [2024-10-11 12:06:05.045567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.396 [2024-10-11 12:06:05.045577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:87048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.396 [2024-10-11 12:06:05.045584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.396 [2024-10-11 12:06:05.045593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:87056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.396 [2024-10-11 12:06:05.045601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.396 [2024-10-11 12:06:05.045612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:87064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.396 [2024-10-11 12:06:05.045620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.396 [2024-10-11 12:06:05.045629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:87072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.396 [2024-10-11 12:06:05.045637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.396 [2024-10-11 12:06:05.045646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:87080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.396 [2024-10-11 12:06:05.045653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.396 [2024-10-11 12:06:05.045663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:87088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.396 [2024-10-11 12:06:05.045671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.396 [2024-10-11 12:06:05.045680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:87096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.396 [2024-10-11 12:06:05.045688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.396 [2024-10-11 12:06:05.045697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:87104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.396 [2024-10-11 12:06:05.045704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.396 [2024-10-11 12:06:05.045714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.396 [2024-10-11 12:06:05.045722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.396 [2024-10-11 12:06:05.045732] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:87120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.396 [2024-10-11 12:06:05.045739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.396 [2024-10-11 12:06:05.045749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:87128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.396 [2024-10-11 12:06:05.045756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.396 [2024-10-11 12:06:05.045765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:87136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.396 [2024-10-11 12:06:05.045773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.396 [2024-10-11 12:06:05.045782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:87144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.396 [2024-10-11 12:06:05.045790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.396 [2024-10-11 12:06:05.045799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:87152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.396 [2024-10-11 12:06:05.045807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.396 [2024-10-11 12:06:05.045816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.396 [2024-10-11 12:06:05.045825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.396 [2024-10-11 12:06:05.045835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.396 [2024-10-11 12:06:05.045842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.396 [2024-10-11 12:06:05.045852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:87176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.396 [2024-10-11 12:06:05.045859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.396 [2024-10-11 12:06:05.045869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:87184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.396 [2024-10-11 12:06:05.045876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.396 [2024-10-11 12:06:05.045885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:87192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.396 [2024-10-11 12:06:05.045893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.396 [2024-10-11 12:06:05.045903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:41 nsid:1 lba:87200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.396 [2024-10-11 12:06:05.045910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.396 [2024-10-11 12:06:05.045920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.396 [2024-10-11 12:06:05.045927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.396 [2024-10-11 12:06:05.045936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:87216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.396 [2024-10-11 12:06:05.045944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.396 [2024-10-11 12:06:05.045954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:87224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.396 [2024-10-11 12:06:05.045961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.397 [2024-10-11 12:06:05.045971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.397 [2024-10-11 12:06:05.045978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.397 [2024-10-11 12:06:05.045988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:87240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.397 [2024-10-11 12:06:05.045995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.397 [2024-10-11 12:06:05.046004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:87248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.397 [2024-10-11 12:06:05.046012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.397 [2024-10-11 12:06:05.046021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:87256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.397 [2024-10-11 12:06:05.046029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.397 [2024-10-11 12:06:05.046038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:87264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.397 [2024-10-11 12:06:05.046046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.397 [2024-10-11 12:06:05.046056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:87272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.397 [2024-10-11 12:06:05.046070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.397 [2024-10-11 12:06:05.046079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:87280 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:29:02.397 [2024-10-11 12:06:05.046087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.397 [2024-10-11 12:06:05.046096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:87288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.397 [2024-10-11 12:06:05.046103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.397 [2024-10-11 12:06:05.046112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:87296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.397 [2024-10-11 12:06:05.046120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.397 [2024-10-11 12:06:05.046130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:87304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.397 [2024-10-11 12:06:05.046137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.397 [2024-10-11 12:06:05.046146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:87312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.397 [2024-10-11 12:06:05.046153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.397 [2024-10-11 12:06:05.046162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:87320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.397 [2024-10-11 12:06:05.046170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.397 [2024-10-11 12:06:05.046180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:87328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.397 [2024-10-11 12:06:05.046187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.397 [2024-10-11 12:06:05.046197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:87336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.397 [2024-10-11 12:06:05.046204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.397 [2024-10-11 12:06:05.046213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:87344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.397 [2024-10-11 12:06:05.046220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.397 [2024-10-11 12:06:05.046230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:87352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.397 [2024-10-11 12:06:05.046237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.397 [2024-10-11 12:06:05.046246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:87360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.397 [2024-10-11 
12:06:05.046254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.397 [2024-10-11 12:06:05.046264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:87368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.397 [2024-10-11 12:06:05.046272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.397 [2024-10-11 12:06:05.046281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:87376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.397 [2024-10-11 12:06:05.046289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.397 [2024-10-11 12:06:05.046299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.397 [2024-10-11 12:06:05.046306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.397 [2024-10-11 12:06:05.046315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:87392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.397 [2024-10-11 12:06:05.046323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.397 [2024-10-11 12:06:05.046332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:87400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.397 [2024-10-11 12:06:05.046346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.397 [2024-10-11 12:06:05.046356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:87408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.397 [2024-10-11 12:06:05.046363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.397 [2024-10-11 12:06:05.046373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.397 [2024-10-11 12:06:05.046380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.397 [2024-10-11 12:06:05.046390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:87424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.397 [2024-10-11 12:06:05.046397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.397 [2024-10-11 12:06:05.046407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:87432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.397 [2024-10-11 12:06:05.046414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.397 [2024-10-11 12:06:05.046423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:87440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.397 [2024-10-11 12:06:05.046430] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.397 [2024-10-11 12:06:05.046441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:87448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.397 [2024-10-11 12:06:05.046448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.397 [2024-10-11 12:06:05.046458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.397 [2024-10-11 12:06:05.046465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.397 [2024-10-11 12:06:05.046474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:87464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.397 [2024-10-11 12:06:05.046483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.397 [2024-10-11 12:06:05.046493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:87472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.397 [2024-10-11 12:06:05.046501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.397 [2024-10-11 12:06:05.046510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:87480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.397 [2024-10-11 12:06:05.046517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.397 [2024-10-11 12:06:05.046526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:87488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.397 [2024-10-11 12:06:05.046534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.397 [2024-10-11 12:06:05.046543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:87496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.397 [2024-10-11 12:06:05.046551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.397 [2024-10-11 12:06:05.046560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:87504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.397 [2024-10-11 12:06:05.046567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.397 [2024-10-11 12:06:05.046577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:87512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.397 [2024-10-11 12:06:05.046584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.397 [2024-10-11 12:06:05.046593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:87520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.397 [2024-10-11 12:06:05.046601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.397 [2024-10-11 12:06:05.046610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:87528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.397 [2024-10-11 12:06:05.046617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.397 [2024-10-11 12:06:05.046626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:87536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.397 [2024-10-11 12:06:05.046634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.397 [2024-10-11 12:06:05.046643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:87544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.397 [2024-10-11 12:06:05.046651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.397 [2024-10-11 12:06:05.046660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:87552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.397 [2024-10-11 12:06:05.046668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.397 [2024-10-11 12:06:05.046677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:87560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.397 [2024-10-11 12:06:05.046684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.398 [2024-10-11 12:06:05.046693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:87568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.398 [2024-10-11 12:06:05.046703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.398 [2024-10-11 12:06:05.046712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:87576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.398 [2024-10-11 12:06:05.046719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.398 [2024-10-11 12:06:05.046729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:87584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.398 [2024-10-11 12:06:05.046736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.398 [2024-10-11 12:06:05.046745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:87592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.398 [2024-10-11 12:06:05.046753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.398 [2024-10-11 12:06:05.046762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:87600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.398 [2024-10-11 12:06:05.046770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:29:02.398 [2024-10-11 12:06:05.046779] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d31b0 is same with the state(6) to be set 00:29:02.398 [2024-10-11 12:06:05.046789] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:02.398 [2024-10-11 12:06:05.046796] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:02.398 [2024-10-11 12:06:05.046803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87608 len:8 PRP1 0x0 PRP2 0x0 00:29:02.398 [2024-10-11 12:06:05.046811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.398 [2024-10-11 12:06:05.046855] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x15d31b0 was disconnected and freed. reset controller. 00:29:02.398 [2024-10-11 12:06:05.050408] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.398 [2024-10-11 12:06:05.050459] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:02.398 [2024-10-11 12:06:05.051378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.398 [2024-10-11 12:06:05.051416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:02.398 [2024-10-11 12:06:05.051428] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:02.398 [2024-10-11 12:06:05.051669] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:02.398 [2024-10-11 12:06:05.051895] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.398 [2024-10-11 12:06:05.051906] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.398 [2024-10-11 12:06:05.051915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.398 [2024-10-11 12:06:05.055475] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:02.398 [2024-10-11 12:06:05.064495] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.398 [2024-10-11 12:06:05.065037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.398 [2024-10-11 12:06:05.065056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:02.398 [2024-10-11 12:06:05.065078] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:02.398 [2024-10-11 12:06:05.065299] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:02.398 [2024-10-11 12:06:05.065519] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.398 [2024-10-11 12:06:05.065528] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.398 [2024-10-11 12:06:05.065535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.398 [2024-10-11 12:06:05.069088] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:02.398 [2024-10-11 12:06:05.078294] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.398 [2024-10-11 12:06:05.078929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.398 [2024-10-11 12:06:05.078970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:02.398 [2024-10-11 12:06:05.078981] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:02.398 [2024-10-11 12:06:05.079232] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:02.398 [2024-10-11 12:06:05.079457] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.398 [2024-10-11 12:06:05.079466] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.398 [2024-10-11 12:06:05.079474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.398 [2024-10-11 12:06:05.083026] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:02.398 [2024-10-11 12:06:05.092259] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.398 [2024-10-11 12:06:05.092882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.398 [2024-10-11 12:06:05.092924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:02.398 [2024-10-11 12:06:05.092936] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:02.398 [2024-10-11 12:06:05.093186] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:02.398 [2024-10-11 12:06:05.093412] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.398 [2024-10-11 12:06:05.093422] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.398 [2024-10-11 12:06:05.093430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.660 [2024-10-11 12:06:05.096985] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:02.660 [2024-10-11 12:06:05.106210] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.660 [2024-10-11 12:06:05.106749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.660 [2024-10-11 12:06:05.106769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:02.661 [2024-10-11 12:06:05.106778] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:02.661 [2024-10-11 12:06:05.106999] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:02.661 [2024-10-11 12:06:05.107229] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.661 [2024-10-11 12:06:05.107244] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.661 [2024-10-11 12:06:05.107252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.661 [2024-10-11 12:06:05.110803] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:02.661 [2024-10-11 12:06:05.120020] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.661 [2024-10-11 12:06:05.120636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.661 [2024-10-11 12:06:05.120682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:02.661 [2024-10-11 12:06:05.120694] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:02.661 [2024-10-11 12:06:05.120936] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:02.661 [2024-10-11 12:06:05.121172] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.661 [2024-10-11 12:06:05.121183] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.661 [2024-10-11 12:06:05.121191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.661 [2024-10-11 12:06:05.124749] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:02.661 [2024-10-11 12:06:05.133974] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.661 [2024-10-11 12:06:05.134533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.661 [2024-10-11 12:06:05.134555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:02.661 [2024-10-11 12:06:05.134564] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:02.661 [2024-10-11 12:06:05.134784] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:02.661 [2024-10-11 12:06:05.135005] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.661 [2024-10-11 12:06:05.135015] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.661 [2024-10-11 12:06:05.135023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.661 [2024-10-11 12:06:05.138584] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:02.661 [2024-10-11 12:06:05.147802] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.661 [2024-10-11 12:06:05.148450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.661 [2024-10-11 12:06:05.148498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:02.661 [2024-10-11 12:06:05.148510] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:02.661 [2024-10-11 12:06:05.148755] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:02.661 [2024-10-11 12:06:05.148980] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.661 [2024-10-11 12:06:05.148991] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.661 [2024-10-11 12:06:05.148999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.661 [2024-10-11 12:06:05.152582] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:02.661 [2024-10-11 12:06:05.161680] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.661 [2024-10-11 12:06:05.162425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.661 [2024-10-11 12:06:05.162477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:02.661 [2024-10-11 12:06:05.162489] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:02.661 [2024-10-11 12:06:05.162737] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:02.661 [2024-10-11 12:06:05.162964] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.661 [2024-10-11 12:06:05.162974] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.661 [2024-10-11 12:06:05.162983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.661 [2024-10-11 12:06:05.166565] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:02.661 [2024-10-11 12:06:05.175582] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.661 [2024-10-11 12:06:05.176131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.661 [2024-10-11 12:06:05.176168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:02.661 [2024-10-11 12:06:05.176178] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:02.661 [2024-10-11 12:06:05.176411] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:02.661 [2024-10-11 12:06:05.176635] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.661 [2024-10-11 12:06:05.176645] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.661 [2024-10-11 12:06:05.176654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.661 [2024-10-11 12:06:05.180226] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:02.661 [2024-10-11 12:06:05.189477] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.661 [2024-10-11 12:06:05.190078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.661 [2024-10-11 12:06:05.190103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:02.661 [2024-10-11 12:06:05.190112] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:02.661 [2024-10-11 12:06:05.190333] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:02.661 [2024-10-11 12:06:05.190555] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.661 [2024-10-11 12:06:05.190568] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.661 [2024-10-11 12:06:05.190576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.661 [2024-10-11 12:06:05.194144] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:02.661 [2024-10-11 12:06:05.203391] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.661 [2024-10-11 12:06:05.203978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.661 [2024-10-11 12:06:05.204001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:02.661 [2024-10-11 12:06:05.204009] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:02.661 [2024-10-11 12:06:05.204248] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:02.661 [2024-10-11 12:06:05.204472] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.661 [2024-10-11 12:06:05.204483] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.661 [2024-10-11 12:06:05.204491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.661 [2024-10-11 12:06:05.208053] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:02.661 [2024-10-11 12:06:05.217302] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.661 [2024-10-11 12:06:05.217856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.661 [2024-10-11 12:06:05.217877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:02.661 [2024-10-11 12:06:05.217886] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:02.661 [2024-10-11 12:06:05.218114] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:02.661 [2024-10-11 12:06:05.218337] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.661 [2024-10-11 12:06:05.218348] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.661 [2024-10-11 12:06:05.218356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.661 [2024-10-11 12:06:05.221918] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:02.661 [2024-10-11 12:06:05.231167] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.661 [2024-10-11 12:06:05.231739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.661 [2024-10-11 12:06:05.231760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:02.661 [2024-10-11 12:06:05.231768] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:02.661 [2024-10-11 12:06:05.231988] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:02.661 [2024-10-11 12:06:05.232219] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.661 [2024-10-11 12:06:05.232231] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.661 [2024-10-11 12:06:05.232240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.661 [2024-10-11 12:06:05.235800] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:02.661 [2024-10-11 12:06:05.245047] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.661 [2024-10-11 12:06:05.245608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.661 [2024-10-11 12:06:05.245632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:02.661 [2024-10-11 12:06:05.245640] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:02.661 [2024-10-11 12:06:05.245861] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:02.661 [2024-10-11 12:06:05.246093] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.661 [2024-10-11 12:06:05.246106] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.661 [2024-10-11 12:06:05.246125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.662 [2024-10-11 12:06:05.249697] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:02.662 [2024-10-11 12:06:05.258954] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.662 [2024-10-11 12:06:05.259571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.662 [2024-10-11 12:06:05.259596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:02.662 [2024-10-11 12:06:05.259605] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:02.662 [2024-10-11 12:06:05.259826] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:02.662 [2024-10-11 12:06:05.260049] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.662 [2024-10-11 12:06:05.260071] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.662 [2024-10-11 12:06:05.260079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.662 [2024-10-11 12:06:05.263667] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:02.662 [2024-10-11 12:06:05.272914] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.662 [2024-10-11 12:06:05.273520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.662 [2024-10-11 12:06:05.273545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:02.662 [2024-10-11 12:06:05.273554] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:02.662 [2024-10-11 12:06:05.273776] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:02.662 [2024-10-11 12:06:05.273998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.662 [2024-10-11 12:06:05.274011] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.662 [2024-10-11 12:06:05.274019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.662 [2024-10-11 12:06:05.277589] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:02.662 [2024-10-11 12:06:05.286833] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.662 [2024-10-11 12:06:05.287402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.662 [2024-10-11 12:06:05.287426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:02.662 [2024-10-11 12:06:05.287435] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:02.662 [2024-10-11 12:06:05.287657] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:02.662 [2024-10-11 12:06:05.287879] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.662 [2024-10-11 12:06:05.287892] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.662 [2024-10-11 12:06:05.287900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.662 [2024-10-11 12:06:05.291471] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:02.662 [2024-10-11 12:06:05.300695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.662 [2024-10-11 12:06:05.301272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.662 [2024-10-11 12:06:05.301296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:02.662 [2024-10-11 12:06:05.301305] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:02.662 [2024-10-11 12:06:05.301528] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:02.662 [2024-10-11 12:06:05.301751] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.662 [2024-10-11 12:06:05.301763] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.662 [2024-10-11 12:06:05.301771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.662 [2024-10-11 12:06:05.305350] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:02.662 [2024-10-11 12:06:05.314594] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.662 [2024-10-11 12:06:05.315222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.662 [2024-10-11 12:06:05.315287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:02.662 [2024-10-11 12:06:05.315300] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:02.662 [2024-10-11 12:06:05.315557] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:02.662 [2024-10-11 12:06:05.315786] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.662 [2024-10-11 12:06:05.315798] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.662 [2024-10-11 12:06:05.315808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.662 [2024-10-11 12:06:05.319402] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:02.662 [2024-10-11 12:06:05.328445] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.662 [2024-10-11 12:06:05.329155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.662 [2024-10-11 12:06:05.329221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:02.662 [2024-10-11 12:06:05.329235] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:02.662 [2024-10-11 12:06:05.329491] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:02.662 [2024-10-11 12:06:05.329720] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.662 [2024-10-11 12:06:05.329731] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.662 [2024-10-11 12:06:05.329740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.662 [2024-10-11 12:06:05.333324] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:02.662 [2024-10-11 12:06:05.342331] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.662 [2024-10-11 12:06:05.343007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.662 [2024-10-11 12:06:05.343083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:02.662 [2024-10-11 12:06:05.343098] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:02.662 [2024-10-11 12:06:05.343362] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:02.662 [2024-10-11 12:06:05.343592] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.662 [2024-10-11 12:06:05.343603] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.662 [2024-10-11 12:06:05.343612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.662 [2024-10-11 12:06:05.347184] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:02.662 [2024-10-11 12:06:05.356270] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.662 [2024-10-11 12:06:05.356951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.662 [2024-10-11 12:06:05.357016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:02.662 [2024-10-11 12:06:05.357030] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:02.662 [2024-10-11 12:06:05.357302] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:02.662 [2024-10-11 12:06:05.357533] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.662 [2024-10-11 12:06:05.357544] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.662 [2024-10-11 12:06:05.357553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.662 [2024-10-11 12:06:05.361134] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:02.924 [2024-10-11 12:06:05.370172] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.924 [2024-10-11 12:06:05.370882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.924 [2024-10-11 12:06:05.370949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:02.924 [2024-10-11 12:06:05.370963] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:02.924 [2024-10-11 12:06:05.371233] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:02.924 [2024-10-11 12:06:05.371464] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.924 [2024-10-11 12:06:05.371476] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.924 [2024-10-11 12:06:05.371485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.924 [2024-10-11 12:06:05.375053] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:02.924 [2024-10-11 12:06:05.384058] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.924 [2024-10-11 12:06:05.384778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.924 [2024-10-11 12:06:05.384844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:02.924 [2024-10-11 12:06:05.384857] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:02.924 [2024-10-11 12:06:05.385139] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:02.924 [2024-10-11 12:06:05.385369] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.924 [2024-10-11 12:06:05.385381] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.924 [2024-10-11 12:06:05.385397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.924 [2024-10-11 12:06:05.388967] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:02.924 [2024-10-11 12:06:05.397973] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.924 [2024-10-11 12:06:05.398695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.924 [2024-10-11 12:06:05.398762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:02.924 [2024-10-11 12:06:05.398775] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:02.924 [2024-10-11 12:06:05.399031] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:02.924 [2024-10-11 12:06:05.399274] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.924 [2024-10-11 12:06:05.399287] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.924 [2024-10-11 12:06:05.399296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.924 [2024-10-11 12:06:05.402865] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:02.924 [2024-10-11 12:06:05.411869] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.924 [2024-10-11 12:06:05.412520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.924 [2024-10-11 12:06:05.412548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:02.924 [2024-10-11 12:06:05.412557] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:02.924 [2024-10-11 12:06:05.412782] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:02.924 [2024-10-11 12:06:05.413006] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.924 [2024-10-11 12:06:05.413018] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.924 [2024-10-11 12:06:05.413027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.924 [2024-10-11 12:06:05.416593] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:02.925 [2024-10-11 12:06:05.425799] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.925 [2024-10-11 12:06:05.426406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.925 [2024-10-11 12:06:05.426431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:02.925 [2024-10-11 12:06:05.426440] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:02.925 [2024-10-11 12:06:05.426662] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:02.925 [2024-10-11 12:06:05.426885] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.925 [2024-10-11 12:06:05.426898] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.925 [2024-10-11 12:06:05.426907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.925 [2024-10-11 12:06:05.430472] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:02.925 [2024-10-11 12:06:05.439674] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.925 [2024-10-11 12:06:05.440334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.925 [2024-10-11 12:06:05.440407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:02.925 [2024-10-11 12:06:05.440421] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:02.925 [2024-10-11 12:06:05.440677] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:02.925 [2024-10-11 12:06:05.440906] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.925 [2024-10-11 12:06:05.440918] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.925 [2024-10-11 12:06:05.440927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.925 [2024-10-11 12:06:05.444507] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:02.925 [2024-10-11 12:06:05.453511] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.925 [2024-10-11 12:06:05.454187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.925 [2024-10-11 12:06:05.454253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:02.925 [2024-10-11 12:06:05.454266] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:02.925 [2024-10-11 12:06:05.454523] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:02.925 [2024-10-11 12:06:05.454751] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.925 [2024-10-11 12:06:05.454763] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.925 [2024-10-11 12:06:05.454773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.925 [2024-10-11 12:06:05.458357] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:02.925 [2024-10-11 12:06:05.467382] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.925 [2024-10-11 12:06:05.468078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.925 [2024-10-11 12:06:05.468143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:02.925 [2024-10-11 12:06:05.468157] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:02.925 [2024-10-11 12:06:05.468413] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:02.925 [2024-10-11 12:06:05.468642] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.925 [2024-10-11 12:06:05.468656] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.925 [2024-10-11 12:06:05.468665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.925 9069.33 IOPS, 35.43 MiB/s [2024-10-11T10:06:05.628Z] [2024-10-11 12:06:05.473908] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:02.925 [2024-10-11 12:06:05.481250] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.925 [2024-10-11 12:06:05.481850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.925 [2024-10-11 12:06:05.481917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:02.925 [2024-10-11 12:06:05.481930] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:02.925 [2024-10-11 12:06:05.482201] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:02.925 [2024-10-11 12:06:05.482439] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.925 [2024-10-11 12:06:05.482451] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.925 [2024-10-11 12:06:05.482459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.925 [2024-10-11 12:06:05.486030] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:02.925 [2024-10-11 12:06:05.495057] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.925 [2024-10-11 12:06:05.495770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.925 [2024-10-11 12:06:05.495836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:02.925 [2024-10-11 12:06:05.495850] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:02.925 [2024-10-11 12:06:05.496119] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:02.925 [2024-10-11 12:06:05.496350] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.925 [2024-10-11 12:06:05.496361] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.925 [2024-10-11 12:06:05.496370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.925 [2024-10-11 12:06:05.499937] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:02.925 [2024-10-11 12:06:05.508946] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.925 [2024-10-11 12:06:05.509621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.925 [2024-10-11 12:06:05.509686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:02.925 [2024-10-11 12:06:05.509699] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:02.925 [2024-10-11 12:06:05.509955] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:02.925 [2024-10-11 12:06:05.510197] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.925 [2024-10-11 12:06:05.510210] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.925 [2024-10-11 12:06:05.510220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.925 [2024-10-11 12:06:05.513789] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:02.925 [2024-10-11 12:06:05.522793] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.925 [2024-10-11 12:06:05.523455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.925 [2024-10-11 12:06:05.523520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:02.925 [2024-10-11 12:06:05.523533] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:02.925 [2024-10-11 12:06:05.523790] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:02.925 [2024-10-11 12:06:05.524019] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.925 [2024-10-11 12:06:05.524031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.925 [2024-10-11 12:06:05.524040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.925 [2024-10-11 12:06:05.527627] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:02.925 [2024-10-11 12:06:05.536636] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.925 [2024-10-11 12:06:05.537222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.925 [2024-10-11 12:06:05.537252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:02.925 [2024-10-11 12:06:05.537263] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:02.925 [2024-10-11 12:06:05.537487] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:02.925 [2024-10-11 12:06:05.537710] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.925 [2024-10-11 12:06:05.537722] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.925 [2024-10-11 12:06:05.537731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.925 [2024-10-11 12:06:05.541295] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:02.925 [2024-10-11 12:06:05.550500] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.925 [2024-10-11 12:06:05.551098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.925 [2024-10-11 12:06:05.551124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:02.925 [2024-10-11 12:06:05.551133] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:02.925 [2024-10-11 12:06:05.551356] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:02.925 [2024-10-11 12:06:05.551579] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.925 [2024-10-11 12:06:05.551589] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.925 [2024-10-11 12:06:05.551600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.925 [2024-10-11 12:06:05.555163] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:02.925 [2024-10-11 12:06:05.564377] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.925 [2024-10-11 12:06:05.565095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.925 [2024-10-11 12:06:05.565161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:02.925 [2024-10-11 12:06:05.565175] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:02.925 [2024-10-11 12:06:05.565430] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:02.926 [2024-10-11 12:06:05.565659] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.926 [2024-10-11 12:06:05.565671] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.926 [2024-10-11 12:06:05.565680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.926 [2024-10-11 12:06:05.569261] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:02.926 [2024-10-11 12:06:05.578267] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.926 [2024-10-11 12:06:05.578950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.926 [2024-10-11 12:06:05.579015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:02.926 [2024-10-11 12:06:05.579037] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:02.926 [2024-10-11 12:06:05.579307] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:02.926 [2024-10-11 12:06:05.579537] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.926 [2024-10-11 12:06:05.579549] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.926 [2024-10-11 12:06:05.579558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.926 [2024-10-11 12:06:05.583123] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:02.926 [2024-10-11 12:06:05.592142] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.926 [2024-10-11 12:06:05.592817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.926 [2024-10-11 12:06:05.592882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:02.926 [2024-10-11 12:06:05.592895] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:02.926 [2024-10-11 12:06:05.593164] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:02.926 [2024-10-11 12:06:05.593394] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.926 [2024-10-11 12:06:05.593406] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.926 [2024-10-11 12:06:05.593416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.926 [2024-10-11 12:06:05.596985] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:02.926 [2024-10-11 12:06:05.605986] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.926 [2024-10-11 12:06:05.606652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.926 [2024-10-11 12:06:05.606719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:02.926 [2024-10-11 12:06:05.606732] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:02.926 [2024-10-11 12:06:05.606988] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:02.926 [2024-10-11 12:06:05.607232] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.926 [2024-10-11 12:06:05.607244] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.926 [2024-10-11 12:06:05.607253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.926 [2024-10-11 12:06:05.610828] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:02.926 [2024-10-11 12:06:05.619837] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.926 [2024-10-11 12:06:05.620530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.926 [2024-10-11 12:06:05.620596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:02.926 [2024-10-11 12:06:05.620609] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:02.926 [2024-10-11 12:06:05.620866] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:02.926 [2024-10-11 12:06:05.621108] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.926 [2024-10-11 12:06:05.621129] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.926 [2024-10-11 12:06:05.621138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.926 [2024-10-11 12:06:05.624709] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.188 [2024-10-11 12:06:05.633725] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.188 [2024-10-11 12:06:05.634441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.188 [2024-10-11 12:06:05.634507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.188 [2024-10-11 12:06:05.634520] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.188 [2024-10-11 12:06:05.634776] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.188 [2024-10-11 12:06:05.635005] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.188 [2024-10-11 12:06:05.635017] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.188 [2024-10-11 12:06:05.635026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.188 [2024-10-11 12:06:05.638608] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.188 [2024-10-11 12:06:05.647611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.188 [2024-10-11 12:06:05.648289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.188 [2024-10-11 12:06:05.648354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.188 [2024-10-11 12:06:05.648367] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.188 [2024-10-11 12:06:05.648623] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.188 [2024-10-11 12:06:05.648852] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.188 [2024-10-11 12:06:05.648864] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.188 [2024-10-11 12:06:05.648873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.188 [2024-10-11 12:06:05.652455] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.188 [2024-10-11 12:06:05.661461] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.188 [2024-10-11 12:06:05.662164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.188 [2024-10-11 12:06:05.662230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.188 [2024-10-11 12:06:05.662243] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.188 [2024-10-11 12:06:05.662500] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.188 [2024-10-11 12:06:05.662730] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.188 [2024-10-11 12:06:05.662742] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.188 [2024-10-11 12:06:05.662751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.188 [2024-10-11 12:06:05.666356] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.188 [2024-10-11 12:06:05.675387] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.188 [2024-10-11 12:06:05.675973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.188 [2024-10-11 12:06:05.676005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.188 [2024-10-11 12:06:05.676015] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.188 [2024-10-11 12:06:05.676258] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.188 [2024-10-11 12:06:05.676486] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.188 [2024-10-11 12:06:05.676499] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.188 [2024-10-11 12:06:05.676508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.188 [2024-10-11 12:06:05.680070] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.188 [2024-10-11 12:06:05.689293] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.188 [2024-10-11 12:06:05.689858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.188 [2024-10-11 12:06:05.689883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.188 [2024-10-11 12:06:05.689892] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.188 [2024-10-11 12:06:05.690123] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.188 [2024-10-11 12:06:05.690348] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.188 [2024-10-11 12:06:05.690360] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.188 [2024-10-11 12:06:05.690368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.188 [2024-10-11 12:06:05.693924] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.188 [2024-10-11 12:06:05.703125] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.188 [2024-10-11 12:06:05.703691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.188 [2024-10-11 12:06:05.703716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.188 [2024-10-11 12:06:05.703725] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.188 [2024-10-11 12:06:05.703946] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.188 [2024-10-11 12:06:05.704178] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.188 [2024-10-11 12:06:05.704189] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.188 [2024-10-11 12:06:05.704197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.188 [2024-10-11 12:06:05.707748] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.188 [2024-10-11 12:06:05.716959] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.188 [2024-10-11 12:06:05.717569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.188 [2024-10-11 12:06:05.717596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.188 [2024-10-11 12:06:05.717611] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.188 [2024-10-11 12:06:05.717835] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.189 [2024-10-11 12:06:05.718061] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.189 [2024-10-11 12:06:05.718086] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.189 [2024-10-11 12:06:05.718094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.189 [2024-10-11 12:06:05.721656] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.189 [2024-10-11 12:06:05.730898] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.189 [2024-10-11 12:06:05.731453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.189 [2024-10-11 12:06:05.731518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.189 [2024-10-11 12:06:05.731531] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.189 [2024-10-11 12:06:05.731787] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.189 [2024-10-11 12:06:05.732017] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.189 [2024-10-11 12:06:05.732029] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.189 [2024-10-11 12:06:05.732038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.189 [2024-10-11 12:06:05.735622] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.189 [2024-10-11 12:06:05.744758] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.189 [2024-10-11 12:06:05.745396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.189 [2024-10-11 12:06:05.745427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.189 [2024-10-11 12:06:05.745437] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.189 [2024-10-11 12:06:05.745661] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.189 [2024-10-11 12:06:05.745885] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.189 [2024-10-11 12:06:05.745898] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.189 [2024-10-11 12:06:05.745906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.189 [2024-10-11 12:06:05.749484] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.189 [2024-10-11 12:06:05.758714] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.189 [2024-10-11 12:06:05.759158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.189 [2024-10-11 12:06:05.759185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.189 [2024-10-11 12:06:05.759195] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.189 [2024-10-11 12:06:05.759421] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.189 [2024-10-11 12:06:05.759644] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.189 [2024-10-11 12:06:05.759656] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.189 [2024-10-11 12:06:05.759672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.189 [2024-10-11 12:06:05.763250] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.189 [2024-10-11 12:06:05.772719] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.189 [2024-10-11 12:06:05.773437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.189 [2024-10-11 12:06:05.773504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.189 [2024-10-11 12:06:05.773517] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.189 [2024-10-11 12:06:05.773773] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.189 [2024-10-11 12:06:05.774003] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.189 [2024-10-11 12:06:05.774015] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.189 [2024-10-11 12:06:05.774024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.189 [2024-10-11 12:06:05.777628] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.189 [2024-10-11 12:06:05.786684] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.189 [2024-10-11 12:06:05.787371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.189 [2024-10-11 12:06:05.787436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.189 [2024-10-11 12:06:05.787449] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.189 [2024-10-11 12:06:05.787706] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.189 [2024-10-11 12:06:05.787936] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.189 [2024-10-11 12:06:05.787947] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.189 [2024-10-11 12:06:05.787957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.189 [2024-10-11 12:06:05.791542] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.189 [2024-10-11 12:06:05.800565] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.189 [2024-10-11 12:06:05.801166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.189 [2024-10-11 12:06:05.801197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.189 [2024-10-11 12:06:05.801207] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.189 [2024-10-11 12:06:05.801433] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.189 [2024-10-11 12:06:05.801656] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.189 [2024-10-11 12:06:05.801668] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.189 [2024-10-11 12:06:05.801679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.189 [2024-10-11 12:06:05.805251] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.189 [2024-10-11 12:06:05.814487] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.189 [2024-10-11 12:06:05.815107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.189 [2024-10-11 12:06:05.815133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.189 [2024-10-11 12:06:05.815142] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.189 [2024-10-11 12:06:05.815364] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.189 [2024-10-11 12:06:05.815587] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.189 [2024-10-11 12:06:05.815603] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.189 [2024-10-11 12:06:05.815614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.189 [2024-10-11 12:06:05.819186] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.189 [2024-10-11 12:06:05.828402] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.189 [2024-10-11 12:06:05.828959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.189 [2024-10-11 12:06:05.828982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.189 [2024-10-11 12:06:05.828991] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.189 [2024-10-11 12:06:05.829220] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.189 [2024-10-11 12:06:05.829443] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.189 [2024-10-11 12:06:05.829457] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.189 [2024-10-11 12:06:05.829465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.189 [2024-10-11 12:06:05.833018] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.189 [2024-10-11 12:06:05.842239] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.189 [2024-10-11 12:06:05.842886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.189 [2024-10-11 12:06:05.842952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.189 [2024-10-11 12:06:05.842965] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.189 [2024-10-11 12:06:05.843235] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.189 [2024-10-11 12:06:05.843465] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.189 [2024-10-11 12:06:05.843477] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.189 [2024-10-11 12:06:05.843486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.189 [2024-10-11 12:06:05.847058] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.189 [2024-10-11 12:06:05.856084] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.189 [2024-10-11 12:06:05.856805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.189 [2024-10-11 12:06:05.856870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.189 [2024-10-11 12:06:05.856883] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.189 [2024-10-11 12:06:05.857161] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.189 [2024-10-11 12:06:05.857391] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.189 [2024-10-11 12:06:05.857403] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.189 [2024-10-11 12:06:05.857412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.189 [2024-10-11 12:06:05.860987] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.189 [2024-10-11 12:06:05.870024] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.189 [2024-10-11 12:06:05.870638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.189 [2024-10-11 12:06:05.870705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.190 [2024-10-11 12:06:05.870718] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.190 [2024-10-11 12:06:05.870974] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.190 [2024-10-11 12:06:05.871218] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.190 [2024-10-11 12:06:05.871231] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.190 [2024-10-11 12:06:05.871240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.190 [2024-10-11 12:06:05.874820] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.190 [2024-10-11 12:06:05.883858] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.190 [2024-10-11 12:06:05.884577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.190 [2024-10-11 12:06:05.884643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.190 [2024-10-11 12:06:05.884657] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.190 [2024-10-11 12:06:05.884914] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.190 [2024-10-11 12:06:05.885171] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.190 [2024-10-11 12:06:05.885184] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.190 [2024-10-11 12:06:05.885194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.190 [2024-10-11 12:06:05.888774] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.451 [2024-10-11 12:06:05.897801] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.451 [2024-10-11 12:06:05.898440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.451 [2024-10-11 12:06:05.898468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.451 [2024-10-11 12:06:05.898479] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.451 [2024-10-11 12:06:05.898704] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.451 [2024-10-11 12:06:05.898927] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.451 [2024-10-11 12:06:05.898940] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.451 [2024-10-11 12:06:05.898958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.451 [2024-10-11 12:06:05.902528] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.451 [2024-10-11 12:06:05.911755] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.451 [2024-10-11 12:06:05.912374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.451 [2024-10-11 12:06:05.912400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.451 [2024-10-11 12:06:05.912410] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.451 [2024-10-11 12:06:05.912633] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.451 [2024-10-11 12:06:05.912856] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.451 [2024-10-11 12:06:05.912871] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.451 [2024-10-11 12:06:05.912881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.451 [2024-10-11 12:06:05.916452] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.451 [2024-10-11 12:06:05.925676] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.451 [2024-10-11 12:06:05.926244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.451 [2024-10-11 12:06:05.926267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.451 [2024-10-11 12:06:05.926276] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.451 [2024-10-11 12:06:05.926498] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.451 [2024-10-11 12:06:05.926720] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.451 [2024-10-11 12:06:05.926732] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.451 [2024-10-11 12:06:05.926740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.451 [2024-10-11 12:06:05.930369] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.451 [2024-10-11 12:06:05.939602] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.451 [2024-10-11 12:06:05.940193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.451 [2024-10-11 12:06:05.940260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.451 [2024-10-11 12:06:05.940275] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.451 [2024-10-11 12:06:05.940532] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.452 [2024-10-11 12:06:05.940761] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.452 [2024-10-11 12:06:05.940772] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.452 [2024-10-11 12:06:05.940781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.452 [2024-10-11 12:06:05.944373] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.452 [2024-10-11 12:06:05.953596] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.452 [2024-10-11 12:06:05.954372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.452 [2024-10-11 12:06:05.954451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.452 [2024-10-11 12:06:05.954465] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.452 [2024-10-11 12:06:05.954722] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.452 [2024-10-11 12:06:05.954951] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.452 [2024-10-11 12:06:05.954963] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.452 [2024-10-11 12:06:05.954972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.452 [2024-10-11 12:06:05.958562] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.452 [2024-10-11 12:06:05.967594] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.452 [2024-10-11 12:06:05.968195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.452 [2024-10-11 12:06:05.968262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.452 [2024-10-11 12:06:05.968277] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.452 [2024-10-11 12:06:05.968535] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.452 [2024-10-11 12:06:05.968764] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.452 [2024-10-11 12:06:05.968776] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.452 [2024-10-11 12:06:05.968784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.452 [2024-10-11 12:06:05.972374] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.452 [2024-10-11 12:06:05.981604] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.452 [2024-10-11 12:06:05.982177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.452 [2024-10-11 12:06:05.982243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.452 [2024-10-11 12:06:05.982257] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.452 [2024-10-11 12:06:05.982513] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.452 [2024-10-11 12:06:05.982741] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.452 [2024-10-11 12:06:05.982753] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.452 [2024-10-11 12:06:05.982762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.452 [2024-10-11 12:06:05.986376] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.452 [2024-10-11 12:06:05.995742] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.452 [2024-10-11 12:06:05.996434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.452 [2024-10-11 12:06:05.996501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.452 [2024-10-11 12:06:05.996515] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.452 [2024-10-11 12:06:05.996771] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.452 [2024-10-11 12:06:05.997008] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.452 [2024-10-11 12:06:05.997020] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.452 [2024-10-11 12:06:05.997029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.452 [2024-10-11 12:06:06.000616] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.452 [2024-10-11 12:06:06.009634] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.452 [2024-10-11 12:06:06.010392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.452 [2024-10-11 12:06:06.010458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.452 [2024-10-11 12:06:06.010471] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.452 [2024-10-11 12:06:06.010727] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.452 [2024-10-11 12:06:06.010957] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.452 [2024-10-11 12:06:06.010969] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.452 [2024-10-11 12:06:06.010977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.452 [2024-10-11 12:06:06.014562] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.452 [2024-10-11 12:06:06.023581] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.452 [2024-10-11 12:06:06.024200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.452 [2024-10-11 12:06:06.024266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.452 [2024-10-11 12:06:06.024279] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.452 [2024-10-11 12:06:06.024535] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.452 [2024-10-11 12:06:06.024765] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.452 [2024-10-11 12:06:06.024776] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.452 [2024-10-11 12:06:06.024785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.452 [2024-10-11 12:06:06.028374] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.452 [2024-10-11 12:06:06.037390] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.452 [2024-10-11 12:06:06.038017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.452 [2024-10-11 12:06:06.038047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.452 [2024-10-11 12:06:06.038057] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.452 [2024-10-11 12:06:06.038292] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.452 [2024-10-11 12:06:06.038516] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.452 [2024-10-11 12:06:06.038529] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.452 [2024-10-11 12:06:06.038537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.452 [2024-10-11 12:06:06.042111] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.452 [2024-10-11 12:06:06.051346] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.452 [2024-10-11 12:06:06.052032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.452 [2024-10-11 12:06:06.052109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.452 [2024-10-11 12:06:06.052124] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.452 [2024-10-11 12:06:06.052380] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.452 [2024-10-11 12:06:06.052611] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.452 [2024-10-11 12:06:06.052624] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.452 [2024-10-11 12:06:06.052635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.452 [2024-10-11 12:06:06.056223] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.452 [2024-10-11 12:06:06.065234] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.452 [2024-10-11 12:06:06.065915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.452 [2024-10-11 12:06:06.065981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.452 [2024-10-11 12:06:06.065995] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.452 [2024-10-11 12:06:06.066263] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.452 [2024-10-11 12:06:06.066508] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.452 [2024-10-11 12:06:06.066521] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.452 [2024-10-11 12:06:06.066530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.452 [2024-10-11 12:06:06.070103] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.452 [2024-10-11 12:06:06.079255] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.452 [2024-10-11 12:06:06.079980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.452 [2024-10-11 12:06:06.080046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.452 [2024-10-11 12:06:06.080059] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.452 [2024-10-11 12:06:06.080332] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.452 [2024-10-11 12:06:06.080561] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.452 [2024-10-11 12:06:06.080573] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.452 [2024-10-11 12:06:06.080582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.452 [2024-10-11 12:06:06.084158] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.452 [2024-10-11 12:06:06.093206] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.452 [2024-10-11 12:06:06.093887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.453 [2024-10-11 12:06:06.093952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.453 [2024-10-11 12:06:06.093973] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.453 [2024-10-11 12:06:06.094244] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.453 [2024-10-11 12:06:06.094474] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.453 [2024-10-11 12:06:06.094486] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.453 [2024-10-11 12:06:06.094495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.453 [2024-10-11 12:06:06.098071] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.453 [2024-10-11 12:06:06.107087] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.453 [2024-10-11 12:06:06.107809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.453 [2024-10-11 12:06:06.107875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.453 [2024-10-11 12:06:06.107888] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.453 [2024-10-11 12:06:06.108158] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.453 [2024-10-11 12:06:06.108388] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.453 [2024-10-11 12:06:06.108400] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.453 [2024-10-11 12:06:06.108409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.453 [2024-10-11 12:06:06.111979] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.453 [2024-10-11 12:06:06.120997] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.453 [2024-10-11 12:06:06.121692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.453 [2024-10-11 12:06:06.121758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.453 [2024-10-11 12:06:06.121772] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.453 [2024-10-11 12:06:06.122028] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.453 [2024-10-11 12:06:06.122274] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.453 [2024-10-11 12:06:06.122286] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.453 [2024-10-11 12:06:06.122296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.453 [2024-10-11 12:06:06.125873] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.453 [2024-10-11 12:06:06.134887] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.453 [2024-10-11 12:06:06.135502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.453 [2024-10-11 12:06:06.135564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.453 [2024-10-11 12:06:06.135577] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.453 [2024-10-11 12:06:06.135833] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.453 [2024-10-11 12:06:06.136077] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.453 [2024-10-11 12:06:06.136096] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.453 [2024-10-11 12:06:06.136105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.453 [2024-10-11 12:06:06.139683] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.453 [2024-10-11 12:06:06.148724] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.453 [2024-10-11 12:06:06.149308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.453 [2024-10-11 12:06:06.149337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.453 [2024-10-11 12:06:06.149347] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.453 [2024-10-11 12:06:06.149572] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.453 [2024-10-11 12:06:06.149796] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.453 [2024-10-11 12:06:06.149809] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.453 [2024-10-11 12:06:06.149817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.453 [2024-10-11 12:06:06.153393] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.715 [2024-10-11 12:06:06.162616] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.715 [2024-10-11 12:06:06.163148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.715 [2024-10-11 12:06:06.163173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.715 [2024-10-11 12:06:06.163183] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.715 [2024-10-11 12:06:06.163406] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.715 [2024-10-11 12:06:06.163630] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.715 [2024-10-11 12:06:06.163642] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.715 [2024-10-11 12:06:06.163650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.715 [2024-10-11 12:06:06.167233] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.715 [2024-10-11 12:06:06.176452] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.715 [2024-10-11 12:06:06.177007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.715 [2024-10-11 12:06:06.177082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.715 [2024-10-11 12:06:06.177097] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.715 [2024-10-11 12:06:06.177353] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.715 [2024-10-11 12:06:06.177582] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.715 [2024-10-11 12:06:06.177595] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.715 [2024-10-11 12:06:06.177604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.715 [2024-10-11 12:06:06.181187] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.715 [2024-10-11 12:06:06.190444] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.715 [2024-10-11 12:06:06.191127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.715 [2024-10-11 12:06:06.191193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.715 [2024-10-11 12:06:06.191208] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.715 [2024-10-11 12:06:06.191465] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.715 [2024-10-11 12:06:06.191695] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.715 [2024-10-11 12:06:06.191706] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.715 [2024-10-11 12:06:06.191715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.715 [2024-10-11 12:06:06.195311] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.715 [2024-10-11 12:06:06.204329] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.715 [2024-10-11 12:06:06.205022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.715 [2024-10-11 12:06:06.205098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.715 [2024-10-11 12:06:06.205113] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.715 [2024-10-11 12:06:06.205369] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.715 [2024-10-11 12:06:06.205597] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.715 [2024-10-11 12:06:06.205609] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.715 [2024-10-11 12:06:06.205618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.715 [2024-10-11 12:06:06.209202] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.715 [2024-10-11 12:06:06.218220] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.715 [2024-10-11 12:06:06.218885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.715 [2024-10-11 12:06:06.218952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.715 [2024-10-11 12:06:06.218965] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.715 [2024-10-11 12:06:06.219235] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.715 [2024-10-11 12:06:06.219466] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.715 [2024-10-11 12:06:06.219477] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.715 [2024-10-11 12:06:06.219486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.715 [2024-10-11 12:06:06.223053] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.715 [2024-10-11 12:06:06.232083] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.715 [2024-10-11 12:06:06.232692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.715 [2024-10-11 12:06:06.232758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.715 [2024-10-11 12:06:06.232771] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.715 [2024-10-11 12:06:06.233035] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.715 [2024-10-11 12:06:06.233279] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.715 [2024-10-11 12:06:06.233292] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.715 [2024-10-11 12:06:06.233301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.715 [2024-10-11 12:06:06.236880] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.715 [2024-10-11 12:06:06.245907] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.715 [2024-10-11 12:06:06.246591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.715 [2024-10-11 12:06:06.246656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.715 [2024-10-11 12:06:06.246670] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.715 [2024-10-11 12:06:06.246926] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.715 [2024-10-11 12:06:06.247167] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.715 [2024-10-11 12:06:06.247180] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.715 [2024-10-11 12:06:06.247190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.715 [2024-10-11 12:06:06.250766] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.715 [2024-10-11 12:06:06.259784] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.715 [2024-10-11 12:06:06.260441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.715 [2024-10-11 12:06:06.260506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.715 [2024-10-11 12:06:06.260520] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.715 [2024-10-11 12:06:06.260775] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.715 [2024-10-11 12:06:06.261004] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.715 [2024-10-11 12:06:06.261015] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.715 [2024-10-11 12:06:06.261024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.715 [2024-10-11 12:06:06.264613] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.715 [2024-10-11 12:06:06.273663] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.715 [2024-10-11 12:06:06.274275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.715 [2024-10-11 12:06:06.274341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.715 [2024-10-11 12:06:06.274354] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.715 [2024-10-11 12:06:06.274610] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.715 [2024-10-11 12:06:06.274840] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.715 [2024-10-11 12:06:06.274851] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.715 [2024-10-11 12:06:06.274868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.715 [2024-10-11 12:06:06.278459] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.715 [2024-10-11 12:06:06.287507] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.715 [2024-10-11 12:06:06.288179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.715 [2024-10-11 12:06:06.288210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.715 [2024-10-11 12:06:06.288219] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.715 [2024-10-11 12:06:06.288443] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.715 [2024-10-11 12:06:06.288667] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.716 [2024-10-11 12:06:06.288679] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.716 [2024-10-11 12:06:06.288687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.716 [2024-10-11 12:06:06.292257] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.716 [2024-10-11 12:06:06.301489] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.716 [2024-10-11 12:06:06.302059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.716 [2024-10-11 12:06:06.302132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.716 [2024-10-11 12:06:06.302146] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.716 [2024-10-11 12:06:06.302402] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.716 [2024-10-11 12:06:06.302631] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.716 [2024-10-11 12:06:06.302643] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.716 [2024-10-11 12:06:06.302652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.716 [2024-10-11 12:06:06.306145] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.716 [2024-10-11 12:06:06.314211] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.716 [2024-10-11 12:06:06.314774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.716 [2024-10-11 12:06:06.314799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.716 [2024-10-11 12:06:06.314806] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.716 [2024-10-11 12:06:06.314962] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.716 [2024-10-11 12:06:06.315127] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.716 [2024-10-11 12:06:06.315138] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.716 [2024-10-11 12:06:06.315145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.716 [2024-10-11 12:06:06.317598] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.716 [2024-10-11 12:06:06.326939] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.716 [2024-10-11 12:06:06.327487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.716 [2024-10-11 12:06:06.327506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.716 [2024-10-11 12:06:06.327512] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.716 [2024-10-11 12:06:06.327666] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.716 [2024-10-11 12:06:06.327820] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.716 [2024-10-11 12:06:06.327828] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.716 [2024-10-11 12:06:06.327834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.716 [2024-10-11 12:06:06.330289] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.716 [2024-10-11 12:06:06.339625] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.716 [2024-10-11 12:06:06.340173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.716 [2024-10-11 12:06:06.340222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.716 [2024-10-11 12:06:06.340232] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.716 [2024-10-11 12:06:06.340412] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.716 [2024-10-11 12:06:06.340570] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.716 [2024-10-11 12:06:06.340578] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.716 [2024-10-11 12:06:06.340584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.716 [2024-10-11 12:06:06.343042] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.716 [2024-10-11 12:06:06.352244] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.716 [2024-10-11 12:06:06.352729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.716 [2024-10-11 12:06:06.352749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.716 [2024-10-11 12:06:06.352755] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.716 [2024-10-11 12:06:06.352909] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.716 [2024-10-11 12:06:06.353068] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.716 [2024-10-11 12:06:06.353076] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.716 [2024-10-11 12:06:06.353082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.716 [2024-10-11 12:06:06.355528] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.716 [2024-10-11 12:06:06.364998] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.716 [2024-10-11 12:06:06.365505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.716 [2024-10-11 12:06:06.365522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.716 [2024-10-11 12:06:06.365528] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.716 [2024-10-11 12:06:06.365685] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.716 [2024-10-11 12:06:06.365839] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.716 [2024-10-11 12:06:06.365847] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.716 [2024-10-11 12:06:06.365853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.716 [2024-10-11 12:06:06.368308] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.716 [2024-10-11 12:06:06.377642] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.716 [2024-10-11 12:06:06.378133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.716 [2024-10-11 12:06:06.378161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.716 [2024-10-11 12:06:06.378168] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.716 [2024-10-11 12:06:06.378330] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.716 [2024-10-11 12:06:06.378487] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.716 [2024-10-11 12:06:06.378494] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.716 [2024-10-11 12:06:06.378500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.716 [2024-10-11 12:06:06.380945] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.716 [2024-10-11 12:06:06.390282] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.716 [2024-10-11 12:06:06.390776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.716 [2024-10-11 12:06:06.390792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.716 [2024-10-11 12:06:06.390799] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.716 [2024-10-11 12:06:06.390951] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.716 [2024-10-11 12:06:06.391110] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.716 [2024-10-11 12:06:06.391118] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.716 [2024-10-11 12:06:06.391125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.716 [2024-10-11 12:06:06.393557] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.716 [2024-10-11 12:06:06.402898] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.716 [2024-10-11 12:06:06.403475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.716 [2024-10-11 12:06:06.403511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.716 [2024-10-11 12:06:06.403521] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.716 [2024-10-11 12:06:06.403692] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.716 [2024-10-11 12:06:06.403847] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.716 [2024-10-11 12:06:06.403854] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.716 [2024-10-11 12:06:06.403864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.716 [2024-10-11 12:06:06.406327] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.716 [2024-10-11 12:06:06.415648] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.716 [2024-10-11 12:06:06.416152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.716 [2024-10-11 12:06:06.416169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.716 [2024-10-11 12:06:06.416176] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.716 [2024-10-11 12:06:06.416328] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.716 [2024-10-11 12:06:06.416480] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.716 [2024-10-11 12:06:06.416487] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.716 [2024-10-11 12:06:06.416493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.985 [2024-10-11 12:06:06.418930] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.985 [2024-10-11 12:06:06.428398] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.985 [2024-10-11 12:06:06.428855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.985 [2024-10-11 12:06:06.428869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.985 [2024-10-11 12:06:06.428875] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.985 [2024-10-11 12:06:06.429026] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.985 [2024-10-11 12:06:06.429183] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.986 [2024-10-11 12:06:06.429190] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.986 [2024-10-11 12:06:06.429196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.986 [2024-10-11 12:06:06.431629] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.986 [2024-10-11 12:06:06.441090] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.986 [2024-10-11 12:06:06.441540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.986 [2024-10-11 12:06:06.441553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.986 [2024-10-11 12:06:06.441558] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.986 [2024-10-11 12:06:06.441709] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.986 [2024-10-11 12:06:06.441861] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.986 [2024-10-11 12:06:06.441868] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.986 [2024-10-11 12:06:06.441874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.986 [2024-10-11 12:06:06.444309] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.986 [2024-10-11 12:06:06.453769] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.986 [2024-10-11 12:06:06.454227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.986 [2024-10-11 12:06:06.454244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.986 [2024-10-11 12:06:06.454249] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.986 [2024-10-11 12:06:06.454401] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.986 [2024-10-11 12:06:06.454552] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.986 [2024-10-11 12:06:06.454559] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.986 [2024-10-11 12:06:06.454564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.986 [2024-10-11 12:06:06.456997] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.986 [2024-10-11 12:06:06.466460] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.986 [2024-10-11 12:06:06.466908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.986 [2024-10-11 12:06:06.466921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.986 [2024-10-11 12:06:06.466926] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.986 [2024-10-11 12:06:06.467081] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.986 [2024-10-11 12:06:06.467232] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.986 [2024-10-11 12:06:06.467239] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.986 [2024-10-11 12:06:06.467244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.986 [2024-10-11 12:06:06.469682] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.986 6802.00 IOPS, 26.57 MiB/s [2024-10-11T10:06:06.689Z] [2024-10-11 12:06:06.479138] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.986 [2024-10-11 12:06:06.479633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.986 [2024-10-11 12:06:06.479646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.986 [2024-10-11 12:06:06.479652] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.986 [2024-10-11 12:06:06.479803] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.986 [2024-10-11 12:06:06.479955] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.986 [2024-10-11 12:06:06.479961] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.986 [2024-10-11 12:06:06.479966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.986 [2024-10-11 12:06:06.482401] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
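Note on the throughput sample above: the "6802.00 IOPS, 26.57 MiB/s" entry is a periodic bdevperf-style statistics report emitted while the reconnect attempts keep failing. The ratio of the two figures implies an I/O size of roughly 4 KiB (26.57 MiB/s divided by 6802 IOPS is about 4096 bytes); that I/O size is an inference from the numbers, not something the log states. A minimal standalone C sketch of the conversion, under that assumed 4 KiB I/O size:

/* Minimal sketch, not taken from the SPDK sources: reproduces the
 * bdevperf-style line "6802.00 IOPS, 26.57 MiB/s" seen above.
 * The 4 KiB I/O size is an assumption inferred from the ratio of the
 * two reported figures. */
#include <stdio.h>

int main(void)
{
    double iops    = 6802.00;   /* completed I/Os per second, from the log line */
    double io_size = 4096.0;    /* assumed I/O size in bytes (4 KiB)            */
    double mib_per_s = iops * io_size / (1024.0 * 1024.0);

    printf("%.2f IOPS, %.2f MiB/s\n", iops, mib_per_s);
    return 0;
}

Compiled and run, this prints "6802.00 IOPS, 26.57 MiB/s", matching the sampled line; if the run actually used a different I/O size, the same arithmetic applies with that size substituted.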
00:29:03.986 [2024-10-11 12:06:06.491868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.986 [2024-10-11 12:06:06.492355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.986 [2024-10-11 12:06:06.492368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.986 [2024-10-11 12:06:06.492373] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.986 [2024-10-11 12:06:06.492523] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.986 [2024-10-11 12:06:06.492678] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.986 [2024-10-11 12:06:06.492685] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.986 [2024-10-11 12:06:06.492690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.986 [2024-10-11 12:06:06.495124] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.986 [2024-10-11 12:06:06.504584] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.986 [2024-10-11 12:06:06.505067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.986 [2024-10-11 12:06:06.505080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.986 [2024-10-11 12:06:06.505086] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.986 [2024-10-11 12:06:06.505236] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.986 [2024-10-11 12:06:06.505387] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.986 [2024-10-11 12:06:06.505394] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.986 [2024-10-11 12:06:06.505399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.986 [2024-10-11 12:06:06.507831] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.986 [2024-10-11 12:06:06.517292] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.986 [2024-10-11 12:06:06.517873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.986 [2024-10-11 12:06:06.517905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.986 [2024-10-11 12:06:06.517914] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.986 [2024-10-11 12:06:06.518087] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.986 [2024-10-11 12:06:06.518242] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.986 [2024-10-11 12:06:06.518249] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.986 [2024-10-11 12:06:06.518255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.986 [2024-10-11 12:06:06.520690] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.986 [2024-10-11 12:06:06.530009] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.986 [2024-10-11 12:06:06.530503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.986 [2024-10-11 12:06:06.530518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.986 [2024-10-11 12:06:06.530524] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.986 [2024-10-11 12:06:06.530676] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.986 [2024-10-11 12:06:06.530828] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.986 [2024-10-11 12:06:06.530836] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.986 [2024-10-11 12:06:06.530842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.986 [2024-10-11 12:06:06.533281] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.986 [2024-10-11 12:06:06.542739] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.986 [2024-10-11 12:06:06.543197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.986 [2024-10-11 12:06:06.543211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.986 [2024-10-11 12:06:06.543216] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.986 [2024-10-11 12:06:06.543367] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.986 [2024-10-11 12:06:06.543519] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.986 [2024-10-11 12:06:06.543526] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.986 [2024-10-11 12:06:06.543531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.986 [2024-10-11 12:06:06.545963] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.986 [2024-10-11 12:06:06.555416] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.986 [2024-10-11 12:06:06.555902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.986 [2024-10-11 12:06:06.555915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.986 [2024-10-11 12:06:06.555921] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.986 [2024-10-11 12:06:06.556076] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.986 [2024-10-11 12:06:06.556228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.986 [2024-10-11 12:06:06.556235] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.986 [2024-10-11 12:06:06.556241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.986 [2024-10-11 12:06:06.558671] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.986 [2024-10-11 12:06:06.568140] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.986 [2024-10-11 12:06:06.568577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.987 [2024-10-11 12:06:06.568589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.987 [2024-10-11 12:06:06.568594] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.987 [2024-10-11 12:06:06.568745] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.987 [2024-10-11 12:06:06.568896] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.987 [2024-10-11 12:06:06.568903] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.987 [2024-10-11 12:06:06.568908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.987 [2024-10-11 12:06:06.571342] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.987 [2024-10-11 12:06:06.580796] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.987 [2024-10-11 12:06:06.581322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.987 [2024-10-11 12:06:06.581355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.987 [2024-10-11 12:06:06.581366] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.987 [2024-10-11 12:06:06.581533] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.987 [2024-10-11 12:06:06.581688] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.987 [2024-10-11 12:06:06.581695] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.987 [2024-10-11 12:06:06.581701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.987 [2024-10-11 12:06:06.584144] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.987 [2024-10-11 12:06:06.593470] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.987 [2024-10-11 12:06:06.594029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.987 [2024-10-11 12:06:06.594061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.987 [2024-10-11 12:06:06.594076] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.987 [2024-10-11 12:06:06.594244] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.987 [2024-10-11 12:06:06.594399] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.987 [2024-10-11 12:06:06.594406] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.987 [2024-10-11 12:06:06.594412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.987 [2024-10-11 12:06:06.596847] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.987 [2024-10-11 12:06:06.606164] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.987 [2024-10-11 12:06:06.606620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.987 [2024-10-11 12:06:06.606636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.987 [2024-10-11 12:06:06.606642] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.987 [2024-10-11 12:06:06.606793] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.987 [2024-10-11 12:06:06.606945] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.987 [2024-10-11 12:06:06.606952] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.987 [2024-10-11 12:06:06.606957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.987 [2024-10-11 12:06:06.609396] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.987 [2024-10-11 12:06:06.618850] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.987 [2024-10-11 12:06:06.619407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.987 [2024-10-11 12:06:06.619439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.987 [2024-10-11 12:06:06.619448] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.987 [2024-10-11 12:06:06.619614] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.987 [2024-10-11 12:06:06.619769] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.987 [2024-10-11 12:06:06.619782] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.987 [2024-10-11 12:06:06.619788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.987 [2024-10-11 12:06:06.622232] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.987 [2024-10-11 12:06:06.631552] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.987 [2024-10-11 12:06:06.632117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.987 [2024-10-11 12:06:06.632149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.987 [2024-10-11 12:06:06.632159] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.987 [2024-10-11 12:06:06.632328] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.987 [2024-10-11 12:06:06.632483] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.987 [2024-10-11 12:06:06.632490] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.987 [2024-10-11 12:06:06.632496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.987 [2024-10-11 12:06:06.634936] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.987 [2024-10-11 12:06:06.644258] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.987 [2024-10-11 12:06:06.644712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.987 [2024-10-11 12:06:06.644727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.987 [2024-10-11 12:06:06.644733] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.987 [2024-10-11 12:06:06.644883] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.987 [2024-10-11 12:06:06.645035] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.987 [2024-10-11 12:06:06.645042] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.987 [2024-10-11 12:06:06.645048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.987 [2024-10-11 12:06:06.647484] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.987 [2024-10-11 12:06:06.656937] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.987 [2024-10-11 12:06:06.657486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.987 [2024-10-11 12:06:06.657499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.987 [2024-10-11 12:06:06.657505] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.987 [2024-10-11 12:06:06.657656] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.987 [2024-10-11 12:06:06.657807] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.987 [2024-10-11 12:06:06.657814] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.987 [2024-10-11 12:06:06.657819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.987 [2024-10-11 12:06:06.660251] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.987 [2024-10-11 12:06:06.669572] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.987 [2024-10-11 12:06:06.670034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.987 [2024-10-11 12:06:06.670047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.987 [2024-10-11 12:06:06.670052] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.987 [2024-10-11 12:06:06.670207] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.987 [2024-10-11 12:06:06.670360] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.987 [2024-10-11 12:06:06.670366] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.987 [2024-10-11 12:06:06.670371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.987 [2024-10-11 12:06:06.672801] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.987 [2024-10-11 12:06:06.682254] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.987 [2024-10-11 12:06:06.682752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.987 [2024-10-11 12:06:06.682784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:03.987 [2024-10-11 12:06:06.682793] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:03.987 [2024-10-11 12:06:06.682960] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:03.987 [2024-10-11 12:06:06.683121] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.987 [2024-10-11 12:06:06.683129] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.987 [2024-10-11 12:06:06.683134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.987 [2024-10-11 12:06:06.685570] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.250 [2024-10-11 12:06:06.694893] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.250 [2024-10-11 12:06:06.695175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.250 [2024-10-11 12:06:06.695192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.250 [2024-10-11 12:06:06.695198] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.250 [2024-10-11 12:06:06.695350] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.250 [2024-10-11 12:06:06.695501] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.250 [2024-10-11 12:06:06.695508] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.250 [2024-10-11 12:06:06.695513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.250 [2024-10-11 12:06:06.697947] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.250 [2024-10-11 12:06:06.707545] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.250 [2024-10-11 12:06:06.708140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.250 [2024-10-11 12:06:06.708172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.250 [2024-10-11 12:06:06.708181] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.250 [2024-10-11 12:06:06.708353] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.250 [2024-10-11 12:06:06.708508] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.250 [2024-10-11 12:06:06.708515] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.250 [2024-10-11 12:06:06.708521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.250 [2024-10-11 12:06:06.710961] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.250 [2024-10-11 12:06:06.720283] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.250 [2024-10-11 12:06:06.720874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.250 [2024-10-11 12:06:06.720906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.250 [2024-10-11 12:06:06.720915] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.250 [2024-10-11 12:06:06.721088] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.250 [2024-10-11 12:06:06.721242] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.250 [2024-10-11 12:06:06.721249] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.250 [2024-10-11 12:06:06.721255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.250 [2024-10-11 12:06:06.723690] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.250 [2024-10-11 12:06:06.733001] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.250 [2024-10-11 12:06:06.733610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.250 [2024-10-11 12:06:06.733641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.250 [2024-10-11 12:06:06.733650] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.250 [2024-10-11 12:06:06.733817] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.250 [2024-10-11 12:06:06.733971] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.250 [2024-10-11 12:06:06.733978] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.250 [2024-10-11 12:06:06.733984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.250 [2024-10-11 12:06:06.736427] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.250 [2024-10-11 12:06:06.745742] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.250 [2024-10-11 12:06:06.746355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.250 [2024-10-11 12:06:06.746387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.250 [2024-10-11 12:06:06.746396] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.250 [2024-10-11 12:06:06.746565] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.250 [2024-10-11 12:06:06.746719] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.250 [2024-10-11 12:06:06.746726] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.250 [2024-10-11 12:06:06.746736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.250 [2024-10-11 12:06:06.749179] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.250 [2024-10-11 12:06:06.758352] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.250 [2024-10-11 12:06:06.758805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.250 [2024-10-11 12:06:06.758820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.250 [2024-10-11 12:06:06.758825] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.250 [2024-10-11 12:06:06.758976] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.250 [2024-10-11 12:06:06.759133] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.250 [2024-10-11 12:06:06.759146] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.250 [2024-10-11 12:06:06.759152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.250 [2024-10-11 12:06:06.761668] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.250 [2024-10-11 12:06:06.770992] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.250 [2024-10-11 12:06:06.771609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.250 [2024-10-11 12:06:06.771641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.250 [2024-10-11 12:06:06.771650] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.250 [2024-10-11 12:06:06.771817] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.250 [2024-10-11 12:06:06.771972] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.250 [2024-10-11 12:06:06.771979] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.250 [2024-10-11 12:06:06.771985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.251 [2024-10-11 12:06:06.774428] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.251 [2024-10-11 12:06:06.783603] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.251 [2024-10-11 12:06:06.784192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.251 [2024-10-11 12:06:06.784224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.251 [2024-10-11 12:06:06.784233] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.251 [2024-10-11 12:06:06.784401] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.251 [2024-10-11 12:06:06.784556] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.251 [2024-10-11 12:06:06.784562] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.251 [2024-10-11 12:06:06.784568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.251 [2024-10-11 12:06:06.787009] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.251 [2024-10-11 12:06:06.796336] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.251 [2024-10-11 12:06:06.796778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.251 [2024-10-11 12:06:06.796810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.251 [2024-10-11 12:06:06.796819] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.251 [2024-10-11 12:06:06.796987] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.251 [2024-10-11 12:06:06.797147] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.251 [2024-10-11 12:06:06.797154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.251 [2024-10-11 12:06:06.797160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.251 [2024-10-11 12:06:06.799597] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.251 [2024-10-11 12:06:06.809050] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.251 [2024-10-11 12:06:06.809397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.251 [2024-10-11 12:06:06.809414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.251 [2024-10-11 12:06:06.809419] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.251 [2024-10-11 12:06:06.809572] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.251 [2024-10-11 12:06:06.809723] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.251 [2024-10-11 12:06:06.809730] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.251 [2024-10-11 12:06:06.809735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.251 [2024-10-11 12:06:06.812183] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.251 [2024-10-11 12:06:06.821786] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.251 [2024-10-11 12:06:06.822138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.251 [2024-10-11 12:06:06.822152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.251 [2024-10-11 12:06:06.822157] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.251 [2024-10-11 12:06:06.822308] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.251 [2024-10-11 12:06:06.822459] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.251 [2024-10-11 12:06:06.822466] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.251 [2024-10-11 12:06:06.822471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.251 [2024-10-11 12:06:06.824901] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.251 [2024-10-11 12:06:06.834498] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.251 [2024-10-11 12:06:06.834980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.251 [2024-10-11 12:06:06.834993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.251 [2024-10-11 12:06:06.834999] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.251 [2024-10-11 12:06:06.835160] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.251 [2024-10-11 12:06:06.835313] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.251 [2024-10-11 12:06:06.835320] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.251 [2024-10-11 12:06:06.835325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.251 [2024-10-11 12:06:06.837755] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.251 [2024-10-11 12:06:06.847212] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.251 [2024-10-11 12:06:06.847624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.251 [2024-10-11 12:06:06.847637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.251 [2024-10-11 12:06:06.847643] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.251 [2024-10-11 12:06:06.847794] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.251 [2024-10-11 12:06:06.847947] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.251 [2024-10-11 12:06:06.847954] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.251 [2024-10-11 12:06:06.847959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.251 [2024-10-11 12:06:06.850393] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.251 [2024-10-11 12:06:06.859849] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.251 [2024-10-11 12:06:06.860305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.251 [2024-10-11 12:06:06.860318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.251 [2024-10-11 12:06:06.860324] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.251 [2024-10-11 12:06:06.860476] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.251 [2024-10-11 12:06:06.860628] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.251 [2024-10-11 12:06:06.860635] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.251 [2024-10-11 12:06:06.860640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.251 [2024-10-11 12:06:06.863070] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.251 [2024-10-11 12:06:06.872532] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.251 [2024-10-11 12:06:06.872987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.251 [2024-10-11 12:06:06.873001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.251 [2024-10-11 12:06:06.873007] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.251 [2024-10-11 12:06:06.873163] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.251 [2024-10-11 12:06:06.873316] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.251 [2024-10-11 12:06:06.873324] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.251 [2024-10-11 12:06:06.873332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.251 [2024-10-11 12:06:06.875763] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.251 [2024-10-11 12:06:06.885216] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.251 [2024-10-11 12:06:06.885769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.251 [2024-10-11 12:06:06.885801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.251 [2024-10-11 12:06:06.885810] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.251 [2024-10-11 12:06:06.885980] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.251 [2024-10-11 12:06:06.886140] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.251 [2024-10-11 12:06:06.886149] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.251 [2024-10-11 12:06:06.886155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.251 [2024-10-11 12:06:06.888619] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.251 [2024-10-11 12:06:06.897946] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.251 [2024-10-11 12:06:06.898429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.251 [2024-10-11 12:06:06.898446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.251 [2024-10-11 12:06:06.898452] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.251 [2024-10-11 12:06:06.898604] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.251 [2024-10-11 12:06:06.898757] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.251 [2024-10-11 12:06:06.898764] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.251 [2024-10-11 12:06:06.898769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.251 [2024-10-11 12:06:06.901205] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.251 [2024-10-11 12:06:06.910661] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.251 [2024-10-11 12:06:06.911300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.251 [2024-10-11 12:06:06.911332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.251 [2024-10-11 12:06:06.911342] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.252 [2024-10-11 12:06:06.911509] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.252 [2024-10-11 12:06:06.911664] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.252 [2024-10-11 12:06:06.911672] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.252 [2024-10-11 12:06:06.911678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.252 [2024-10-11 12:06:06.914120] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.252 [2024-10-11 12:06:06.923301] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.252 [2024-10-11 12:06:06.923772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.252 [2024-10-11 12:06:06.923791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.252 [2024-10-11 12:06:06.923798] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.252 [2024-10-11 12:06:06.923950] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.252 [2024-10-11 12:06:06.924109] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.252 [2024-10-11 12:06:06.924116] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.252 [2024-10-11 12:06:06.924122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.252 [2024-10-11 12:06:06.926557] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.252 [2024-10-11 12:06:06.936018] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.252 [2024-10-11 12:06:06.936563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.252 [2024-10-11 12:06:06.936595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.252 [2024-10-11 12:06:06.936604] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.252 [2024-10-11 12:06:06.936772] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.252 [2024-10-11 12:06:06.936927] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.252 [2024-10-11 12:06:06.936935] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.252 [2024-10-11 12:06:06.936941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.252 [2024-10-11 12:06:06.939385] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.252 [2024-10-11 12:06:06.948708] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.252 [2024-10-11 12:06:06.949320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.252 [2024-10-11 12:06:06.949353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.252 [2024-10-11 12:06:06.949362] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.252 [2024-10-11 12:06:06.949529] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.252 [2024-10-11 12:06:06.949684] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.252 [2024-10-11 12:06:06.949692] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.252 [2024-10-11 12:06:06.949697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.252 [2024-10-11 12:06:06.952142] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.514 [2024-10-11 12:06:06.961321] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.514 [2024-10-11 12:06:06.961875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.514 [2024-10-11 12:06:06.961907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.514 [2024-10-11 12:06:06.961917] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.514 [2024-10-11 12:06:06.962090] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.514 [2024-10-11 12:06:06.962250] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.514 [2024-10-11 12:06:06.962258] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.514 [2024-10-11 12:06:06.962263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.514 [2024-10-11 12:06:06.964701] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.514 [2024-10-11 12:06:06.974033] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.514 [2024-10-11 12:06:06.974600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.514 [2024-10-11 12:06:06.974632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.514 [2024-10-11 12:06:06.974642] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.514 [2024-10-11 12:06:06.974809] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.514 [2024-10-11 12:06:06.974965] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.514 [2024-10-11 12:06:06.974972] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.514 [2024-10-11 12:06:06.974978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.514 [2024-10-11 12:06:06.977420] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.514 [2024-10-11 12:06:06.986743] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.514 [2024-10-11 12:06:06.987391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.514 [2024-10-11 12:06:06.987423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.514 [2024-10-11 12:06:06.987432] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.514 [2024-10-11 12:06:06.987600] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.514 [2024-10-11 12:06:06.987755] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.514 [2024-10-11 12:06:06.987763] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.514 [2024-10-11 12:06:06.987769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.514 [2024-10-11 12:06:06.990224] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.514 [2024-10-11 12:06:06.999402] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.514 [2024-10-11 12:06:06.999869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.514 [2024-10-11 12:06:06.999885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.514 [2024-10-11 12:06:06.999892] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.514 [2024-10-11 12:06:07.000044] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.514 [2024-10-11 12:06:07.000202] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.514 [2024-10-11 12:06:07.000209] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.514 [2024-10-11 12:06:07.000215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.514 [2024-10-11 12:06:07.002652] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.514 [2024-10-11 12:06:07.012110] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.514 [2024-10-11 12:06:07.012587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.514 [2024-10-11 12:06:07.012619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.514 [2024-10-11 12:06:07.012629] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.514 [2024-10-11 12:06:07.012796] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.514 [2024-10-11 12:06:07.012952] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.514 [2024-10-11 12:06:07.012959] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.514 [2024-10-11 12:06:07.012965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.514 [2024-10-11 12:06:07.015408] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.514 [2024-10-11 12:06:07.024763] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.514 [2024-10-11 12:06:07.025396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.514 [2024-10-11 12:06:07.025429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.514 [2024-10-11 12:06:07.025438] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.514 [2024-10-11 12:06:07.025606] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.514 [2024-10-11 12:06:07.025761] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.514 [2024-10-11 12:06:07.025769] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.514 [2024-10-11 12:06:07.025775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.514 [2024-10-11 12:06:07.028219] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.514 [2024-10-11 12:06:07.037392] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.514 [2024-10-11 12:06:07.038000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.514 [2024-10-11 12:06:07.038032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.515 [2024-10-11 12:06:07.038041] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.515 [2024-10-11 12:06:07.038218] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.515 [2024-10-11 12:06:07.038374] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.515 [2024-10-11 12:06:07.038382] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.515 [2024-10-11 12:06:07.038388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.515 [2024-10-11 12:06:07.040827] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.515 [2024-10-11 12:06:07.050018] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.515 [2024-10-11 12:06:07.050592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.515 [2024-10-11 12:06:07.050624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.515 [2024-10-11 12:06:07.050636] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.515 [2024-10-11 12:06:07.050804] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.515 [2024-10-11 12:06:07.050959] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.515 [2024-10-11 12:06:07.050967] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.515 [2024-10-11 12:06:07.050972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.515 [2024-10-11 12:06:07.053414] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.515 [2024-10-11 12:06:07.062727] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.515 [2024-10-11 12:06:07.063286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.515 [2024-10-11 12:06:07.063319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.515 [2024-10-11 12:06:07.063328] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.515 [2024-10-11 12:06:07.063495] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.515 [2024-10-11 12:06:07.063650] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.515 [2024-10-11 12:06:07.063657] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.515 [2024-10-11 12:06:07.063664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.515 [2024-10-11 12:06:07.066107] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.515 [2024-10-11 12:06:07.075428] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.515 [2024-10-11 12:06:07.075931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.515 [2024-10-11 12:06:07.075946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.515 [2024-10-11 12:06:07.075952] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.515 [2024-10-11 12:06:07.076109] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.515 [2024-10-11 12:06:07.076262] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.515 [2024-10-11 12:06:07.076269] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.515 [2024-10-11 12:06:07.076274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.515 [2024-10-11 12:06:07.078704] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.515 [2024-10-11 12:06:07.088044] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.515 [2024-10-11 12:06:07.088542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.515 [2024-10-11 12:06:07.088556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.515 [2024-10-11 12:06:07.088561] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.515 [2024-10-11 12:06:07.088712] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.515 [2024-10-11 12:06:07.088864] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.515 [2024-10-11 12:06:07.088875] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.515 [2024-10-11 12:06:07.088881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.515 [2024-10-11 12:06:07.091322] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.515 [2024-10-11 12:06:07.100774] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.515 [2024-10-11 12:06:07.101356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.515 [2024-10-11 12:06:07.101388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.515 [2024-10-11 12:06:07.101397] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.515 [2024-10-11 12:06:07.101564] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.515 [2024-10-11 12:06:07.101718] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.515 [2024-10-11 12:06:07.101725] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.515 [2024-10-11 12:06:07.101731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.515 [2024-10-11 12:06:07.104175] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.515 [2024-10-11 12:06:07.113422] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.515 [2024-10-11 12:06:07.114005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.515 [2024-10-11 12:06:07.114036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.515 [2024-10-11 12:06:07.114045] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.515 [2024-10-11 12:06:07.114218] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.515 [2024-10-11 12:06:07.114374] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.515 [2024-10-11 12:06:07.114381] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.515 [2024-10-11 12:06:07.114387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.515 [2024-10-11 12:06:07.116822] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.515 [2024-10-11 12:06:07.126146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.515 [2024-10-11 12:06:07.126697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.515 [2024-10-11 12:06:07.126729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.515 [2024-10-11 12:06:07.126738] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.515 [2024-10-11 12:06:07.126905] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.515 [2024-10-11 12:06:07.127059] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.515 [2024-10-11 12:06:07.127073] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.515 [2024-10-11 12:06:07.127078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.515 [2024-10-11 12:06:07.129515] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.515 [2024-10-11 12:06:07.138836] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.515 [2024-10-11 12:06:07.139345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.515 [2024-10-11 12:06:07.139360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.515 [2024-10-11 12:06:07.139366] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.515 [2024-10-11 12:06:07.139518] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.515 [2024-10-11 12:06:07.139669] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.515 [2024-10-11 12:06:07.139675] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.515 [2024-10-11 12:06:07.139681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.515 [2024-10-11 12:06:07.142113] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.515 [2024-10-11 12:06:07.151464] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.515 [2024-10-11 12:06:07.152078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.515 [2024-10-11 12:06:07.152109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.515 [2024-10-11 12:06:07.152118] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.515 [2024-10-11 12:06:07.152286] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.515 [2024-10-11 12:06:07.152441] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.515 [2024-10-11 12:06:07.152448] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.515 [2024-10-11 12:06:07.152454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.515 [2024-10-11 12:06:07.154894] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.515 [2024-10-11 12:06:07.164215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.515 [2024-10-11 12:06:07.164708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.515 [2024-10-11 12:06:07.164740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.515 [2024-10-11 12:06:07.164749] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.515 [2024-10-11 12:06:07.164915] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.515 [2024-10-11 12:06:07.165077] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.515 [2024-10-11 12:06:07.165084] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.515 [2024-10-11 12:06:07.165089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.516 [2024-10-11 12:06:07.167528] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.516 [2024-10-11 12:06:07.176846] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.516 [2024-10-11 12:06:07.177510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.516 [2024-10-11 12:06:07.177542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.516 [2024-10-11 12:06:07.177551] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.516 [2024-10-11 12:06:07.177721] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.516 [2024-10-11 12:06:07.177875] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.516 [2024-10-11 12:06:07.177882] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.516 [2024-10-11 12:06:07.177889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.516 [2024-10-11 12:06:07.180330] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.516 [2024-10-11 12:06:07.189497] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.516 [2024-10-11 12:06:07.190089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.516 [2024-10-11 12:06:07.190121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.516 [2024-10-11 12:06:07.190130] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.516 [2024-10-11 12:06:07.190300] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.516 [2024-10-11 12:06:07.190455] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.516 [2024-10-11 12:06:07.190462] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.516 [2024-10-11 12:06:07.190467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.516 [2024-10-11 12:06:07.192917] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.516 [2024-10-11 12:06:07.202231] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.516 [2024-10-11 12:06:07.202824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.516 [2024-10-11 12:06:07.202856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.516 [2024-10-11 12:06:07.202865] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.516 [2024-10-11 12:06:07.203032] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.516 [2024-10-11 12:06:07.203195] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.516 [2024-10-11 12:06:07.203203] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.516 [2024-10-11 12:06:07.203208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.516 [2024-10-11 12:06:07.205645] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.516 [2024-10-11 12:06:07.214957] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.516 [2024-10-11 12:06:07.215526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.516 [2024-10-11 12:06:07.215558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.516 [2024-10-11 12:06:07.215566] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.516 [2024-10-11 12:06:07.215736] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.516 [2024-10-11 12:06:07.215890] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.516 [2024-10-11 12:06:07.215897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.516 [2024-10-11 12:06:07.215907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.792 [2024-10-11 12:06:07.218352] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.792 [2024-10-11 12:06:07.227667] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.792 [2024-10-11 12:06:07.228190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.792 [2024-10-11 12:06:07.228222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.792 [2024-10-11 12:06:07.228231] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.792 [2024-10-11 12:06:07.228400] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.792 [2024-10-11 12:06:07.228555] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.792 [2024-10-11 12:06:07.228562] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.792 [2024-10-11 12:06:07.228567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.792 [2024-10-11 12:06:07.231014] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.792 [2024-10-11 12:06:07.240332] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.792 [2024-10-11 12:06:07.240928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.792 [2024-10-11 12:06:07.240959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.792 [2024-10-11 12:06:07.240968] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.792 [2024-10-11 12:06:07.241143] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.792 [2024-10-11 12:06:07.241298] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.792 [2024-10-11 12:06:07.241305] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.792 [2024-10-11 12:06:07.241310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.792 [2024-10-11 12:06:07.243746] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.792 [2024-10-11 12:06:07.253057] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.792 [2024-10-11 12:06:07.253631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.792 [2024-10-11 12:06:07.253662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.792 [2024-10-11 12:06:07.253671] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.792 [2024-10-11 12:06:07.253838] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.792 [2024-10-11 12:06:07.253992] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.792 [2024-10-11 12:06:07.253999] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.792 [2024-10-11 12:06:07.254005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.792 [2024-10-11 12:06:07.256450] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.792 [2024-10-11 12:06:07.265762] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.792 [2024-10-11 12:06:07.266386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.792 [2024-10-11 12:06:07.266418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.792 [2024-10-11 12:06:07.266427] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.792 [2024-10-11 12:06:07.266594] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.792 [2024-10-11 12:06:07.266748] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.792 [2024-10-11 12:06:07.266755] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.792 [2024-10-11 12:06:07.266761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.792 [2024-10-11 12:06:07.269205] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.792 [2024-10-11 12:06:07.278387] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.792 [2024-10-11 12:06:07.278965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.792 [2024-10-11 12:06:07.278997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.792 [2024-10-11 12:06:07.279006] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.792 [2024-10-11 12:06:07.279181] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.792 [2024-10-11 12:06:07.279336] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.792 [2024-10-11 12:06:07.279343] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.792 [2024-10-11 12:06:07.279349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.792 [2024-10-11 12:06:07.281785] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.792 [2024-10-11 12:06:07.291097] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.792 [2024-10-11 12:06:07.291620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.792 [2024-10-11 12:06:07.291652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.792 [2024-10-11 12:06:07.291661] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.792 [2024-10-11 12:06:07.291827] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.792 [2024-10-11 12:06:07.291982] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.792 [2024-10-11 12:06:07.291989] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.792 [2024-10-11 12:06:07.291995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.792 [2024-10-11 12:06:07.294445] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.792 [2024-10-11 12:06:07.303757] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.792 [2024-10-11 12:06:07.304218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.792 [2024-10-11 12:06:07.304248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.792 [2024-10-11 12:06:07.304257] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.792 [2024-10-11 12:06:07.304427] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.792 [2024-10-11 12:06:07.304584] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.793 [2024-10-11 12:06:07.304591] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.793 [2024-10-11 12:06:07.304597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.793 [2024-10-11 12:06:07.307038] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.793 [2024-10-11 12:06:07.316495] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.793 [2024-10-11 12:06:07.316992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.793 [2024-10-11 12:06:07.317008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.793 [2024-10-11 12:06:07.317014] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.793 [2024-10-11 12:06:07.317173] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.793 [2024-10-11 12:06:07.317326] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.793 [2024-10-11 12:06:07.317334] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.793 [2024-10-11 12:06:07.317339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.793 [2024-10-11 12:06:07.319770] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.793 [2024-10-11 12:06:07.329223] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.793 [2024-10-11 12:06:07.329759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.793 [2024-10-11 12:06:07.329791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.793 [2024-10-11 12:06:07.329800] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.793 [2024-10-11 12:06:07.329966] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.793 [2024-10-11 12:06:07.330127] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.793 [2024-10-11 12:06:07.330136] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.793 [2024-10-11 12:06:07.330142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.793 [2024-10-11 12:06:07.332577] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.793 [2024-10-11 12:06:07.341891] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.793 [2024-10-11 12:06:07.342440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.793 [2024-10-11 12:06:07.342472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.793 [2024-10-11 12:06:07.342481] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.793 [2024-10-11 12:06:07.342647] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.793 [2024-10-11 12:06:07.342801] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.793 [2024-10-11 12:06:07.342808] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.793 [2024-10-11 12:06:07.342814] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.793 [2024-10-11 12:06:07.345263] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.793 [2024-10-11 12:06:07.354573] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.793 [2024-10-11 12:06:07.355073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.793 [2024-10-11 12:06:07.355088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.793 [2024-10-11 12:06:07.355095] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.793 [2024-10-11 12:06:07.355246] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.793 [2024-10-11 12:06:07.355397] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.793 [2024-10-11 12:06:07.355404] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.793 [2024-10-11 12:06:07.355409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.793 [2024-10-11 12:06:07.357841] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.793 [2024-10-11 12:06:07.367296] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.793 [2024-10-11 12:06:07.367877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.793 [2024-10-11 12:06:07.367908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.793 [2024-10-11 12:06:07.367917] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.793 [2024-10-11 12:06:07.368092] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.793 [2024-10-11 12:06:07.368248] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.793 [2024-10-11 12:06:07.368255] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.793 [2024-10-11 12:06:07.368261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.793 [2024-10-11 12:06:07.370707] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.793 [2024-10-11 12:06:07.380019] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.793 [2024-10-11 12:06:07.380614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.793 [2024-10-11 12:06:07.380645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.793 [2024-10-11 12:06:07.380654] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.793 [2024-10-11 12:06:07.380821] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.793 [2024-10-11 12:06:07.380975] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.793 [2024-10-11 12:06:07.380982] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.793 [2024-10-11 12:06:07.380988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.793 [2024-10-11 12:06:07.383431] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.793 [2024-10-11 12:06:07.392767] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.793 [2024-10-11 12:06:07.393398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.793 [2024-10-11 12:06:07.393433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.793 [2024-10-11 12:06:07.393441] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.793 [2024-10-11 12:06:07.393608] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.793 [2024-10-11 12:06:07.393763] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.793 [2024-10-11 12:06:07.393770] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.793 [2024-10-11 12:06:07.393777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.793 [2024-10-11 12:06:07.396220] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.793 [2024-10-11 12:06:07.405388] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.793 [2024-10-11 12:06:07.405929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.793 [2024-10-11 12:06:07.405961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.793 [2024-10-11 12:06:07.405970] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.793 [2024-10-11 12:06:07.406142] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.793 [2024-10-11 12:06:07.406296] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.793 [2024-10-11 12:06:07.406303] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.793 [2024-10-11 12:06:07.406309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.793 [2024-10-11 12:06:07.408745] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.794 [2024-10-11 12:06:07.418055] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.794 [2024-10-11 12:06:07.418493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.794 [2024-10-11 12:06:07.418525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.794 [2024-10-11 12:06:07.418534] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.794 [2024-10-11 12:06:07.418702] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.794 [2024-10-11 12:06:07.418856] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.794 [2024-10-11 12:06:07.418864] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.794 [2024-10-11 12:06:07.418870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.794 [2024-10-11 12:06:07.421315] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.794 [2024-10-11 12:06:07.430773] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.794 [2024-10-11 12:06:07.431346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.794 [2024-10-11 12:06:07.431378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.794 [2024-10-11 12:06:07.431387] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.794 [2024-10-11 12:06:07.431554] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.794 [2024-10-11 12:06:07.431712] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.794 [2024-10-11 12:06:07.431719] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.794 [2024-10-11 12:06:07.431724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.794 [2024-10-11 12:06:07.434169] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.794 [2024-10-11 12:06:07.443479] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.794 [2024-10-11 12:06:07.444069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.794 [2024-10-11 12:06:07.444101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.794 [2024-10-11 12:06:07.444109] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.794 [2024-10-11 12:06:07.444276] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.794 [2024-10-11 12:06:07.444430] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.794 [2024-10-11 12:06:07.444437] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.794 [2024-10-11 12:06:07.444443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.794 [2024-10-11 12:06:07.446880] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.794 [2024-10-11 12:06:07.456188] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.794 [2024-10-11 12:06:07.456788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.794 [2024-10-11 12:06:07.456819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.794 [2024-10-11 12:06:07.456828] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.794 [2024-10-11 12:06:07.456995] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.794 [2024-10-11 12:06:07.457156] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.794 [2024-10-11 12:06:07.457164] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.794 [2024-10-11 12:06:07.457170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.794 [2024-10-11 12:06:07.459605] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.794 [2024-10-11 12:06:07.468916] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.794 [2024-10-11 12:06:07.469514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.794 [2024-10-11 12:06:07.469546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.794 [2024-10-11 12:06:07.469555] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.794 [2024-10-11 12:06:07.469721] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.794 [2024-10-11 12:06:07.469876] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.794 [2024-10-11 12:06:07.469883] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.794 [2024-10-11 12:06:07.469888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.794 [2024-10-11 12:06:07.472342] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.794 5441.60 IOPS, 21.26 MiB/s [2024-10-11T10:06:07.497Z] [2024-10-11 12:06:07.481650] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.794 [2024-10-11 12:06:07.482272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.794 [2024-10-11 12:06:07.482303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:04.794 [2024-10-11 12:06:07.482312] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:04.794 [2024-10-11 12:06:07.482479] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:04.794 [2024-10-11 12:06:07.482633] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.794 [2024-10-11 12:06:07.482640] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.794 [2024-10-11 12:06:07.482646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.794 [2024-10-11 12:06:07.485090] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.055 [2024-10-11 12:06:07.494281] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.055 [2024-10-11 12:06:07.494782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.055 [2024-10-11 12:06:07.494797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.055 [2024-10-11 12:06:07.494803] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.055 [2024-10-11 12:06:07.494954] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.055 [2024-10-11 12:06:07.495113] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.055 [2024-10-11 12:06:07.495120] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.055 [2024-10-11 12:06:07.495125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.055 [2024-10-11 12:06:07.497557] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
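The figure interleaved at the start of this block ("5441.60 IOPS, 21.26 MiB/s") is bdevperf's periodic progress line, printed alongside the driver's error output. The two numbers are consistent with a 4 KiB I/O size; that size is an inference from the arithmetic, not something the log states. A quick check under that assumption:

/* Sanity check of the bdevperf progress line "5441.60 IOPS, 21.26 MiB/s".
 * The 4 KiB I/O size is an assumed value inferred from the numbers. */
#include <stdio.h>

int main(void)
{
    double iops = 5441.60;
    double io_size_bytes = 4096.0;                        /* assumed 4 KiB */
    double mib_per_s = iops * io_size_bytes / (1024.0 * 1024.0);
    printf("%.2f MiB/s\n", mib_per_s);                    /* prints 21.26 */
    return 0;
}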
00:29:05.055 [2024-10-11 12:06:07.507011] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.055 [2024-10-11 12:06:07.507518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.055 [2024-10-11 12:06:07.507550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.055 [2024-10-11 12:06:07.507560] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.055 [2024-10-11 12:06:07.507726] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.055 [2024-10-11 12:06:07.507881] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.055 [2024-10-11 12:06:07.507888] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.055 [2024-10-11 12:06:07.507894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.055 [2024-10-11 12:06:07.510340] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.055 [2024-10-11 12:06:07.519652] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.055 [2024-10-11 12:06:07.520269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.055 [2024-10-11 12:06:07.520300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.055 [2024-10-11 12:06:07.520313] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.055 [2024-10-11 12:06:07.520480] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.055 [2024-10-11 12:06:07.520634] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.055 [2024-10-11 12:06:07.520641] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.055 [2024-10-11 12:06:07.520647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.055 [2024-10-11 12:06:07.523092] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.055 [2024-10-11 12:06:07.532258] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.055 [2024-10-11 12:06:07.532820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.055 [2024-10-11 12:06:07.532852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.055 [2024-10-11 12:06:07.532861] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.055 [2024-10-11 12:06:07.533027] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.055 [2024-10-11 12:06:07.533190] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.055 [2024-10-11 12:06:07.533198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.055 [2024-10-11 12:06:07.533203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.055 [2024-10-11 12:06:07.535641] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.055 [2024-10-11 12:06:07.544958] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.055 [2024-10-11 12:06:07.545558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.055 [2024-10-11 12:06:07.545590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.055 [2024-10-11 12:06:07.545599] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.055 [2024-10-11 12:06:07.545766] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.055 [2024-10-11 12:06:07.545920] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.056 [2024-10-11 12:06:07.545928] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.056 [2024-10-11 12:06:07.545933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.056 [2024-10-11 12:06:07.548377] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.056 [2024-10-11 12:06:07.557692] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.056 [2024-10-11 12:06:07.558279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.056 [2024-10-11 12:06:07.558311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.056 [2024-10-11 12:06:07.558320] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.056 [2024-10-11 12:06:07.558486] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.056 [2024-10-11 12:06:07.558641] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.056 [2024-10-11 12:06:07.558652] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.056 [2024-10-11 12:06:07.558659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.056 [2024-10-11 12:06:07.561105] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.056 [2024-10-11 12:06:07.570423] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.056 [2024-10-11 12:06:07.570918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.056 [2024-10-11 12:06:07.570934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.056 [2024-10-11 12:06:07.570940] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.056 [2024-10-11 12:06:07.571098] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.056 [2024-10-11 12:06:07.571257] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.056 [2024-10-11 12:06:07.571265] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.056 [2024-10-11 12:06:07.571270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.056 [2024-10-11 12:06:07.573704] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.056 [2024-10-11 12:06:07.583173] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.056 [2024-10-11 12:06:07.583651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.056 [2024-10-11 12:06:07.583683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.056 [2024-10-11 12:06:07.583691] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.056 [2024-10-11 12:06:07.583858] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.056 [2024-10-11 12:06:07.584012] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.056 [2024-10-11 12:06:07.584019] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.056 [2024-10-11 12:06:07.584025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.056 [2024-10-11 12:06:07.586469] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.056 [2024-10-11 12:06:07.595811] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.056 [2024-10-11 12:06:07.596352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.056 [2024-10-11 12:06:07.596384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.056 [2024-10-11 12:06:07.596393] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.056 [2024-10-11 12:06:07.596560] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.056 [2024-10-11 12:06:07.596715] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.056 [2024-10-11 12:06:07.596722] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.056 [2024-10-11 12:06:07.596727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.056 [2024-10-11 12:06:07.599170] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.056 [2024-10-11 12:06:07.608486] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.056 [2024-10-11 12:06:07.609039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.056 [2024-10-11 12:06:07.609077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.056 [2024-10-11 12:06:07.609086] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.056 [2024-10-11 12:06:07.609255] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.056 [2024-10-11 12:06:07.609409] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.056 [2024-10-11 12:06:07.609416] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.056 [2024-10-11 12:06:07.609421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.056 [2024-10-11 12:06:07.611862] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.056 [2024-10-11 12:06:07.621178] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.056 [2024-10-11 12:06:07.621732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.056 [2024-10-11 12:06:07.621764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.056 [2024-10-11 12:06:07.621773] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.056 [2024-10-11 12:06:07.621939] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.056 [2024-10-11 12:06:07.622102] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.056 [2024-10-11 12:06:07.622110] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.056 [2024-10-11 12:06:07.622115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.056 [2024-10-11 12:06:07.624551] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.056 [2024-10-11 12:06:07.633861] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.056 [2024-10-11 12:06:07.634440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.056 [2024-10-11 12:06:07.634472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.056 [2024-10-11 12:06:07.634481] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.056 [2024-10-11 12:06:07.634648] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.056 [2024-10-11 12:06:07.634802] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.056 [2024-10-11 12:06:07.634810] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.056 [2024-10-11 12:06:07.634815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.056 [2024-10-11 12:06:07.637256] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.056 [2024-10-11 12:06:07.646580] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.056 [2024-10-11 12:06:07.647072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.056 [2024-10-11 12:06:07.647104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.056 [2024-10-11 12:06:07.647114] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.056 [2024-10-11 12:06:07.647285] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.056 [2024-10-11 12:06:07.647439] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.056 [2024-10-11 12:06:07.647447] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.056 [2024-10-11 12:06:07.647453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.056 [2024-10-11 12:06:07.649894] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.056 [2024-10-11 12:06:07.659217] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.056 [2024-10-11 12:06:07.659667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.056 [2024-10-11 12:06:07.659698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.056 [2024-10-11 12:06:07.659707] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.056 [2024-10-11 12:06:07.659874] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.056 [2024-10-11 12:06:07.660028] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.056 [2024-10-11 12:06:07.660036] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.056 [2024-10-11 12:06:07.660041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.056 [2024-10-11 12:06:07.662484] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.056 [2024-10-11 12:06:07.671948] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.056 [2024-10-11 12:06:07.672499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.056 [2024-10-11 12:06:07.672531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.056 [2024-10-11 12:06:07.672539] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.056 [2024-10-11 12:06:07.672706] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.056 [2024-10-11 12:06:07.672861] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.056 [2024-10-11 12:06:07.672868] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.057 [2024-10-11 12:06:07.672874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.057 [2024-10-11 12:06:07.675320] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.057 [2024-10-11 12:06:07.684631] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.057 [2024-10-11 12:06:07.685086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.057 [2024-10-11 12:06:07.685105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.057 [2024-10-11 12:06:07.685111] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.057 [2024-10-11 12:06:07.685264] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.057 [2024-10-11 12:06:07.685416] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.057 [2024-10-11 12:06:07.685423] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.057 [2024-10-11 12:06:07.685432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.057 [2024-10-11 12:06:07.687866] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.057 [2024-10-11 12:06:07.697333] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.057 [2024-10-11 12:06:07.697883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.057 [2024-10-11 12:06:07.697914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.057 [2024-10-11 12:06:07.697923] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.057 [2024-10-11 12:06:07.698098] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.057 [2024-10-11 12:06:07.698253] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.057 [2024-10-11 12:06:07.698260] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.057 [2024-10-11 12:06:07.698266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.057 [2024-10-11 12:06:07.700703] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.057 [2024-10-11 12:06:07.710015] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.057 [2024-10-11 12:06:07.710544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.057 [2024-10-11 12:06:07.710576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.057 [2024-10-11 12:06:07.710585] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.057 [2024-10-11 12:06:07.710752] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.057 [2024-10-11 12:06:07.710906] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.057 [2024-10-11 12:06:07.710913] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.057 [2024-10-11 12:06:07.710918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.057 [2024-10-11 12:06:07.713363] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.057 [2024-10-11 12:06:07.722695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.057 [2024-10-11 12:06:07.723297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.057 [2024-10-11 12:06:07.723329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.057 [2024-10-11 12:06:07.723338] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.057 [2024-10-11 12:06:07.723505] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.057 [2024-10-11 12:06:07.723660] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.057 [2024-10-11 12:06:07.723667] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.057 [2024-10-11 12:06:07.723672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.057 [2024-10-11 12:06:07.726114] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.057 [2024-10-11 12:06:07.735423] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.057 [2024-10-11 12:06:07.736023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.057 [2024-10-11 12:06:07.736055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.057 [2024-10-11 12:06:07.736070] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.057 [2024-10-11 12:06:07.736237] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.057 [2024-10-11 12:06:07.736392] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.057 [2024-10-11 12:06:07.736399] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.057 [2024-10-11 12:06:07.736404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.057 [2024-10-11 12:06:07.738839] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.057 [2024-10-11 12:06:07.748155] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.057 [2024-10-11 12:06:07.748745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.057 [2024-10-11 12:06:07.748776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.057 [2024-10-11 12:06:07.748785] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.057 [2024-10-11 12:06:07.748951] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.057 [2024-10-11 12:06:07.749113] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.057 [2024-10-11 12:06:07.749121] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.057 [2024-10-11 12:06:07.749127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.057 [2024-10-11 12:06:07.751565] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.319 [2024-10-11 12:06:07.760887] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.319 [2024-10-11 12:06:07.761382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.319 [2024-10-11 12:06:07.761398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.319 [2024-10-11 12:06:07.761403] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.319 [2024-10-11 12:06:07.761555] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.319 [2024-10-11 12:06:07.761706] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.319 [2024-10-11 12:06:07.761712] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.319 [2024-10-11 12:06:07.761719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.319 [2024-10-11 12:06:07.764158] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.319 [2024-10-11 12:06:07.773617] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.319 [2024-10-11 12:06:07.774175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.319 [2024-10-11 12:06:07.774207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.319 [2024-10-11 12:06:07.774216] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.319 [2024-10-11 12:06:07.774384] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.319 [2024-10-11 12:06:07.774542] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.319 [2024-10-11 12:06:07.774550] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.319 [2024-10-11 12:06:07.774555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.319 [2024-10-11 12:06:07.776998] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.319 [2024-10-11 12:06:07.786310] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.319 [2024-10-11 12:06:07.786777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.319 [2024-10-11 12:06:07.786792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.319 [2024-10-11 12:06:07.786798] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.319 [2024-10-11 12:06:07.786950] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.319 [2024-10-11 12:06:07.787107] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.319 [2024-10-11 12:06:07.787114] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.319 [2024-10-11 12:06:07.787120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.319 [2024-10-11 12:06:07.789549] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.319 [2024-10-11 12:06:07.799006] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.319 [2024-10-11 12:06:07.799481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.319 [2024-10-11 12:06:07.799494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.319 [2024-10-11 12:06:07.799500] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.319 [2024-10-11 12:06:07.799651] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.319 [2024-10-11 12:06:07.799802] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.319 [2024-10-11 12:06:07.799809] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.319 [2024-10-11 12:06:07.799814] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.319 [2024-10-11 12:06:07.802245] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.319 [2024-10-11 12:06:07.811707] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.319 [2024-10-11 12:06:07.812168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.320 [2024-10-11 12:06:07.812181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.320 [2024-10-11 12:06:07.812186] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.320 [2024-10-11 12:06:07.812338] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.320 [2024-10-11 12:06:07.812489] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.320 [2024-10-11 12:06:07.812496] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.320 [2024-10-11 12:06:07.812501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.320 [2024-10-11 12:06:07.814935] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.320 [2024-10-11 12:06:07.824388] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.320 [2024-10-11 12:06:07.824977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.320 [2024-10-11 12:06:07.825008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.320 [2024-10-11 12:06:07.825018] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.320 [2024-10-11 12:06:07.825195] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.320 [2024-10-11 12:06:07.825351] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.320 [2024-10-11 12:06:07.825358] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.320 [2024-10-11 12:06:07.825363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.320 [2024-10-11 12:06:07.827801] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.320 [2024-10-11 12:06:07.837127] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.320 [2024-10-11 12:06:07.837722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.320 [2024-10-11 12:06:07.837754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.320 [2024-10-11 12:06:07.837763] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.320 [2024-10-11 12:06:07.837930] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.320 [2024-10-11 12:06:07.838093] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.320 [2024-10-11 12:06:07.838101] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.320 [2024-10-11 12:06:07.838106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.320 [2024-10-11 12:06:07.840544] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.320 [2024-10-11 12:06:07.849858] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.320 [2024-10-11 12:06:07.850467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.320 [2024-10-11 12:06:07.850499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.320 [2024-10-11 12:06:07.850508] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.320 [2024-10-11 12:06:07.850674] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.320 [2024-10-11 12:06:07.850829] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.320 [2024-10-11 12:06:07.850836] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.320 [2024-10-11 12:06:07.850842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.320 [2024-10-11 12:06:07.853288] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.320 [2024-10-11 12:06:07.862611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.320 [2024-10-11 12:06:07.863172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.320 [2024-10-11 12:06:07.863207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.320 [2024-10-11 12:06:07.863216] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.320 [2024-10-11 12:06:07.863384] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.320 [2024-10-11 12:06:07.863539] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.320 [2024-10-11 12:06:07.863546] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.320 [2024-10-11 12:06:07.863552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.320 [2024-10-11 12:06:07.865994] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.320 [2024-10-11 12:06:07.875323] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.320 [2024-10-11 12:06:07.875828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.320 [2024-10-11 12:06:07.875843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.320 [2024-10-11 12:06:07.875849] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.320 [2024-10-11 12:06:07.876000] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.320 [2024-10-11 12:06:07.876156] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.320 [2024-10-11 12:06:07.876163] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.320 [2024-10-11 12:06:07.876169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.320 [2024-10-11 12:06:07.878600] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.320 [2024-10-11 12:06:07.888055] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.320 [2024-10-11 12:06:07.888523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.320 [2024-10-11 12:06:07.888537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.320 [2024-10-11 12:06:07.888542] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.320 [2024-10-11 12:06:07.888693] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.320 [2024-10-11 12:06:07.888845] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.320 [2024-10-11 12:06:07.888852] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.320 [2024-10-11 12:06:07.888857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.320 [2024-10-11 12:06:07.891289] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.320 [2024-10-11 12:06:07.900748] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.320 [2024-10-11 12:06:07.901286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.320 [2024-10-11 12:06:07.901319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.320 [2024-10-11 12:06:07.901327] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.320 [2024-10-11 12:06:07.901494] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.320 [2024-10-11 12:06:07.901656] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.320 [2024-10-11 12:06:07.901664] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.320 [2024-10-11 12:06:07.901669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.320 [2024-10-11 12:06:07.904113] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.320 [2024-10-11 12:06:07.913434] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.320 [2024-10-11 12:06:07.914026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.320 [2024-10-11 12:06:07.914058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.320 [2024-10-11 12:06:07.914074] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.320 [2024-10-11 12:06:07.914241] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.320 [2024-10-11 12:06:07.914396] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.320 [2024-10-11 12:06:07.914403] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.320 [2024-10-11 12:06:07.914409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.320 [2024-10-11 12:06:07.916845] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.320 [2024-10-11 12:06:07.926162] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.320 [2024-10-11 12:06:07.926763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.320 [2024-10-11 12:06:07.926795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.320 [2024-10-11 12:06:07.926804] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.320 [2024-10-11 12:06:07.926970] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.320 [2024-10-11 12:06:07.927134] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.320 [2024-10-11 12:06:07.927142] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.320 [2024-10-11 12:06:07.927147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.320 [2024-10-11 12:06:07.929584] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.320 [2024-10-11 12:06:07.938896] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.320 [2024-10-11 12:06:07.939453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.320 [2024-10-11 12:06:07.939485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.320 [2024-10-11 12:06:07.939494] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.320 [2024-10-11 12:06:07.939661] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.321 [2024-10-11 12:06:07.939815] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.321 [2024-10-11 12:06:07.939822] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.321 [2024-10-11 12:06:07.939828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.321 [2024-10-11 12:06:07.942271] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.321 [2024-10-11 12:06:07.951587] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.321 [2024-10-11 12:06:07.952103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.321 [2024-10-11 12:06:07.952125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.321 [2024-10-11 12:06:07.952131] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.321 [2024-10-11 12:06:07.952289] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.321 [2024-10-11 12:06:07.952441] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.321 [2024-10-11 12:06:07.952447] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.321 [2024-10-11 12:06:07.952453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.321 [2024-10-11 12:06:07.954888] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.321 [2024-10-11 12:06:07.964199] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.321 [2024-10-11 12:06:07.964789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.321 [2024-10-11 12:06:07.964821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.321 [2024-10-11 12:06:07.964830] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.321 [2024-10-11 12:06:07.964997] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.321 [2024-10-11 12:06:07.965160] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.321 [2024-10-11 12:06:07.965168] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.321 [2024-10-11 12:06:07.965173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.321 [2024-10-11 12:06:07.967610] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.321 [2024-10-11 12:06:07.976995] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.321 [2024-10-11 12:06:07.977450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.321 [2024-10-11 12:06:07.977482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.321 [2024-10-11 12:06:07.977491] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.321 [2024-10-11 12:06:07.977658] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.321 [2024-10-11 12:06:07.977813] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.321 [2024-10-11 12:06:07.977820] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.321 [2024-10-11 12:06:07.977825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.321 [2024-10-11 12:06:07.980270] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.321 [2024-10-11 12:06:07.989724] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.321 [2024-10-11 12:06:07.990326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.321 [2024-10-11 12:06:07.990358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.321 [2024-10-11 12:06:07.990370] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.321 [2024-10-11 12:06:07.990536] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.321 [2024-10-11 12:06:07.990691] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.321 [2024-10-11 12:06:07.990698] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.321 [2024-10-11 12:06:07.990704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.321 [2024-10-11 12:06:07.993156] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.321 [2024-10-11 12:06:08.002469] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.321 [2024-10-11 12:06:08.003069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.321 [2024-10-11 12:06:08.003101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.321 [2024-10-11 12:06:08.003110] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.321 [2024-10-11 12:06:08.003279] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.321 [2024-10-11 12:06:08.003434] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.321 [2024-10-11 12:06:08.003441] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.321 [2024-10-11 12:06:08.003446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.321 [2024-10-11 12:06:08.005885] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.321 [2024-10-11 12:06:08.015202] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.321 [2024-10-11 12:06:08.015768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.321 [2024-10-11 12:06:08.015800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.321 [2024-10-11 12:06:08.015809] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.321 [2024-10-11 12:06:08.015976] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.321 [2024-10-11 12:06:08.016138] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.321 [2024-10-11 12:06:08.016147] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.321 [2024-10-11 12:06:08.016153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.321 [2024-10-11 12:06:08.018591] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.584 [2024-10-11 12:06:08.027913] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.584 [2024-10-11 12:06:08.028362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.584 [2024-10-11 12:06:08.028394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.584 [2024-10-11 12:06:08.028403] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.584 [2024-10-11 12:06:08.028570] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.584 [2024-10-11 12:06:08.028725] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.584 [2024-10-11 12:06:08.028735] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.584 [2024-10-11 12:06:08.028742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.584 [2024-10-11 12:06:08.031187] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.584 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2108061 Killed "${NVMF_APP[@]}" "$@" 00:29:05.584 [2024-10-11 12:06:08.040647] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.584 12:06:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:29:05.584 [2024-10-11 12:06:08.041190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.584 [2024-10-11 12:06:08.041224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.584 [2024-10-11 12:06:08.041233] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.584 [2024-10-11 12:06:08.041402] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.584 [2024-10-11 12:06:08.041557] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.584 [2024-10-11 12:06:08.041564] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.584 [2024-10-11 12:06:08.041569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.584 12:06:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:05.584 12:06:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:05.584 12:06:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:05.584 12:06:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:05.584 [2024-10-11 12:06:08.044011] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.584 12:06:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=2109826 00:29:05.584 12:06:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 2109826 00:29:05.584 12:06:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:05.584 12:06:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 2109826 ']' 00:29:05.584 12:06:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:05.584 12:06:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:05.584 12:06:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:05.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:05.584 12:06:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:05.584 12:06:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:05.584 [2024-10-11 12:06:08.053333] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.584 [2024-10-11 12:06:08.053790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.584 [2024-10-11 12:06:08.053806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.584 [2024-10-11 12:06:08.053813] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.584 [2024-10-11 12:06:08.053965] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.584 [2024-10-11 12:06:08.054123] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.584 [2024-10-11 12:06:08.054134] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.584 [2024-10-11 12:06:08.054141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.584 [2024-10-11 12:06:08.056574] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.584 [2024-10-11 12:06:08.066031] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.584 [2024-10-11 12:06:08.066467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.584 [2024-10-11 12:06:08.066480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.584 [2024-10-11 12:06:08.066486] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.584 [2024-10-11 12:06:08.066637] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.584 [2024-10-11 12:06:08.066788] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.584 [2024-10-11 12:06:08.066794] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.584 [2024-10-11 12:06:08.066800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.584 [2024-10-11 12:06:08.069231] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
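The shell trace above shows bdevperf.sh killing the old target and calling `tgt_init`/`nvmfappstart`, then waiting for the new nvmf_tgt to come up, while the host side keeps cycling through "resetting controller" / "reinitialization failed" until a connection succeeds. The sketch below is a generic retry-until-listening loop in the same spirit; it is an assumption-labeled illustration (the address, port, attempt count and delay are placeholders), not the SPDK reconnect or `waitforlisten` implementation.

/* Sketch: poll a TCP endpoint until a listener appears, mirroring the
 * reconnect cycle visible in the log. Not SPDK code. */
#include <stdio.h>
#include <errno.h>
#include <string.h>
#include <unistd.h>
#include <stdint.h>
#include <stdbool.h>
#include <arpa/inet.h>
#include <sys/socket.h>

static bool try_connect(const char *ip, uint16_t port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        return false;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port   = htons(port),
    };
    inet_pton(AF_INET, ip, &addr.sin_addr);

    int rc = connect(fd, (struct sockaddr *)&addr, sizeof(addr));
    int saved_errno = errno;          /* preserve errno across close() */
    close(fd);
    errno = saved_errno;
    return rc == 0;
}

int main(void)
{
    const int max_attempts = 30;      /* example bound, not a real timeout */

    for (int i = 0; i < max_attempts; i++) {
        if (try_connect("127.0.0.1", 4420)) {
            printf("listener is up after %d attempt(s)\n", i + 1);
            return 0;
        }
        printf("attempt %d: %s, retrying\n", i + 1, strerror(errno));
        usleep(500 * 1000);           /* 500 ms between attempts */
    }

    fprintf(stderr, "gave up: no listener\n");
    return 1;
}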
00:29:05.584 [2024-10-11 12:06:08.078697] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.584 [2024-10-11 12:06:08.079297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.584 [2024-10-11 12:06:08.079329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.584 [2024-10-11 12:06:08.079338] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.584 [2024-10-11 12:06:08.079505] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.584 [2024-10-11 12:06:08.079659] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.584 [2024-10-11 12:06:08.079667] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.584 [2024-10-11 12:06:08.079673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.584 [2024-10-11 12:06:08.082116] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.584 [2024-10-11 12:06:08.091432] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.584 [2024-10-11 12:06:08.091933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.584 [2024-10-11 12:06:08.091948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.584 [2024-10-11 12:06:08.091954] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.584 [2024-10-11 12:06:08.092111] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.585 [2024-10-11 12:06:08.092264] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.585 [2024-10-11 12:06:08.092270] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.585 [2024-10-11 12:06:08.092277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.585 [2024-10-11 12:06:08.094715] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.585 [2024-10-11 12:06:08.103692] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:29:05.585 [2024-10-11 12:06:08.103743] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:05.585 [2024-10-11 12:06:08.104180] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.585 [2024-10-11 12:06:08.104671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.585 [2024-10-11 12:06:08.104686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.585 [2024-10-11 12:06:08.104692] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.585 [2024-10-11 12:06:08.104845] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.585 [2024-10-11 12:06:08.104997] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.585 [2024-10-11 12:06:08.105005] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.585 [2024-10-11 12:06:08.105011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.585 [2024-10-11 12:06:08.107449] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.585 [2024-10-11 12:06:08.116920] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.585 [2024-10-11 12:06:08.117363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.585 [2024-10-11 12:06:08.117377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.585 [2024-10-11 12:06:08.117384] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.585 [2024-10-11 12:06:08.117535] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.585 [2024-10-11 12:06:08.117687] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.585 [2024-10-11 12:06:08.117694] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.585 [2024-10-11 12:06:08.117699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.585 [2024-10-11 12:06:08.120137] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.585 [2024-10-11 12:06:08.129608] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.585 [2024-10-11 12:06:08.130059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.585 [2024-10-11 12:06:08.130079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.585 [2024-10-11 12:06:08.130085] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.585 [2024-10-11 12:06:08.130236] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.585 [2024-10-11 12:06:08.130387] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.585 [2024-10-11 12:06:08.130394] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.585 [2024-10-11 12:06:08.130400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.585 [2024-10-11 12:06:08.132834] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.585 [2024-10-11 12:06:08.142394] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.585 [2024-10-11 12:06:08.142976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.585 [2024-10-11 12:06:08.143007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.585 [2024-10-11 12:06:08.143017] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.585 [2024-10-11 12:06:08.143195] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.585 [2024-10-11 12:06:08.143350] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.585 [2024-10-11 12:06:08.143357] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.585 [2024-10-11 12:06:08.143363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.585 [2024-10-11 12:06:08.145805] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.585 [2024-10-11 12:06:08.155011] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.585 [2024-10-11 12:06:08.155533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.585 [2024-10-11 12:06:08.155565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.585 [2024-10-11 12:06:08.155576] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.585 [2024-10-11 12:06:08.155745] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.585 [2024-10-11 12:06:08.155900] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.585 [2024-10-11 12:06:08.155907] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.585 [2024-10-11 12:06:08.155913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.585 [2024-10-11 12:06:08.158359] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.585 [2024-10-11 12:06:08.167684] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.585 [2024-10-11 12:06:08.168111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.585 [2024-10-11 12:06:08.168133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.585 [2024-10-11 12:06:08.168140] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.585 [2024-10-11 12:06:08.168298] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.585 [2024-10-11 12:06:08.168452] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.585 [2024-10-11 12:06:08.168460] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.585 [2024-10-11 12:06:08.168466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.585 [2024-10-11 12:06:08.170905] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.585 [2024-10-11 12:06:08.180378] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.585 [2024-10-11 12:06:08.180950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.585 [2024-10-11 12:06:08.180983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.585 [2024-10-11 12:06:08.180992] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.585 [2024-10-11 12:06:08.181169] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.585 [2024-10-11 12:06:08.181325] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.585 [2024-10-11 12:06:08.181333] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.585 [2024-10-11 12:06:08.181339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.585 [2024-10-11 12:06:08.183776] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.585 [2024-10-11 12:06:08.188826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:05.585 [2024-10-11 12:06:08.193106] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.585 [2024-10-11 12:06:08.193575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.585 [2024-10-11 12:06:08.193590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.585 [2024-10-11 12:06:08.193597] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.585 [2024-10-11 12:06:08.193748] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.585 [2024-10-11 12:06:08.193900] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.585 [2024-10-11 12:06:08.193908] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.585 [2024-10-11 12:06:08.193913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.585 [2024-10-11 12:06:08.196350] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.585 [2024-10-11 12:06:08.205811] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.585 [2024-10-11 12:06:08.206405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.585 [2024-10-11 12:06:08.206438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.585 [2024-10-11 12:06:08.206447] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.585 [2024-10-11 12:06:08.206614] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.585 [2024-10-11 12:06:08.206769] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.585 [2024-10-11 12:06:08.206776] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.585 [2024-10-11 12:06:08.206783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.585 [2024-10-11 12:06:08.209226] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.585 [2024-10-11 12:06:08.218292] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:05.585 [2024-10-11 12:06:08.218317] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:05.585 [2024-10-11 12:06:08.218324] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:05.585 [2024-10-11 12:06:08.218329] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:05.585 [2024-10-11 12:06:08.218333] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:05.585 [2024-10-11 12:06:08.218548] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.585 [2024-10-11 12:06:08.218945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.585 [2024-10-11 12:06:08.218964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.586 [2024-10-11 12:06:08.218970] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.586 [2024-10-11 12:06:08.219126] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.586 [2024-10-11 12:06:08.219278] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.586 [2024-10-11 12:06:08.219285] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.586 [2024-10-11 12:06:08.219290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:05.586 [2024-10-11 12:06:08.219467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:05.586 [2024-10-11 12:06:08.219619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:05.586 [2024-10-11 12:06:08.219620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:05.586 [2024-10-11 12:06:08.221725] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.586 [2024-10-11 12:06:08.231197] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.586 [2024-10-11 12:06:08.231708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.586 [2024-10-11 12:06:08.231722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.586 [2024-10-11 12:06:08.231729] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.586 [2024-10-11 12:06:08.231880] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.586 [2024-10-11 12:06:08.232033] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.586 [2024-10-11 12:06:08.232039] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.586 [2024-10-11 12:06:08.232045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.586 [2024-10-11 12:06:08.234481] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.586 [2024-10-11 12:06:08.243806] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.586 [2024-10-11 12:06:08.244281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.586 [2024-10-11 12:06:08.244297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.586 [2024-10-11 12:06:08.244303] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.586 [2024-10-11 12:06:08.244455] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.586 [2024-10-11 12:06:08.244607] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.586 [2024-10-11 12:06:08.244615] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.586 [2024-10-11 12:06:08.244620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.586 [2024-10-11 12:06:08.247052] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
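The three "Reactor started on core 1/2/3" notices follow directly from the `-m 0xE` core mask passed to nvmf_tgt earlier: 0xE is binary 1110, so bits 1-3 are set and core 0 is excluded, matching "Total cores available: 3". A tiny illustrative C snippet decoding such a mask (not SPDK's reactor code):

/* Sketch: decode an SPDK-style hex core mask into core indices.
 * For 0xE this prints cores 1 2 3, as seen in the log. */
#include <stdio.h>

int main(void)
{
    unsigned long mask = 0xE;   /* value of -m passed to nvmf_tgt above */

    printf("core mask 0x%lX selects cores:", mask);
    for (int core = 0; core < 64; core++) {
        if (mask & (1UL << core)) {
            printf(" %d", core);
        }
    }
    printf("\n");               /* -> core mask 0xE selects cores: 1 2 3 */
    return 0;
}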
00:29:05.586 [2024-10-11 12:06:08.256519] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.586 [2024-10-11 12:06:08.257155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.586 [2024-10-11 12:06:08.257192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.586 [2024-10-11 12:06:08.257208] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.586 [2024-10-11 12:06:08.257382] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.586 [2024-10-11 12:06:08.257537] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.586 [2024-10-11 12:06:08.257545] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.586 [2024-10-11 12:06:08.257551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.586 [2024-10-11 12:06:08.259995] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.586 [2024-10-11 12:06:08.269178] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.586 [2024-10-11 12:06:08.269688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.586 [2024-10-11 12:06:08.269703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.586 [2024-10-11 12:06:08.269709] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.586 [2024-10-11 12:06:08.269861] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.586 [2024-10-11 12:06:08.270012] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.586 [2024-10-11 12:06:08.270019] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.586 [2024-10-11 12:06:08.270024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.586 [2024-10-11 12:06:08.272467] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.586 [2024-10-11 12:06:08.281797] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.586 [2024-10-11 12:06:08.282290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.586 [2024-10-11 12:06:08.282323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.586 [2024-10-11 12:06:08.282332] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.586 [2024-10-11 12:06:08.282499] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.586 [2024-10-11 12:06:08.282654] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.586 [2024-10-11 12:06:08.282661] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.586 [2024-10-11 12:06:08.282667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.586 [2024-10-11 12:06:08.285110] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.849 [2024-10-11 12:06:08.294444] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.849 [2024-10-11 12:06:08.295068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.849 [2024-10-11 12:06:08.295101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.849 [2024-10-11 12:06:08.295109] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.849 [2024-10-11 12:06:08.295276] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.849 [2024-10-11 12:06:08.295430] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.849 [2024-10-11 12:06:08.295441] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.849 [2024-10-11 12:06:08.295448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.849 [2024-10-11 12:06:08.297888] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.849 [2024-10-11 12:06:08.307067] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.849 [2024-10-11 12:06:08.307679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.849 [2024-10-11 12:06:08.307712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.849 [2024-10-11 12:06:08.307720] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.849 [2024-10-11 12:06:08.307887] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.849 [2024-10-11 12:06:08.308041] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.849 [2024-10-11 12:06:08.308048] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.849 [2024-10-11 12:06:08.308054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.849 [2024-10-11 12:06:08.310499] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.849 [2024-10-11 12:06:08.319678] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.849 [2024-10-11 12:06:08.320368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.849 [2024-10-11 12:06:08.320400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.849 [2024-10-11 12:06:08.320409] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.849 [2024-10-11 12:06:08.320576] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.849 [2024-10-11 12:06:08.320731] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.849 [2024-10-11 12:06:08.320738] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.849 [2024-10-11 12:06:08.320744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.849 [2024-10-11 12:06:08.323188] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.849 [2024-10-11 12:06:08.332362] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.849 [2024-10-11 12:06:08.332827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.849 [2024-10-11 12:06:08.332845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.849 [2024-10-11 12:06:08.332851] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.849 [2024-10-11 12:06:08.333002] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.849 [2024-10-11 12:06:08.333159] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.849 [2024-10-11 12:06:08.333165] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.849 [2024-10-11 12:06:08.333171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.849 [2024-10-11 12:06:08.335603] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.849 [2024-10-11 12:06:08.345061] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.849 [2024-10-11 12:06:08.345536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.849 [2024-10-11 12:06:08.345549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.849 [2024-10-11 12:06:08.345555] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.849 [2024-10-11 12:06:08.345706] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.849 [2024-10-11 12:06:08.345858] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.849 [2024-10-11 12:06:08.345866] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.849 [2024-10-11 12:06:08.345871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.849 [2024-10-11 12:06:08.348304] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.849 [2024-10-11 12:06:08.357762] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.849 [2024-10-11 12:06:08.358226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.849 [2024-10-11 12:06:08.358239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.849 [2024-10-11 12:06:08.358245] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.849 [2024-10-11 12:06:08.358396] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.849 [2024-10-11 12:06:08.358547] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.849 [2024-10-11 12:06:08.358554] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.849 [2024-10-11 12:06:08.358560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.849 [2024-10-11 12:06:08.361065] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.849 [2024-10-11 12:06:08.370386] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.849 [2024-10-11 12:06:08.370993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.849 [2024-10-11 12:06:08.371025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.849 [2024-10-11 12:06:08.371035] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.849 [2024-10-11 12:06:08.371209] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.849 [2024-10-11 12:06:08.371364] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.849 [2024-10-11 12:06:08.371371] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.849 [2024-10-11 12:06:08.371381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.849 [2024-10-11 12:06:08.373826] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.849 [2024-10-11 12:06:08.383008] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.849 [2024-10-11 12:06:08.383520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.849 [2024-10-11 12:06:08.383535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.849 [2024-10-11 12:06:08.383541] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.849 [2024-10-11 12:06:08.383696] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.849 [2024-10-11 12:06:08.383848] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.849 [2024-10-11 12:06:08.383855] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.849 [2024-10-11 12:06:08.383860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.849 [2024-10-11 12:06:08.386296] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.849 [2024-10-11 12:06:08.395620] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.849 [2024-10-11 12:06:08.396165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.849 [2024-10-11 12:06:08.396197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.849 [2024-10-11 12:06:08.396206] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.849 [2024-10-11 12:06:08.396375] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.849 [2024-10-11 12:06:08.396530] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.849 [2024-10-11 12:06:08.396537] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.849 [2024-10-11 12:06:08.396542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.849 [2024-10-11 12:06:08.398983] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.849 [2024-10-11 12:06:08.408306] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.849 [2024-10-11 12:06:08.408769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.849 [2024-10-11 12:06:08.408784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.849 [2024-10-11 12:06:08.408790] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.849 [2024-10-11 12:06:08.408942] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.849 [2024-10-11 12:06:08.409097] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.849 [2024-10-11 12:06:08.409104] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.849 [2024-10-11 12:06:08.409109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.849 [2024-10-11 12:06:08.411540] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.849 [2024-10-11 12:06:08.421000] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.849 [2024-10-11 12:06:08.421507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.849 [2024-10-11 12:06:08.421523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.849 [2024-10-11 12:06:08.421528] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.849 [2024-10-11 12:06:08.421680] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.849 [2024-10-11 12:06:08.421833] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.849 [2024-10-11 12:06:08.421839] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.849 [2024-10-11 12:06:08.421848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.849 [2024-10-11 12:06:08.424284] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.849 [2024-10-11 12:06:08.433611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.849 [2024-10-11 12:06:08.434127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.849 [2024-10-11 12:06:08.434141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.850 [2024-10-11 12:06:08.434147] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.850 [2024-10-11 12:06:08.434298] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.850 [2024-10-11 12:06:08.434450] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.850 [2024-10-11 12:06:08.434457] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.850 [2024-10-11 12:06:08.434463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.850 [2024-10-11 12:06:08.436895] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.850 [2024-10-11 12:06:08.446357] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.850 [2024-10-11 12:06:08.446854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.850 [2024-10-11 12:06:08.446869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.850 [2024-10-11 12:06:08.446874] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.850 [2024-10-11 12:06:08.447026] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.850 [2024-10-11 12:06:08.447182] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.850 [2024-10-11 12:06:08.447190] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.850 [2024-10-11 12:06:08.447195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.850 [2024-10-11 12:06:08.449625] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.850 [2024-10-11 12:06:08.459087] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.850 [2024-10-11 12:06:08.459557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.850 [2024-10-11 12:06:08.459590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.850 [2024-10-11 12:06:08.459600] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.850 [2024-10-11 12:06:08.459770] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.850 [2024-10-11 12:06:08.459925] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.850 [2024-10-11 12:06:08.459932] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.850 [2024-10-11 12:06:08.459938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.850 [2024-10-11 12:06:08.462381] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.850 [2024-10-11 12:06:08.471699] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.850 [2024-10-11 12:06:08.472058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.850 [2024-10-11 12:06:08.472087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.850 [2024-10-11 12:06:08.472094] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.850 [2024-10-11 12:06:08.472246] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.850 [2024-10-11 12:06:08.472399] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.850 [2024-10-11 12:06:08.472406] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.850 [2024-10-11 12:06:08.472412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.850 [2024-10-11 12:06:08.474854] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.850 4534.67 IOPS, 17.71 MiB/s [2024-10-11T10:06:08.553Z] [2024-10-11 12:06:08.484316] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.850 [2024-10-11 12:06:08.484817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.850 [2024-10-11 12:06:08.484832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.850 [2024-10-11 12:06:08.484838] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.850 [2024-10-11 12:06:08.484989] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.850 [2024-10-11 12:06:08.485145] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.850 [2024-10-11 12:06:08.485153] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.850 [2024-10-11 12:06:08.485159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.850 [2024-10-11 12:06:08.487589] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.850 [2024-10-11 12:06:08.497058] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.850 [2024-10-11 12:06:08.497514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.850 [2024-10-11 12:06:08.497528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.850 [2024-10-11 12:06:08.497533] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.850 [2024-10-11 12:06:08.497685] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.850 [2024-10-11 12:06:08.497838] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.850 [2024-10-11 12:06:08.497846] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.850 [2024-10-11 12:06:08.497852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.850 [2024-10-11 12:06:08.500287] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.850 [2024-10-11 12:06:08.509749] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.850 [2024-10-11 12:06:08.510215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.850 [2024-10-11 12:06:08.510247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.850 [2024-10-11 12:06:08.510257] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.850 [2024-10-11 12:06:08.510424] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.850 [2024-10-11 12:06:08.510587] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.850 [2024-10-11 12:06:08.510595] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.850 [2024-10-11 12:06:08.510601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.850 [2024-10-11 12:06:08.513044] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.850 [2024-10-11 12:06:08.522373] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.850 [2024-10-11 12:06:08.522869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.850 [2024-10-11 12:06:08.522902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.850 [2024-10-11 12:06:08.522914] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.850 [2024-10-11 12:06:08.523090] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.850 [2024-10-11 12:06:08.523247] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.850 [2024-10-11 12:06:08.523255] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.850 [2024-10-11 12:06:08.523260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.850 [2024-10-11 12:06:08.525697] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.850 [2024-10-11 12:06:08.535016] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.850 [2024-10-11 12:06:08.535263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.850 [2024-10-11 12:06:08.535285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.850 [2024-10-11 12:06:08.535292] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.850 [2024-10-11 12:06:08.535450] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.850 [2024-10-11 12:06:08.535604] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.850 [2024-10-11 12:06:08.535611] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.850 [2024-10-11 12:06:08.535617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.850 [2024-10-11 12:06:08.538060] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.850 [2024-10-11 12:06:08.547672] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.850 [2024-10-11 12:06:08.548138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.850 [2024-10-11 12:06:08.548154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:05.850 [2024-10-11 12:06:08.548160] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:05.850 [2024-10-11 12:06:08.548313] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:05.850 [2024-10-11 12:06:08.548465] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.850 [2024-10-11 12:06:08.548473] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.850 [2024-10-11 12:06:08.548478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.850 [2024-10-11 12:06:08.550916] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.113 [2024-10-11 12:06:08.560380] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.113 [2024-10-11 12:06:08.560882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.113 [2024-10-11 12:06:08.560896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:06.113 [2024-10-11 12:06:08.560902] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:06.113 [2024-10-11 12:06:08.561054] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:06.113 [2024-10-11 12:06:08.561211] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.113 [2024-10-11 12:06:08.561219] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.113 [2024-10-11 12:06:08.561224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.113 [2024-10-11 12:06:08.563655] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.113 [2024-10-11 12:06:08.573117] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.113 [2024-10-11 12:06:08.573475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.113 [2024-10-11 12:06:08.573488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:06.113 [2024-10-11 12:06:08.573494] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:06.113 [2024-10-11 12:06:08.573645] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:06.113 [2024-10-11 12:06:08.573797] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.113 [2024-10-11 12:06:08.573804] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.113 [2024-10-11 12:06:08.573810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.113 [2024-10-11 12:06:08.576253] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.113 [2024-10-11 12:06:08.585860] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.113 [2024-10-11 12:06:08.586359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.113 [2024-10-11 12:06:08.586373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:06.113 [2024-10-11 12:06:08.586379] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:06.113 [2024-10-11 12:06:08.586530] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:06.113 [2024-10-11 12:06:08.586683] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.113 [2024-10-11 12:06:08.586691] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.113 [2024-10-11 12:06:08.586697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.113 [2024-10-11 12:06:08.589132] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.113 [2024-10-11 12:06:08.598516] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.113 [2024-10-11 12:06:08.598870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.113 [2024-10-11 12:06:08.598885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:06.113 [2024-10-11 12:06:08.598894] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:06.114 [2024-10-11 12:06:08.599045] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:06.114 [2024-10-11 12:06:08.599203] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.114 [2024-10-11 12:06:08.599210] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.114 [2024-10-11 12:06:08.599216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.114 [2024-10-11 12:06:08.601649] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.114 [2024-10-11 12:06:08.611253] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.114 [2024-10-11 12:06:08.611751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.114 [2024-10-11 12:06:08.611765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:06.114 [2024-10-11 12:06:08.611771] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:06.114 [2024-10-11 12:06:08.611922] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:06.114 [2024-10-11 12:06:08.612080] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.114 [2024-10-11 12:06:08.612087] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.114 [2024-10-11 12:06:08.612092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.114 [2024-10-11 12:06:08.614524] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.114 [2024-10-11 12:06:08.623985] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.114 [2024-10-11 12:06:08.624459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.114 [2024-10-11 12:06:08.624491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:06.114 [2024-10-11 12:06:08.624501] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:06.114 [2024-10-11 12:06:08.624668] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:06.114 [2024-10-11 12:06:08.624823] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.114 [2024-10-11 12:06:08.624831] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.114 [2024-10-11 12:06:08.624837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.114 [2024-10-11 12:06:08.627282] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.114 [2024-10-11 12:06:08.636599] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.114 [2024-10-11 12:06:08.637060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.114 [2024-10-11 12:06:08.637081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:06.114 [2024-10-11 12:06:08.637087] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:06.114 [2024-10-11 12:06:08.637239] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:06.114 [2024-10-11 12:06:08.637392] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.114 [2024-10-11 12:06:08.637403] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.114 [2024-10-11 12:06:08.637408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.114 [2024-10-11 12:06:08.639840] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.114 [2024-10-11 12:06:08.649306] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.114 [2024-10-11 12:06:08.649879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.114 [2024-10-11 12:06:08.649912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:06.114 [2024-10-11 12:06:08.649921] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:06.114 [2024-10-11 12:06:08.650093] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:06.114 [2024-10-11 12:06:08.650249] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.114 [2024-10-11 12:06:08.650257] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.114 [2024-10-11 12:06:08.650263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.114 [2024-10-11 12:06:08.652700] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.114 [2024-10-11 12:06:08.662022] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.114 [2024-10-11 12:06:08.662536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.114 [2024-10-11 12:06:08.662552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:06.114 [2024-10-11 12:06:08.662558] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:06.114 [2024-10-11 12:06:08.662710] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:06.114 [2024-10-11 12:06:08.662863] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.114 [2024-10-11 12:06:08.662870] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.114 [2024-10-11 12:06:08.662875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.114 [2024-10-11 12:06:08.665311] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.114 [2024-10-11 12:06:08.674649] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.114 [2024-10-11 12:06:08.675152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.114 [2024-10-11 12:06:08.675166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:06.114 [2024-10-11 12:06:08.675173] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:06.114 [2024-10-11 12:06:08.675324] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:06.114 [2024-10-11 12:06:08.675476] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.114 [2024-10-11 12:06:08.675483] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.114 [2024-10-11 12:06:08.675488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.114 [2024-10-11 12:06:08.677922] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.114 [2024-10-11 12:06:08.687392] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.114 [2024-10-11 12:06:08.687890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.114 [2024-10-11 12:06:08.687903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:06.114 [2024-10-11 12:06:08.687910] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:06.114 [2024-10-11 12:06:08.688069] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:06.114 [2024-10-11 12:06:08.688223] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.114 [2024-10-11 12:06:08.688230] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.114 [2024-10-11 12:06:08.688235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.114 [2024-10-11 12:06:08.690667] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.114 [2024-10-11 12:06:08.700139] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.114 [2024-10-11 12:06:08.700591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.114 [2024-10-11 12:06:08.700605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:06.114 [2024-10-11 12:06:08.700611] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:06.114 [2024-10-11 12:06:08.700763] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:06.114 [2024-10-11 12:06:08.700915] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.114 [2024-10-11 12:06:08.700924] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.114 [2024-10-11 12:06:08.700929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.114 [2024-10-11 12:06:08.703368] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.114 [2024-10-11 12:06:08.712829] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.114 [2024-10-11 12:06:08.713356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.114 [2024-10-11 12:06:08.713389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:06.114 [2024-10-11 12:06:08.713399] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:06.114 [2024-10-11 12:06:08.713566] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:06.114 [2024-10-11 12:06:08.713721] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.114 [2024-10-11 12:06:08.713729] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.114 [2024-10-11 12:06:08.713735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.114 [2024-10-11 12:06:08.716179] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.114 [2024-10-11 12:06:08.725505] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.114 [2024-10-11 12:06:08.725964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.114 [2024-10-11 12:06:08.725980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:06.114 [2024-10-11 12:06:08.725990] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:06.114 [2024-10-11 12:06:08.726147] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:06.114 [2024-10-11 12:06:08.726300] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.114 [2024-10-11 12:06:08.726308] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.114 [2024-10-11 12:06:08.726313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.114 [2024-10-11 12:06:08.728747] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.114 [2024-10-11 12:06:08.738209] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.115 [2024-10-11 12:06:08.738544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.115 [2024-10-11 12:06:08.738559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:06.115 [2024-10-11 12:06:08.738565] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:06.115 [2024-10-11 12:06:08.738718] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:06.115 [2024-10-11 12:06:08.738871] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.115 [2024-10-11 12:06:08.738878] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.115 [2024-10-11 12:06:08.738883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.115 [2024-10-11 12:06:08.741322] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.115 [2024-10-11 12:06:08.750865] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.115 [2024-10-11 12:06:08.751302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.115 [2024-10-11 12:06:08.751316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:06.115 [2024-10-11 12:06:08.751322] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:06.115 [2024-10-11 12:06:08.751473] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:06.115 [2024-10-11 12:06:08.751626] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.115 [2024-10-11 12:06:08.751634] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.115 [2024-10-11 12:06:08.751639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.115 [2024-10-11 12:06:08.754075] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.115 [2024-10-11 12:06:08.763538] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.115 [2024-10-11 12:06:08.763983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.115 [2024-10-11 12:06:08.763996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:06.115 [2024-10-11 12:06:08.764003] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:06.115 [2024-10-11 12:06:08.764158] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:06.115 [2024-10-11 12:06:08.764311] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.115 [2024-10-11 12:06:08.764321] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.115 [2024-10-11 12:06:08.764327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.115 [2024-10-11 12:06:08.766759] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.115 [2024-10-11 12:06:08.776232] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.115 [2024-10-11 12:06:08.776576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.115 [2024-10-11 12:06:08.776589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:06.115 [2024-10-11 12:06:08.776595] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:06.115 [2024-10-11 12:06:08.776746] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:06.115 [2024-10-11 12:06:08.776899] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.115 [2024-10-11 12:06:08.776906] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.115 [2024-10-11 12:06:08.776911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.115 [2024-10-11 12:06:08.779348] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.115 [2024-10-11 12:06:08.788948] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.115 [2024-10-11 12:06:08.789411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.115 [2024-10-11 12:06:08.789444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:06.115 [2024-10-11 12:06:08.789453] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:06.115 [2024-10-11 12:06:08.789621] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:06.115 [2024-10-11 12:06:08.789777] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.115 [2024-10-11 12:06:08.789785] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.115 [2024-10-11 12:06:08.789790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.115 [2024-10-11 12:06:08.792247] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.115 [2024-10-11 12:06:08.801571] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.115 [2024-10-11 12:06:08.802105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.115 [2024-10-11 12:06:08.802127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:06.115 [2024-10-11 12:06:08.802134] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:06.115 [2024-10-11 12:06:08.802292] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:06.115 [2024-10-11 12:06:08.802446] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.115 [2024-10-11 12:06:08.802454] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.115 [2024-10-11 12:06:08.802459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.115 [2024-10-11 12:06:08.804896] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.115 [2024-10-11 12:06:08.814216] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.115 [2024-10-11 12:06:08.814680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.115 [2024-10-11 12:06:08.814694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:06.115 [2024-10-11 12:06:08.814701] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:06.115 [2024-10-11 12:06:08.814852] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:06.115 [2024-10-11 12:06:08.815005] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.115 [2024-10-11 12:06:08.815014] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.115 [2024-10-11 12:06:08.815019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.377 [2024-10-11 12:06:08.817455] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.377 [2024-10-11 12:06:08.826913] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.377 [2024-10-11 12:06:08.827381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.377 [2024-10-11 12:06:08.827395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:06.377 [2024-10-11 12:06:08.827401] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:06.377 [2024-10-11 12:06:08.827552] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:06.377 [2024-10-11 12:06:08.827705] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.377 [2024-10-11 12:06:08.827713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.377 [2024-10-11 12:06:08.827718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.377 [2024-10-11 12:06:08.830154] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.377 [2024-10-11 12:06:08.839609] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.377 [2024-10-11 12:06:08.840200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.377 [2024-10-11 12:06:08.840232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:06.377 [2024-10-11 12:06:08.840243] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:06.377 [2024-10-11 12:06:08.840413] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:06.377 [2024-10-11 12:06:08.840568] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.377 [2024-10-11 12:06:08.840577] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.377 [2024-10-11 12:06:08.840584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.377 [2024-10-11 12:06:08.843029] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.377 [2024-10-11 12:06:08.852358] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.378 [2024-10-11 12:06:08.852950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.378 [2024-10-11 12:06:08.852983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:06.378 [2024-10-11 12:06:08.852992] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:06.378 [2024-10-11 12:06:08.853171] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:06.378 [2024-10-11 12:06:08.853327] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.378 [2024-10-11 12:06:08.853335] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.378 [2024-10-11 12:06:08.853341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.378 [2024-10-11 12:06:08.855779] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.378 [2024-10-11 12:06:08.865104] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.378 [2024-10-11 12:06:08.865597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.378 [2024-10-11 12:06:08.865613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:06.378 [2024-10-11 12:06:08.865620] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:06.378 [2024-10-11 12:06:08.865772] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:06.378 [2024-10-11 12:06:08.865925] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.378 [2024-10-11 12:06:08.865932] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.378 [2024-10-11 12:06:08.865938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.378 [2024-10-11 12:06:08.868376] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.378 [2024-10-11 12:06:08.877724] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.378 [2024-10-11 12:06:08.878072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.378 [2024-10-11 12:06:08.878088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:06.378 [2024-10-11 12:06:08.878094] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:06.378 [2024-10-11 12:06:08.878246] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:06.378 [2024-10-11 12:06:08.878399] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.378 [2024-10-11 12:06:08.878406] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.378 [2024-10-11 12:06:08.878411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.378 [2024-10-11 12:06:08.880842] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.378 [2024-10-11 12:06:08.890444] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.378 [2024-10-11 12:06:08.890894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.378 [2024-10-11 12:06:08.890907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:06.378 [2024-10-11 12:06:08.890913] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:06.378 [2024-10-11 12:06:08.891069] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:06.378 [2024-10-11 12:06:08.891223] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.378 [2024-10-11 12:06:08.891230] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.378 [2024-10-11 12:06:08.891239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.378 [2024-10-11 12:06:08.893681] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.378 [2024-10-11 12:06:08.903146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.378 12:06:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:06.378 [2024-10-11 12:06:08.903743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.378 [2024-10-11 12:06:08.903777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:06.378 [2024-10-11 12:06:08.903786] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:06.378 12:06:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:29:06.378 [2024-10-11 12:06:08.903953] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:06.378 [2024-10-11 12:06:08.904115] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.378 [2024-10-11 12:06:08.904123] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.378 [2024-10-11 12:06:08.904129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.378 12:06:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:06.378 12:06:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:06.378 12:06:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:06.378 [2024-10-11 12:06:08.906565] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.378 [2024-10-11 12:06:08.915889] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.378 [2024-10-11 12:06:08.916376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.378 [2024-10-11 12:06:08.916392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:06.378 [2024-10-11 12:06:08.916399] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:06.378 [2024-10-11 12:06:08.916552] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:06.378 [2024-10-11 12:06:08.916704] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.378 [2024-10-11 12:06:08.916713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.378 [2024-10-11 12:06:08.916720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.378 [2024-10-11 12:06:08.919160] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.378 [2024-10-11 12:06:08.928622] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.378 [2024-10-11 12:06:08.929084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.378 [2024-10-11 12:06:08.929099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:06.378 [2024-10-11 12:06:08.929105] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:06.378 [2024-10-11 12:06:08.929257] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:06.378 [2024-10-11 12:06:08.929409] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.378 [2024-10-11 12:06:08.929417] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.378 [2024-10-11 12:06:08.929427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.378 [2024-10-11 12:06:08.931863] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.378 [2024-10-11 12:06:08.941330] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.378 [2024-10-11 12:06:08.941827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.378 [2024-10-11 12:06:08.941841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:06.378 [2024-10-11 12:06:08.941847] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:06.378 [2024-10-11 12:06:08.941998] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:06.378 [2024-10-11 12:06:08.942155] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.378 [2024-10-11 12:06:08.942163] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.378 [2024-10-11 12:06:08.942169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.378 12:06:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:06.378 12:06:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:06.378 12:06:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.378 [2024-10-11 12:06:08.944600] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.378 12:06:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:06.378 [2024-10-11 12:06:08.951513] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:06.378 [2024-10-11 12:06:08.954056] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.378 [2024-10-11 12:06:08.954554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.378 [2024-10-11 12:06:08.954567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:06.378 [2024-10-11 12:06:08.954573] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:06.378 [2024-10-11 12:06:08.954725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:06.378 [2024-10-11 12:06:08.954877] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.378 [2024-10-11 12:06:08.954884] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.378 [2024-10-11 12:06:08.954889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.378 12:06:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.378 [2024-10-11 12:06:08.957324] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
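In the trace above, host/bdevperf.sh@17 issues the first target-side RPC of the bringup, rpc_cmd nvmf_create_transport -t tcp -o -u 8192, and the target acknowledges it with "*** TCP Transport Init ***" while the host-side reconnect loop keeps failing in the background. rpc_cmd is the autotest wrapper around SPDK's JSON-RPC client, so the same call can be issued standalone; in the sketch below the rpc.py path and the RPC socket are assumptions (adjust -s if the target was started with a non-default socket):

  # Create the TCP transport on the running nvmf_tgt (mirrors the rpc_cmd call in the log)
  ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192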
00:29:06.378 12:06:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:06.378 12:06:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.378 12:06:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:06.378 [2024-10-11 12:06:08.966778] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.378 [2024-10-11 12:06:08.967349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.378 [2024-10-11 12:06:08.967382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:06.378 [2024-10-11 12:06:08.967391] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:06.379 [2024-10-11 12:06:08.967563] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:06.379 [2024-10-11 12:06:08.967718] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.379 [2024-10-11 12:06:08.967725] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.379 [2024-10-11 12:06:08.967731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.379 [2024-10-11 12:06:08.970171] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.379 [2024-10-11 12:06:08.979495] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.379 [2024-10-11 12:06:08.979963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.379 [2024-10-11 12:06:08.979979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:06.379 [2024-10-11 12:06:08.979985] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:06.379 [2024-10-11 12:06:08.980143] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:06.379 [2024-10-11 12:06:08.980296] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.379 [2024-10-11 12:06:08.980303] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.379 [2024-10-11 12:06:08.980309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.379 [2024-10-11 12:06:08.982739] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.379 Malloc0 00:29:06.379 12:06:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.379 12:06:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:06.379 12:06:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.379 12:06:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:06.379 [2024-10-11 12:06:08.992201] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.379 [2024-10-11 12:06:08.992752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.379 [2024-10-11 12:06:08.992785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:06.379 [2024-10-11 12:06:08.992795] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:06.379 [2024-10-11 12:06:08.992962] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:06.379 [2024-10-11 12:06:08.993125] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.379 [2024-10-11 12:06:08.993133] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.379 [2024-10-11 12:06:08.993139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.379 [2024-10-11 12:06:08.995575] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.379 12:06:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.379 12:06:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:06.379 12:06:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.379 12:06:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:06.379 [2024-10-11 12:06:09.004927] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.379 [2024-10-11 12:06:09.005496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.379 [2024-10-11 12:06:09.005528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c0540 with addr=10.0.0.2, port=4420 00:29:06.379 [2024-10-11 12:06:09.005538] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0540 is same with the state(6) to be set 00:29:06.379 [2024-10-11 12:06:09.005705] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c0540 (9): Bad file descriptor 00:29:06.379 [2024-10-11 12:06:09.005861] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.379 [2024-10-11 12:06:09.005869] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.379 [2024-10-11 12:06:09.005875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
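Interleaved with those retries, host/bdevperf.sh is rebuilding the target configuration over RPC: a TCP transport with the -o and -u 8192 options shown in the trace, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with Malloc0 as its namespace, and (just below) a TCP listener on 10.0.0.2:4420, after which the controller reset finally succeeds. The rpc_cmd wrapper used by the test drives scripts/rpc.py, so the sequence is roughly the following sketch, assuming it is run from the SPDK repository root against the default RPC socket:

$ scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
$ scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
$ scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$ scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$ scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420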
00:29:06.379 [2024-10-11 12:06:09.008319] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.379 12:06:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.379 12:06:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:06.379 12:06:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.379 12:06:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:06.379 [2024-10-11 12:06:09.017637] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.379 [2024-10-11 12:06:09.017891] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:06.379 12:06:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.379 12:06:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2108807 00:29:06.379 [2024-10-11 12:06:09.051378] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:07.891 4678.57 IOPS, 18.28 MiB/s [2024-10-11T10:06:11.535Z] 5715.12 IOPS, 22.32 MiB/s [2024-10-11T10:06:12.920Z] 6523.11 IOPS, 25.48 MiB/s [2024-10-11T10:06:13.491Z] 7166.00 IOPS, 27.99 MiB/s [2024-10-11T10:06:14.875Z] 7686.64 IOPS, 30.03 MiB/s [2024-10-11T10:06:15.816Z] 8117.42 IOPS, 31.71 MiB/s [2024-10-11T10:06:16.769Z] 8487.77 IOPS, 33.16 MiB/s [2024-10-11T10:06:17.710Z] 8798.57 IOPS, 34.37 MiB/s [2024-10-11T10:06:17.710Z] 9086.13 IOPS, 35.49 MiB/s 00:29:15.007 Latency(us) 00:29:15.007 [2024-10-11T10:06:17.710Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:15.007 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:15.007 Verification LBA range: start 0x0 length 0x4000 00:29:15.007 Nvme1n1 : 15.01 9088.71 35.50 12832.47 0.00 5819.95 566.61 15947.09 00:29:15.007 [2024-10-11T10:06:17.710Z] =================================================================================================================== 00:29:15.007 [2024-10-11T10:06:17.710Z] Total : 9088.71 35.50 12832.47 0.00 5819.95 566.61 15947.09 00:29:15.007 12:06:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:29:15.007 12:06:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:15.007 12:06:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.007 12:06:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:15.007 12:06:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.007 12:06:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:29:15.007 12:06:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:29:15.007 12:06:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:15.007 12:06:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:29:15.007 12:06:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:15.007 12:06:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:29:15.007 12:06:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:15.007 12:06:17 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:15.007 rmmod nvme_tcp 00:29:15.007 rmmod nvme_fabrics 00:29:15.007 rmmod nvme_keyring 00:29:15.007 12:06:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:15.007 12:06:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:29:15.007 12:06:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:29:15.007 12:06:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@515 -- # '[' -n 2109826 ']' 00:29:15.007 12:06:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # killprocess 2109826 00:29:15.007 12:06:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 2109826 ']' 00:29:15.007 12:06:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 2109826 00:29:15.007 12:06:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:29:15.007 12:06:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:15.007 12:06:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2109826 00:29:15.268 12:06:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:15.268 12:06:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:15.269 12:06:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2109826' 00:29:15.269 killing process with pid 2109826 00:29:15.269 12:06:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 2109826 00:29:15.269 12:06:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 2109826 00:29:15.269 12:06:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:15.269 12:06:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:15.269 12:06:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:15.269 12:06:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:29:15.269 12:06:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-save 00:29:15.269 12:06:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:15.269 12:06:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-restore 00:29:15.269 12:06:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:15.269 12:06:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:15.269 12:06:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:15.269 12:06:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:15.269 12:06:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:17.813 12:06:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:17.813 00:29:17.813 real 0m28.575s 00:29:17.813 user 1m3.786s 00:29:17.813 sys 0m7.851s 00:29:17.813 12:06:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:17.813 12:06:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:17.813 
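Once the listener is up, bdevperf runs its 15-second verify workload; throughput ramps from roughly 4.7k to 9.1k IOPS and the final table reports 9088.71 IOPS (35.50 MiB/s) over the 15.01 s run. The test then tears the environment down: the subsystem is deleted, nvmftestfini unloads the nvme-tcp and nvme-fabrics modules (pulling nvme_keyring out with them), kills the nvmf_tgt process (pid 2109826 in this run), strips the SPDK_NVMF iptables rules, and removes the target network namespace. A rough sketch of that cleanup, using the pid and namespace name from this particular log (the _remove_spdk_ns helper is assumed to reduce to the final netns delete):

$ scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
$ modprobe -v -r nvme-tcp
$ modprobe -v -r nvme-fabrics
$ kill 2109826                                        # nvmf_tgt started earlier by the test
$ iptables-save | grep -v SPDK_NVMF | iptables-restore
$ ip netns delete cvl_0_0_ns_spdk                     # assumed equivalent of _remove_spdk_ns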
************************************ 00:29:17.813 END TEST nvmf_bdevperf 00:29:17.813 ************************************ 00:29:17.813 12:06:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:17.813 12:06:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:17.813 12:06:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:17.813 12:06:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.813 ************************************ 00:29:17.813 START TEST nvmf_target_disconnect 00:29:17.813 ************************************ 00:29:17.813 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:17.813 * Looking for test storage... 00:29:17.813 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:17.813 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:17.813 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:29:17.813 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:17.813 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:17.813 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:17.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.814 --rc genhtml_branch_coverage=1 00:29:17.814 --rc genhtml_function_coverage=1 00:29:17.814 --rc genhtml_legend=1 00:29:17.814 --rc geninfo_all_blocks=1 00:29:17.814 --rc geninfo_unexecuted_blocks=1 00:29:17.814 00:29:17.814 ' 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:17.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.814 --rc genhtml_branch_coverage=1 00:29:17.814 --rc genhtml_function_coverage=1 00:29:17.814 --rc genhtml_legend=1 00:29:17.814 --rc geninfo_all_blocks=1 00:29:17.814 --rc geninfo_unexecuted_blocks=1 00:29:17.814 00:29:17.814 ' 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:17.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.814 --rc genhtml_branch_coverage=1 00:29:17.814 --rc genhtml_function_coverage=1 00:29:17.814 --rc genhtml_legend=1 00:29:17.814 --rc geninfo_all_blocks=1 00:29:17.814 --rc geninfo_unexecuted_blocks=1 00:29:17.814 00:29:17.814 ' 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:17.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.814 --rc genhtml_branch_coverage=1 00:29:17.814 --rc genhtml_function_coverage=1 00:29:17.814 --rc genhtml_legend=1 00:29:17.814 --rc geninfo_all_blocks=1 00:29:17.814 --rc geninfo_unexecuted_blocks=1 00:29:17.814 00:29:17.814 ' 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:17.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:17.814 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:17.815 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:17.815 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:17.815 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:29:17.815 12:06:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:25.961 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:25.961 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:25.961 Found net devices under 0000:31:00.0: cvl_0_0 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:25.961 Found net devices under 0000:31:00.1: cvl_0_1 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
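By this point nvmftestinit has located the two E810 ports (0000:31:00.0 and 0000:31:00.1, device 0x159b) and their net devices cvl_0_0 and cvl_0_1, and nvmf_tcp_init is about to split them into a target side and an initiator side: the trace that follows creates the cvl_0_0_ns_spdk namespace, moves cvl_0_0 into it with address 10.0.0.2/24, gives cvl_0_1 address 10.0.0.1/24 in the root namespace, opens TCP port 4420 in iptables, and checks reachability with a ping in each direction. Condensed into plain iproute2/iptables commands (a sketch of what the helper functions do, not the helpers themselves):

$ ip netns add cvl_0_0_ns_spdk
$ ip link set cvl_0_0 netns cvl_0_0_ns_spdk
$ ip addr add 10.0.0.1/24 dev cvl_0_1
$ ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
$ ip link set cvl_0_1 up
$ ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
$ ip netns exec cvl_0_0_ns_spdk ip link set lo up
$ iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
$ ping -c 1 10.0.0.2
$ ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1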
00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:25.961 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:25.962 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:25.962 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:25.962 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:25.962 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:25.962 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:25.962 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:25.962 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:25.962 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:25.962 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.617 ms 00:29:25.962 00:29:25.962 --- 10.0.0.2 ping statistics --- 00:29:25.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:25.962 rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms 00:29:25.962 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:25.962 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:25.962 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms 00:29:25.962 00:29:25.962 --- 10.0.0.1 ping statistics --- 00:29:25.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:25.962 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:29:25.962 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:25.962 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # return 0 00:29:25.962 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:25.962 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:25.962 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:25.962 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:25.962 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:25.962 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:25.962 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:25.962 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:25.962 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:25.962 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:25.962 12:06:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:25.962 ************************************ 00:29:25.962 START TEST nvmf_target_disconnect_tc1 00:29:25.962 ************************************ 00:29:25.962 12:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:29:25.962 12:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:25.962 12:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:29:25.962 12:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:25.962 12:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:25.962 12:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:25.962 12:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:25.962 12:06:28 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:25.962 12:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:25.962 12:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:25.962 12:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:25.962 12:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:25.962 12:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:25.962 [2024-10-11 12:06:28.154848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.962 [2024-10-11 12:06:28.154941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169f2b0 with addr=10.0.0.2, port=4420 00:29:25.962 [2024-10-11 12:06:28.154988] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:25.962 [2024-10-11 12:06:28.155009] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:25.962 [2024-10-11 12:06:28.155022] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:29:25.962 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:25.962 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:25.962 Initializing NVMe Controllers 00:29:25.962 12:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:29:25.962 12:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:25.962 12:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:25.962 12:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:25.962 00:29:25.962 real 0m0.134s 00:29:25.962 user 0m0.052s 00:29:25.962 sys 0m0.082s 00:29:25.962 12:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:25.962 12:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:25.962 ************************************ 00:29:25.962 END TEST nvmf_target_disconnect_tc1 00:29:25.962 ************************************ 00:29:25.962 12:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:25.962 12:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:25.962 12:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:29:25.962 12:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:25.962 ************************************ 00:29:25.962 START TEST nvmf_target_disconnect_tc2 00:29:25.962 ************************************ 00:29:25.962 12:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:29:25.962 12:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:25.962 12:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:25.962 12:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:25.962 12:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:25.962 12:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:25.962 12:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=2116637 00:29:25.962 12:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 2116637 00:29:25.962 12:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:25.962 12:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2116637 ']' 00:29:25.962 12:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:25.962 12:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:25.962 12:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:25.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:25.962 12:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:25.962 12:06:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:25.962 [2024-10-11 12:06:28.316800] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:29:25.962 [2024-10-11 12:06:28.316860] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:25.962 [2024-10-11 12:06:28.406229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:25.962 [2024-10-11 12:06:28.459003] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:25.962 [2024-10-11 12:06:28.459054] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:25.962 [2024-10-11 12:06:28.459071] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:25.962 [2024-10-11 12:06:28.459079] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:25.962 [2024-10-11 12:06:28.459087] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:25.962 [2024-10-11 12:06:28.461136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:25.962 [2024-10-11 12:06:28.461315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:25.962 [2024-10-11 12:06:28.461478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:25.962 [2024-10-11 12:06:28.461479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:26.536 12:06:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:26.536 12:06:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:29:26.536 12:06:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:26.536 12:06:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:26.536 12:06:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.536 12:06:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:26.536 12:06:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:26.536 12:06:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.536 12:06:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.536 Malloc0 00:29:26.536 12:06:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.536 12:06:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:26.536 12:06:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.536 12:06:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.536 [2024-10-11 12:06:29.226623] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:26.536 12:06:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.536 12:06:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:26.536 12:06:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.536 12:06:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.840 12:06:29 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.840 12:06:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:26.840 12:06:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.840 12:06:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.840 12:06:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.840 12:06:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:26.840 12:06:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.840 12:06:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.840 [2024-10-11 12:06:29.266982] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:26.840 12:06:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.840 12:06:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:26.840 12:06:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.840 12:06:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.840 12:06:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.840 12:06:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2116727 00:29:26.840 12:06:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:29:26.840 12:06:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:28.806 12:06:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2116637 00:29:28.806 12:06:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:29:28.806 Read completed with error (sct=0, sc=8) 00:29:28.806 starting I/O failed 00:29:28.806 Read completed with error (sct=0, sc=8) 00:29:28.806 starting I/O failed 00:29:28.806 Read completed with error (sct=0, sc=8) 00:29:28.806 starting I/O failed 00:29:28.806 Read completed with error (sct=0, sc=8) 00:29:28.806 starting I/O failed 00:29:28.806 Read completed with error (sct=0, sc=8) 00:29:28.806 starting I/O failed 00:29:28.806 Read completed with error (sct=0, sc=8) 00:29:28.806 starting I/O failed 00:29:28.806 Read completed with error 
(sct=0, sc=8) 00:29:28.806 starting I/O failed 00:29:28.806 Read completed with error (sct=0, sc=8) 00:29:28.806 starting I/O failed 00:29:28.806 Write completed with error (sct=0, sc=8) 00:29:28.806 starting I/O failed 00:29:28.806 Write completed with error (sct=0, sc=8) 00:29:28.806 starting I/O failed 00:29:28.806 Read completed with error (sct=0, sc=8) 00:29:28.806 starting I/O failed 00:29:28.806 Write completed with error (sct=0, sc=8) 00:29:28.806 starting I/O failed 00:29:28.806 Read completed with error (sct=0, sc=8) 00:29:28.806 starting I/O failed 00:29:28.806 Write completed with error (sct=0, sc=8) 00:29:28.806 starting I/O failed 00:29:28.806 Read completed with error (sct=0, sc=8) 00:29:28.806 starting I/O failed 00:29:28.806 Read completed with error (sct=0, sc=8) 00:29:28.806 starting I/O failed 00:29:28.806 Read completed with error (sct=0, sc=8) 00:29:28.806 starting I/O failed 00:29:28.806 Read completed with error (sct=0, sc=8) 00:29:28.806 starting I/O failed 00:29:28.806 Read completed with error (sct=0, sc=8) 00:29:28.806 starting I/O failed 00:29:28.806 Write completed with error (sct=0, sc=8) 00:29:28.806 starting I/O failed 00:29:28.806 Write completed with error (sct=0, sc=8) 00:29:28.806 starting I/O failed 00:29:28.806 Read completed with error (sct=0, sc=8) 00:29:28.806 starting I/O failed 00:29:28.806 Write completed with error (sct=0, sc=8) 00:29:28.806 starting I/O failed 00:29:28.806 Write completed with error (sct=0, sc=8) 00:29:28.806 starting I/O failed 00:29:28.806 Write completed with error (sct=0, sc=8) 00:29:28.806 starting I/O failed 00:29:28.806 Read completed with error (sct=0, sc=8) 00:29:28.806 starting I/O failed 00:29:28.806 Read completed with error (sct=0, sc=8) 00:29:28.806 starting I/O failed 00:29:28.807 Write completed with error (sct=0, sc=8) 00:29:28.807 starting I/O failed 00:29:28.807 Write completed with error (sct=0, sc=8) 00:29:28.807 starting I/O failed 00:29:28.807 Read completed with error (sct=0, sc=8) 00:29:28.807 starting I/O failed 00:29:28.807 Write completed with error (sct=0, sc=8) 00:29:28.807 starting I/O failed 00:29:28.807 Read completed with error (sct=0, sc=8) 00:29:28.807 starting I/O failed 00:29:28.807 [2024-10-11 12:06:31.305532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:28.807 [2024-10-11 12:06:31.305860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.807 [2024-10-11 12:06:31.305897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.807 qpair failed and we were unable to recover it. 00:29:28.807 [2024-10-11 12:06:31.306039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.807 [2024-10-11 12:06:31.306059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.807 qpair failed and we were unable to recover it. 00:29:28.807 [2024-10-11 12:06:31.306446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.807 [2024-10-11 12:06:31.306500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.807 qpair failed and we were unable to recover it. 
00:29:28.807 [2024-10-11 12:06:31.306743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.807 [2024-10-11 12:06:31.306761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.807 qpair failed and we were unable to recover it. 00:29:28.807 [2024-10-11 12:06:31.307301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.807 [2024-10-11 12:06:31.307358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.807 qpair failed and we were unable to recover it. 00:29:28.807 [2024-10-11 12:06:31.307616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.807 [2024-10-11 12:06:31.307634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.807 qpair failed and we were unable to recover it. 00:29:28.807 [2024-10-11 12:06:31.307778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.807 [2024-10-11 12:06:31.307794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.807 qpair failed and we were unable to recover it. 00:29:28.807 [2024-10-11 12:06:31.308130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.807 [2024-10-11 12:06:31.308145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.807 qpair failed and we were unable to recover it. 00:29:28.807 [2024-10-11 12:06:31.308464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.807 [2024-10-11 12:06:31.308479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.807 qpair failed and we were unable to recover it. 00:29:28.807 [2024-10-11 12:06:31.308691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.807 [2024-10-11 12:06:31.308705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.807 qpair failed and we were unable to recover it. 00:29:28.807 [2024-10-11 12:06:31.308983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.807 [2024-10-11 12:06:31.308997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.807 qpair failed and we were unable to recover it. 00:29:28.807 [2024-10-11 12:06:31.309122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.807 [2024-10-11 12:06:31.309141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.807 qpair failed and we were unable to recover it. 00:29:28.807 [2024-10-11 12:06:31.309463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.807 [2024-10-11 12:06:31.309477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.807 qpair failed and we were unable to recover it. 
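The message pair that repeats from here on is the host's reconnect path spinning: errno 111 on Linux is ECONNREFUSED, meaning nothing is accepting TCP connections on 10.0.0.2:4420 while the target process is dead, so each nvme_tcp_qpair_connect_sock attempt fails and the qpair cannot be recovered yet. A quick, illustrative way to see the same refusal from a shell on the host (a sketch for inspection only, not part of the test script):

    # bash's /dev/tcp pseudo-device attempts a TCP connect; with the target gone
    # it fails the same way the NVMe/TCP initiator does (ECONNREFUSED).
    if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "10.0.0.2:4420 refused - no nvmf target listening (errno 111 / ECONNREFUSED)"
    fi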
00:29:28.807 [2024-10-11 12:06:31.309664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.807 [2024-10-11 12:06:31.309681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.807 qpair failed and we were unable to recover it. 00:29:28.807 [2024-10-11 12:06:31.309958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.807 [2024-10-11 12:06:31.309973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.807 qpair failed and we were unable to recover it. 00:29:28.807 [2024-10-11 12:06:31.310315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.807 [2024-10-11 12:06:31.310332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.807 qpair failed and we were unable to recover it. 00:29:28.807 [2024-10-11 12:06:31.310628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.807 [2024-10-11 12:06:31.310642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.807 qpair failed and we were unable to recover it. 00:29:28.807 [2024-10-11 12:06:31.310902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.807 [2024-10-11 12:06:31.310916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.807 qpair failed and we were unable to recover it. 00:29:28.807 [2024-10-11 12:06:31.311185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.807 [2024-10-11 12:06:31.311200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.807 qpair failed and we were unable to recover it. 00:29:28.807 [2024-10-11 12:06:31.311518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.807 [2024-10-11 12:06:31.311533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.807 qpair failed and we were unable to recover it. 00:29:28.807 [2024-10-11 12:06:31.311891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.807 [2024-10-11 12:06:31.311906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.807 qpair failed and we were unable to recover it. 00:29:28.807 [2024-10-11 12:06:31.312106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.807 [2024-10-11 12:06:31.312122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.807 qpair failed and we were unable to recover it. 00:29:28.807 [2024-10-11 12:06:31.312476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.807 [2024-10-11 12:06:31.312492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.807 qpair failed and we were unable to recover it. 
00:29:28.807 [2024-10-11 12:06:31.312798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.807 [2024-10-11 12:06:31.312814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.807 qpair failed and we were unable to recover it. 00:29:28.807 [2024-10-11 12:06:31.312966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.807 [2024-10-11 12:06:31.312981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.807 qpair failed and we were unable to recover it. 00:29:28.807 [2024-10-11 12:06:31.313338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.807 [2024-10-11 12:06:31.313353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.807 qpair failed and we were unable to recover it. 00:29:28.807 [2024-10-11 12:06:31.313680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.807 [2024-10-11 12:06:31.313695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.807 qpair failed and we were unable to recover it. 00:29:28.807 [2024-10-11 12:06:31.314031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.807 [2024-10-11 12:06:31.314046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.807 qpair failed and we were unable to recover it. 00:29:28.807 [2024-10-11 12:06:31.314383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.807 [2024-10-11 12:06:31.314398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.807 qpair failed and we were unable to recover it. 00:29:28.807 [2024-10-11 12:06:31.314718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.807 [2024-10-11 12:06:31.314733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.807 qpair failed and we were unable to recover it. 00:29:28.807 [2024-10-11 12:06:31.315075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.807 [2024-10-11 12:06:31.315091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.807 qpair failed and we were unable to recover it. 00:29:28.807 [2024-10-11 12:06:31.315458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.807 [2024-10-11 12:06:31.315472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.807 qpair failed and we were unable to recover it. 00:29:28.807 [2024-10-11 12:06:31.315871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.807 [2024-10-11 12:06:31.315886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.807 qpair failed and we were unable to recover it. 
00:29:28.807 [2024-10-11 12:06:31.316196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.807 [2024-10-11 12:06:31.316211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.807 qpair failed and we were unable to recover it. 00:29:28.807 [2024-10-11 12:06:31.316538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.807 [2024-10-11 12:06:31.316553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.807 qpair failed and we were unable to recover it. 00:29:28.807 [2024-10-11 12:06:31.316910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.807 [2024-10-11 12:06:31.316924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.807 qpair failed and we were unable to recover it. 00:29:28.807 [2024-10-11 12:06:31.317260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.807 [2024-10-11 12:06:31.317276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.807 qpair failed and we were unable to recover it. 00:29:28.807 [2024-10-11 12:06:31.317582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.807 [2024-10-11 12:06:31.317597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.807 qpair failed and we were unable to recover it. 00:29:28.808 [2024-10-11 12:06:31.317963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.808 [2024-10-11 12:06:31.317977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.808 qpair failed and we were unable to recover it. 00:29:28.808 [2024-10-11 12:06:31.318312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.808 [2024-10-11 12:06:31.318327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.808 qpair failed and we were unable to recover it. 00:29:28.808 [2024-10-11 12:06:31.318520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.808 [2024-10-11 12:06:31.318534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.808 qpair failed and we were unable to recover it. 00:29:28.808 [2024-10-11 12:06:31.318895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.808 [2024-10-11 12:06:31.318909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.808 qpair failed and we were unable to recover it. 00:29:28.808 [2024-10-11 12:06:31.319268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.808 [2024-10-11 12:06:31.319282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.808 qpair failed and we were unable to recover it. 
00:29:28.808 [2024-10-11 12:06:31.319626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.808 [2024-10-11 12:06:31.319641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.808 qpair failed and we were unable to recover it. 00:29:28.808 [2024-10-11 12:06:31.319964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.808 [2024-10-11 12:06:31.319979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.808 qpair failed and we were unable to recover it. 00:29:28.808 [2024-10-11 12:06:31.320305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.808 [2024-10-11 12:06:31.320320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.808 qpair failed and we were unable to recover it. 00:29:28.808 [2024-10-11 12:06:31.320503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.808 [2024-10-11 12:06:31.320518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.808 qpair failed and we were unable to recover it. 00:29:28.808 [2024-10-11 12:06:31.320780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.808 [2024-10-11 12:06:31.320795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.808 qpair failed and we were unable to recover it. 00:29:28.808 [2024-10-11 12:06:31.320972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.808 [2024-10-11 12:06:31.320987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.808 qpair failed and we were unable to recover it. 00:29:28.808 [2024-10-11 12:06:31.321311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.808 [2024-10-11 12:06:31.321327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.808 qpair failed and we were unable to recover it. 00:29:28.808 [2024-10-11 12:06:31.321651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.808 [2024-10-11 12:06:31.321665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.808 qpair failed and we were unable to recover it. 00:29:28.808 [2024-10-11 12:06:31.321988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.808 [2024-10-11 12:06:31.322005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.808 qpair failed and we were unable to recover it. 00:29:28.808 [2024-10-11 12:06:31.322317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.808 [2024-10-11 12:06:31.322331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.808 qpair failed and we were unable to recover it. 
00:29:28.808 [2024-10-11 12:06:31.322632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.808 [2024-10-11 12:06:31.322646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.808 qpair failed and we were unable to recover it. 00:29:28.808 [2024-10-11 12:06:31.322954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.808 [2024-10-11 12:06:31.322968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.808 qpair failed and we were unable to recover it. 00:29:28.808 [2024-10-11 12:06:31.323278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.808 [2024-10-11 12:06:31.323293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.808 qpair failed and we were unable to recover it. 00:29:28.808 [2024-10-11 12:06:31.323600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.808 [2024-10-11 12:06:31.323615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.808 qpair failed and we were unable to recover it. 00:29:28.808 [2024-10-11 12:06:31.323789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.808 [2024-10-11 12:06:31.323803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.808 qpair failed and we were unable to recover it. 00:29:28.808 [2024-10-11 12:06:31.324125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.808 [2024-10-11 12:06:31.324139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.808 qpair failed and we were unable to recover it. 00:29:28.808 [2024-10-11 12:06:31.324369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.808 [2024-10-11 12:06:31.324384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.808 qpair failed and we were unable to recover it. 00:29:28.808 [2024-10-11 12:06:31.324688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.808 [2024-10-11 12:06:31.324704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.808 qpair failed and we were unable to recover it. 00:29:28.808 [2024-10-11 12:06:31.325014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.808 [2024-10-11 12:06:31.325029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.808 qpair failed and we were unable to recover it. 00:29:28.808 [2024-10-11 12:06:31.325403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.808 [2024-10-11 12:06:31.325420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.808 qpair failed and we were unable to recover it. 
00:29:28.808 [2024-10-11 12:06:31.325768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.808 [2024-10-11 12:06:31.325784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.808 qpair failed and we were unable to recover it. 00:29:28.808 [2024-10-11 12:06:31.326100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.808 [2024-10-11 12:06:31.326116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.808 qpair failed and we were unable to recover it. 00:29:28.808 [2024-10-11 12:06:31.326423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.808 [2024-10-11 12:06:31.326439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.808 qpair failed and we were unable to recover it. 00:29:28.808 [2024-10-11 12:06:31.326792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.808 [2024-10-11 12:06:31.326808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.808 qpair failed and we were unable to recover it. 00:29:28.808 [2024-10-11 12:06:31.327031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.808 [2024-10-11 12:06:31.327048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.808 qpair failed and we were unable to recover it. 00:29:28.808 [2024-10-11 12:06:31.327282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.808 [2024-10-11 12:06:31.327299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.808 qpair failed and we were unable to recover it. 00:29:28.808 [2024-10-11 12:06:31.327533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.808 [2024-10-11 12:06:31.327550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.808 qpair failed and we were unable to recover it. 00:29:28.808 [2024-10-11 12:06:31.327873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.808 [2024-10-11 12:06:31.327889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.808 qpair failed and we were unable to recover it. 00:29:28.808 [2024-10-11 12:06:31.328184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.808 [2024-10-11 12:06:31.328200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.808 qpair failed and we were unable to recover it. 00:29:28.808 [2024-10-11 12:06:31.328417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.808 [2024-10-11 12:06:31.328433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.808 qpair failed and we were unable to recover it. 
00:29:28.808 [2024-10-11 12:06:31.328778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.808 [2024-10-11 12:06:31.328794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.808 qpair failed and we were unable to recover it. 00:29:28.808 [2024-10-11 12:06:31.329003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.808 [2024-10-11 12:06:31.329020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.808 qpair failed and we were unable to recover it. 00:29:28.808 [2024-10-11 12:06:31.329350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.808 [2024-10-11 12:06:31.329366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.808 qpair failed and we were unable to recover it. 00:29:28.808 [2024-10-11 12:06:31.329726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.808 [2024-10-11 12:06:31.329742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.808 qpair failed and we were unable to recover it. 00:29:28.808 [2024-10-11 12:06:31.330077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.808 [2024-10-11 12:06:31.330093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.808 qpair failed and we were unable to recover it. 00:29:28.808 [2024-10-11 12:06:31.330435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.809 [2024-10-11 12:06:31.330452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.809 qpair failed and we were unable to recover it. 00:29:28.809 [2024-10-11 12:06:31.330757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.809 [2024-10-11 12:06:31.330773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.809 qpair failed and we were unable to recover it. 00:29:28.809 [2024-10-11 12:06:31.331104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.809 [2024-10-11 12:06:31.331121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.809 qpair failed and we were unable to recover it. 00:29:28.809 [2024-10-11 12:06:31.331410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.809 [2024-10-11 12:06:31.331426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.809 qpair failed and we were unable to recover it. 00:29:28.809 [2024-10-11 12:06:31.331794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.809 [2024-10-11 12:06:31.331810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.809 qpair failed and we were unable to recover it. 
00:29:28.809 [2024-10-11 12:06:31.332166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.809 [2024-10-11 12:06:31.332182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.809 qpair failed and we were unable to recover it. 00:29:28.809 [2024-10-11 12:06:31.332542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.809 [2024-10-11 12:06:31.332559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.809 qpair failed and we were unable to recover it. 00:29:28.809 [2024-10-11 12:06:31.332905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.809 [2024-10-11 12:06:31.332920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.809 qpair failed and we were unable to recover it. 00:29:28.809 [2024-10-11 12:06:31.333239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.809 [2024-10-11 12:06:31.333256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.809 qpair failed and we were unable to recover it. 00:29:28.809 [2024-10-11 12:06:31.333599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.809 [2024-10-11 12:06:31.333615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.809 qpair failed and we were unable to recover it. 00:29:28.809 [2024-10-11 12:06:31.333803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.809 [2024-10-11 12:06:31.333820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.809 qpair failed and we were unable to recover it. 00:29:28.809 [2024-10-11 12:06:31.334114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.809 [2024-10-11 12:06:31.334130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.809 qpair failed and we were unable to recover it. 00:29:28.809 [2024-10-11 12:06:31.334455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.809 [2024-10-11 12:06:31.334471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.809 qpair failed and we were unable to recover it. 00:29:28.809 [2024-10-11 12:06:31.334815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.809 [2024-10-11 12:06:31.334837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.809 qpair failed and we were unable to recover it. 00:29:28.809 [2024-10-11 12:06:31.335182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.809 [2024-10-11 12:06:31.335199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.809 qpair failed and we were unable to recover it. 
00:29:28.809 [2024-10-11 12:06:31.335549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.809 [2024-10-11 12:06:31.335565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.809 qpair failed and we were unable to recover it. 00:29:28.809 [2024-10-11 12:06:31.335917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.809 [2024-10-11 12:06:31.335935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.809 qpair failed and we were unable to recover it. 00:29:28.809 [2024-10-11 12:06:31.336130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.809 [2024-10-11 12:06:31.336150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.809 qpair failed and we were unable to recover it. 00:29:28.809 [2024-10-11 12:06:31.336489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.809 [2024-10-11 12:06:31.336508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.809 qpair failed and we were unable to recover it. 00:29:28.809 [2024-10-11 12:06:31.336929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.809 [2024-10-11 12:06:31.336948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.809 qpair failed and we were unable to recover it. 00:29:28.809 [2024-10-11 12:06:31.337245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.809 [2024-10-11 12:06:31.337265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.809 qpair failed and we were unable to recover it. 00:29:28.809 [2024-10-11 12:06:31.337596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.809 [2024-10-11 12:06:31.337615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.809 qpair failed and we were unable to recover it. 00:29:28.809 [2024-10-11 12:06:31.337934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.809 [2024-10-11 12:06:31.337953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.809 qpair failed and we were unable to recover it. 00:29:28.809 [2024-10-11 12:06:31.338283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.809 [2024-10-11 12:06:31.338303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.809 qpair failed and we were unable to recover it. 00:29:28.809 [2024-10-11 12:06:31.338645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.809 [2024-10-11 12:06:31.338663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.809 qpair failed and we were unable to recover it. 
00:29:28.809 [2024-10-11 12:06:31.338989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.809 [2024-10-11 12:06:31.339009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.809 qpair failed and we were unable to recover it. 00:29:28.809 [2024-10-11 12:06:31.339355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.809 [2024-10-11 12:06:31.339374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.809 qpair failed and we were unable to recover it. 00:29:28.809 [2024-10-11 12:06:31.339709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.809 [2024-10-11 12:06:31.339727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.809 qpair failed and we were unable to recover it. 00:29:28.809 [2024-10-11 12:06:31.340041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.809 [2024-10-11 12:06:31.340060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.809 qpair failed and we were unable to recover it. 00:29:28.809 [2024-10-11 12:06:31.340285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.809 [2024-10-11 12:06:31.340306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.809 qpair failed and we were unable to recover it. 00:29:28.809 [2024-10-11 12:06:31.340624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.809 [2024-10-11 12:06:31.340643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.809 qpair failed and we were unable to recover it. 00:29:28.809 [2024-10-11 12:06:31.340962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.809 [2024-10-11 12:06:31.340981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.809 qpair failed and we were unable to recover it. 00:29:28.809 [2024-10-11 12:06:31.341284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.809 [2024-10-11 12:06:31.341304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.809 qpair failed and we were unable to recover it. 00:29:28.809 [2024-10-11 12:06:31.341634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.809 [2024-10-11 12:06:31.341654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.809 qpair failed and we were unable to recover it. 00:29:28.809 [2024-10-11 12:06:31.341838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.809 [2024-10-11 12:06:31.341859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.809 qpair failed and we were unable to recover it. 
00:29:28.809 [2024-10-11 12:06:31.342086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.809 [2024-10-11 12:06:31.342106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.809 qpair failed and we were unable to recover it. 00:29:28.809 [2024-10-11 12:06:31.342410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.809 [2024-10-11 12:06:31.342430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.809 qpair failed and we were unable to recover it. 00:29:28.809 [2024-10-11 12:06:31.342771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.809 [2024-10-11 12:06:31.342790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.809 qpair failed and we were unable to recover it. 00:29:28.809 [2024-10-11 12:06:31.343007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.809 [2024-10-11 12:06:31.343026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.809 qpair failed and we were unable to recover it. 00:29:28.809 [2024-10-11 12:06:31.343351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.809 [2024-10-11 12:06:31.343372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.809 qpair failed and we were unable to recover it. 00:29:28.810 [2024-10-11 12:06:31.343690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.810 [2024-10-11 12:06:31.343710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.810 qpair failed and we were unable to recover it. 00:29:28.810 [2024-10-11 12:06:31.344044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.810 [2024-10-11 12:06:31.344072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.810 qpair failed and we were unable to recover it. 00:29:28.810 [2024-10-11 12:06:31.344438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.810 [2024-10-11 12:06:31.344457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.810 qpair failed and we were unable to recover it. 00:29:28.810 [2024-10-11 12:06:31.344781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.810 [2024-10-11 12:06:31.344801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.810 qpair failed and we were unable to recover it. 00:29:28.810 [2024-10-11 12:06:31.345133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.810 [2024-10-11 12:06:31.345152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.810 qpair failed and we were unable to recover it. 
00:29:28.810 [2024-10-11 12:06:31.345537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.810 [2024-10-11 12:06:31.345556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.810 qpair failed and we were unable to recover it. 00:29:28.810 [2024-10-11 12:06:31.345838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.810 [2024-10-11 12:06:31.345856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.810 qpair failed and we were unable to recover it. 00:29:28.810 [2024-10-11 12:06:31.346179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.810 [2024-10-11 12:06:31.346203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.810 qpair failed and we were unable to recover it. 00:29:28.810 [2024-10-11 12:06:31.346551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.810 [2024-10-11 12:06:31.346576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.810 qpair failed and we were unable to recover it. 00:29:28.810 [2024-10-11 12:06:31.346830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.810 [2024-10-11 12:06:31.346854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.810 qpair failed and we were unable to recover it. 00:29:28.810 [2024-10-11 12:06:31.347189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.810 [2024-10-11 12:06:31.347213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.810 qpair failed and we were unable to recover it. 00:29:28.810 [2024-10-11 12:06:31.347565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.810 [2024-10-11 12:06:31.347589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.810 qpair failed and we were unable to recover it. 00:29:28.810 [2024-10-11 12:06:31.347937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.810 [2024-10-11 12:06:31.347960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.810 qpair failed and we were unable to recover it. 00:29:28.810 [2024-10-11 12:06:31.348347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.810 [2024-10-11 12:06:31.348375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.810 qpair failed and we were unable to recover it. 00:29:28.810 [2024-10-11 12:06:31.348714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.810 [2024-10-11 12:06:31.348738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.810 qpair failed and we were unable to recover it. 
00:29:28.810 [2024-10-11 12:06:31.349114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.810 [2024-10-11 12:06:31.349138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.810 qpair failed and we were unable to recover it. 00:29:28.810 [2024-10-11 12:06:31.349495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.810 [2024-10-11 12:06:31.349519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.810 qpair failed and we were unable to recover it. 00:29:28.810 [2024-10-11 12:06:31.349852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.810 [2024-10-11 12:06:31.349876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.810 qpair failed and we were unable to recover it. 00:29:28.810 [2024-10-11 12:06:31.350175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.810 [2024-10-11 12:06:31.350203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.810 qpair failed and we were unable to recover it. 00:29:28.810 [2024-10-11 12:06:31.350533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.810 [2024-10-11 12:06:31.350556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.810 qpair failed and we were unable to recover it. 00:29:28.810 [2024-10-11 12:06:31.350889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.810 [2024-10-11 12:06:31.350913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.810 qpair failed and we were unable to recover it. 00:29:28.810 [2024-10-11 12:06:31.351127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.810 [2024-10-11 12:06:31.351152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.810 qpair failed and we were unable to recover it. 00:29:28.810 [2024-10-11 12:06:31.351524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.810 [2024-10-11 12:06:31.351549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.810 qpair failed and we were unable to recover it. 00:29:28.810 [2024-10-11 12:06:31.351870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.810 [2024-10-11 12:06:31.351893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.810 qpair failed and we were unable to recover it. 00:29:28.810 [2024-10-11 12:06:31.352098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.810 [2024-10-11 12:06:31.352124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.810 qpair failed and we were unable to recover it. 
00:29:28.810 [2024-10-11 12:06:31.352522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.810 [2024-10-11 12:06:31.352546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.810 qpair failed and we were unable to recover it. 00:29:28.810 [2024-10-11 12:06:31.352899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.810 [2024-10-11 12:06:31.352924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.810 qpair failed and we were unable to recover it. 00:29:28.810 [2024-10-11 12:06:31.353274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.810 [2024-10-11 12:06:31.353299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.810 qpair failed and we were unable to recover it. 00:29:28.810 [2024-10-11 12:06:31.353655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.810 [2024-10-11 12:06:31.353679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.810 qpair failed and we were unable to recover it. 00:29:28.810 [2024-10-11 12:06:31.353870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.810 [2024-10-11 12:06:31.353895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.810 qpair failed and we were unable to recover it. 00:29:28.810 [2024-10-11 12:06:31.354199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.810 [2024-10-11 12:06:31.354222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.810 qpair failed and we were unable to recover it. 00:29:28.810 [2024-10-11 12:06:31.354562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.810 [2024-10-11 12:06:31.354586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.810 qpair failed and we were unable to recover it. 00:29:28.810 [2024-10-11 12:06:31.354920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.810 [2024-10-11 12:06:31.354944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.810 qpair failed and we were unable to recover it. 00:29:28.810 [2024-10-11 12:06:31.355289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.810 [2024-10-11 12:06:31.355314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.810 qpair failed and we were unable to recover it. 00:29:28.810 [2024-10-11 12:06:31.355671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.810 [2024-10-11 12:06:31.355695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.810 qpair failed and we were unable to recover it. 
00:29:28.810 [2024-10-11 12:06:31.356041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.810 [2024-10-11 12:06:31.356077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.810 qpair failed and we were unable to recover it. 00:29:28.810 [2024-10-11 12:06:31.356419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.810 [2024-10-11 12:06:31.356442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.810 qpair failed and we were unable to recover it. 00:29:28.810 [2024-10-11 12:06:31.356778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.810 [2024-10-11 12:06:31.356808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.810 qpair failed and we were unable to recover it. 00:29:28.810 [2024-10-11 12:06:31.357156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.810 [2024-10-11 12:06:31.357188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.810 qpair failed and we were unable to recover it. 00:29:28.810 [2024-10-11 12:06:31.357562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.810 [2024-10-11 12:06:31.357591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.810 qpair failed and we were unable to recover it. 00:29:28.810 [2024-10-11 12:06:31.357986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.811 [2024-10-11 12:06:31.358017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.811 qpair failed and we were unable to recover it. 00:29:28.811 [2024-10-11 12:06:31.358368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.811 [2024-10-11 12:06:31.358401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.811 qpair failed and we were unable to recover it. 00:29:28.811 [2024-10-11 12:06:31.358782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.811 [2024-10-11 12:06:31.358812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.811 qpair failed and we were unable to recover it. 00:29:28.811 [2024-10-11 12:06:31.359185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.811 [2024-10-11 12:06:31.359217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.811 qpair failed and we were unable to recover it. 00:29:28.811 [2024-10-11 12:06:31.359530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.811 [2024-10-11 12:06:31.359560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.811 qpair failed and we were unable to recover it. 
00:29:28.816 [2024-10-11 12:06:31.432594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.816 [2024-10-11 12:06:31.432627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.816 qpair failed and we were unable to recover it. 00:29:28.816 [2024-10-11 12:06:31.432988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.816 [2024-10-11 12:06:31.433021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.816 qpair failed and we were unable to recover it. 00:29:28.816 [2024-10-11 12:06:31.433391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.816 [2024-10-11 12:06:31.433424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.816 qpair failed and we were unable to recover it. 00:29:28.816 [2024-10-11 12:06:31.433780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.816 [2024-10-11 12:06:31.433814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.816 qpair failed and we were unable to recover it. 00:29:28.816 [2024-10-11 12:06:31.434169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.816 [2024-10-11 12:06:31.434203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.816 qpair failed and we were unable to recover it. 00:29:28.816 [2024-10-11 12:06:31.434558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.816 [2024-10-11 12:06:31.434593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.816 qpair failed and we were unable to recover it. 00:29:28.816 [2024-10-11 12:06:31.434949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.816 [2024-10-11 12:06:31.434982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.816 qpair failed and we were unable to recover it. 00:29:28.816 [2024-10-11 12:06:31.435342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.816 [2024-10-11 12:06:31.435377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.816 qpair failed and we were unable to recover it. 00:29:28.816 [2024-10-11 12:06:31.435725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.816 [2024-10-11 12:06:31.435759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.816 qpair failed and we were unable to recover it. 00:29:28.816 [2024-10-11 12:06:31.436100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.816 [2024-10-11 12:06:31.436134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.816 qpair failed and we were unable to recover it. 
00:29:28.816 [2024-10-11 12:06:31.436492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.816 [2024-10-11 12:06:31.436525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.816 qpair failed and we were unable to recover it. 00:29:28.816 [2024-10-11 12:06:31.436891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.816 [2024-10-11 12:06:31.436926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.816 qpair failed and we were unable to recover it. 00:29:28.816 [2024-10-11 12:06:31.437285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.816 [2024-10-11 12:06:31.437319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.816 qpair failed and we were unable to recover it. 00:29:28.816 [2024-10-11 12:06:31.437675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.816 [2024-10-11 12:06:31.437707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.816 qpair failed and we were unable to recover it. 00:29:28.816 [2024-10-11 12:06:31.438079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.816 [2024-10-11 12:06:31.438113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.816 qpair failed and we were unable to recover it. 00:29:28.816 [2024-10-11 12:06:31.438468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.816 [2024-10-11 12:06:31.438502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.816 qpair failed and we were unable to recover it. 00:29:28.816 [2024-10-11 12:06:31.438863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.816 [2024-10-11 12:06:31.438897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.816 qpair failed and we were unable to recover it. 00:29:28.816 [2024-10-11 12:06:31.439250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.816 [2024-10-11 12:06:31.439283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.816 qpair failed and we were unable to recover it. 00:29:28.816 [2024-10-11 12:06:31.439523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.816 [2024-10-11 12:06:31.439562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.816 qpair failed and we were unable to recover it. 00:29:28.816 [2024-10-11 12:06:31.439920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.816 [2024-10-11 12:06:31.439955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.816 qpair failed and we were unable to recover it. 
00:29:28.816 [2024-10-11 12:06:31.440245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.816 [2024-10-11 12:06:31.440279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.816 qpair failed and we were unable to recover it. 00:29:28.816 [2024-10-11 12:06:31.440510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.816 [2024-10-11 12:06:31.440546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.816 qpair failed and we were unable to recover it. 00:29:28.816 [2024-10-11 12:06:31.440902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.816 [2024-10-11 12:06:31.440935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.816 qpair failed and we were unable to recover it. 00:29:28.816 [2024-10-11 12:06:31.441265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.816 [2024-10-11 12:06:31.441303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.816 qpair failed and we were unable to recover it. 00:29:28.816 [2024-10-11 12:06:31.441750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.816 [2024-10-11 12:06:31.441782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.816 qpair failed and we were unable to recover it. 00:29:28.816 [2024-10-11 12:06:31.442177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.816 [2024-10-11 12:06:31.442212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.816 qpair failed and we were unable to recover it. 00:29:28.816 [2024-10-11 12:06:31.442470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.816 [2024-10-11 12:06:31.442506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.816 qpair failed and we were unable to recover it. 00:29:28.816 [2024-10-11 12:06:31.442860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.816 [2024-10-11 12:06:31.442893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.816 qpair failed and we were unable to recover it. 00:29:28.816 [2024-10-11 12:06:31.443120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.816 [2024-10-11 12:06:31.443157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.816 qpair failed and we were unable to recover it. 00:29:28.816 [2024-10-11 12:06:31.443527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.816 [2024-10-11 12:06:31.443562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.816 qpair failed and we were unable to recover it. 
00:29:28.816 [2024-10-11 12:06:31.443922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.816 [2024-10-11 12:06:31.443955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.816 qpair failed and we were unable to recover it. 00:29:28.816 [2024-10-11 12:06:31.444327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.816 [2024-10-11 12:06:31.444362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.816 qpair failed and we were unable to recover it. 00:29:28.816 [2024-10-11 12:06:31.444766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.816 [2024-10-11 12:06:31.444799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.816 qpair failed and we were unable to recover it. 00:29:28.816 [2024-10-11 12:06:31.445049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.816 [2024-10-11 12:06:31.445095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.816 qpair failed and we were unable to recover it. 00:29:28.816 [2024-10-11 12:06:31.445479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.816 [2024-10-11 12:06:31.445512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.816 qpair failed and we were unable to recover it. 00:29:28.816 [2024-10-11 12:06:31.445863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.816 [2024-10-11 12:06:31.445897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.817 qpair failed and we were unable to recover it. 00:29:28.817 [2024-10-11 12:06:31.446252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.817 [2024-10-11 12:06:31.446286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.817 qpair failed and we were unable to recover it. 00:29:28.817 [2024-10-11 12:06:31.446641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.817 [2024-10-11 12:06:31.446674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.817 qpair failed and we were unable to recover it. 00:29:28.817 [2024-10-11 12:06:31.447031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.817 [2024-10-11 12:06:31.447075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.817 qpair failed and we were unable to recover it. 00:29:28.817 [2024-10-11 12:06:31.447497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.817 [2024-10-11 12:06:31.447529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.817 qpair failed and we were unable to recover it. 
00:29:28.817 [2024-10-11 12:06:31.447882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.817 [2024-10-11 12:06:31.447916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.817 qpair failed and we were unable to recover it. 00:29:28.817 [2024-10-11 12:06:31.448285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.817 [2024-10-11 12:06:31.448319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.817 qpair failed and we were unable to recover it. 00:29:28.817 [2024-10-11 12:06:31.448676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.817 [2024-10-11 12:06:31.448708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.817 qpair failed and we were unable to recover it. 00:29:28.817 [2024-10-11 12:06:31.449077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.817 [2024-10-11 12:06:31.449110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.817 qpair failed and we were unable to recover it. 00:29:28.817 [2024-10-11 12:06:31.449477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.817 [2024-10-11 12:06:31.449509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.817 qpair failed and we were unable to recover it. 00:29:28.817 [2024-10-11 12:06:31.449742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.817 [2024-10-11 12:06:31.449780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.817 qpair failed and we were unable to recover it. 00:29:28.817 [2024-10-11 12:06:31.450145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.817 [2024-10-11 12:06:31.450178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.817 qpair failed and we were unable to recover it. 00:29:28.817 [2024-10-11 12:06:31.450541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.817 [2024-10-11 12:06:31.450574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.817 qpair failed and we were unable to recover it. 00:29:28.817 [2024-10-11 12:06:31.450841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.817 [2024-10-11 12:06:31.450874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.817 qpair failed and we were unable to recover it. 00:29:28.817 [2024-10-11 12:06:31.451120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.817 [2024-10-11 12:06:31.451157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.817 qpair failed and we were unable to recover it. 
00:29:28.817 [2024-10-11 12:06:31.451535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.817 [2024-10-11 12:06:31.451567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.817 qpair failed and we were unable to recover it. 00:29:28.817 [2024-10-11 12:06:31.451929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.817 [2024-10-11 12:06:31.451962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.817 qpair failed and we were unable to recover it. 00:29:28.817 [2024-10-11 12:06:31.452211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.817 [2024-10-11 12:06:31.452246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.817 qpair failed and we were unable to recover it. 00:29:28.817 [2024-10-11 12:06:31.452642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.817 [2024-10-11 12:06:31.452676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.817 qpair failed and we were unable to recover it. 00:29:28.817 [2024-10-11 12:06:31.453023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.817 [2024-10-11 12:06:31.453055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.817 qpair failed and we were unable to recover it. 00:29:28.817 [2024-10-11 12:06:31.453306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.817 [2024-10-11 12:06:31.453342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.817 qpair failed and we were unable to recover it. 00:29:28.817 [2024-10-11 12:06:31.453574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.817 [2024-10-11 12:06:31.453607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.817 qpair failed and we were unable to recover it. 00:29:28.817 [2024-10-11 12:06:31.453968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.817 [2024-10-11 12:06:31.454000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.817 qpair failed and we were unable to recover it. 00:29:28.817 [2024-10-11 12:06:31.454357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.817 [2024-10-11 12:06:31.454397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.817 qpair failed and we were unable to recover it. 00:29:28.817 [2024-10-11 12:06:31.454636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.817 [2024-10-11 12:06:31.454669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.817 qpair failed and we were unable to recover it. 
00:29:28.817 [2024-10-11 12:06:31.454910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.817 [2024-10-11 12:06:31.454944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.817 qpair failed and we were unable to recover it. 00:29:28.817 [2024-10-11 12:06:31.455311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.817 [2024-10-11 12:06:31.455345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.817 qpair failed and we were unable to recover it. 00:29:28.817 [2024-10-11 12:06:31.455709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.817 [2024-10-11 12:06:31.455743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.817 qpair failed and we were unable to recover it. 00:29:28.817 [2024-10-11 12:06:31.456121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.817 [2024-10-11 12:06:31.456155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.817 qpair failed and we were unable to recover it. 00:29:28.817 [2024-10-11 12:06:31.456505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.817 [2024-10-11 12:06:31.456537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.817 qpair failed and we were unable to recover it. 00:29:28.817 [2024-10-11 12:06:31.456895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.817 [2024-10-11 12:06:31.456930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.817 qpair failed and we were unable to recover it. 00:29:28.817 [2024-10-11 12:06:31.457306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.817 [2024-10-11 12:06:31.457340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.817 qpair failed and we were unable to recover it. 00:29:28.817 [2024-10-11 12:06:31.457690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.817 [2024-10-11 12:06:31.457723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.817 qpair failed and we were unable to recover it. 00:29:28.817 [2024-10-11 12:06:31.458086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.817 [2024-10-11 12:06:31.458121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.817 qpair failed and we were unable to recover it. 00:29:28.817 [2024-10-11 12:06:31.458476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.817 [2024-10-11 12:06:31.458509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.817 qpair failed and we were unable to recover it. 
00:29:28.817 [2024-10-11 12:06:31.458868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.817 [2024-10-11 12:06:31.458902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.817 qpair failed and we were unable to recover it. 00:29:28.817 [2024-10-11 12:06:31.459243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.817 [2024-10-11 12:06:31.459277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.817 qpair failed and we were unable to recover it. 00:29:28.817 [2024-10-11 12:06:31.459658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.817 [2024-10-11 12:06:31.459692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.817 qpair failed and we were unable to recover it. 00:29:28.817 [2024-10-11 12:06:31.460050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.817 [2024-10-11 12:06:31.460096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.817 qpair failed and we were unable to recover it. 00:29:28.817 [2024-10-11 12:06:31.460338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.817 [2024-10-11 12:06:31.460371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.817 qpair failed and we were unable to recover it. 00:29:28.817 [2024-10-11 12:06:31.460728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.817 [2024-10-11 12:06:31.460760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.817 qpair failed and we were unable to recover it. 00:29:28.818 [2024-10-11 12:06:31.461124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.818 [2024-10-11 12:06:31.461157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.818 qpair failed and we were unable to recover it. 00:29:28.818 [2024-10-11 12:06:31.461513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.818 [2024-10-11 12:06:31.461546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.818 qpair failed and we were unable to recover it. 00:29:28.818 [2024-10-11 12:06:31.461903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.818 [2024-10-11 12:06:31.461937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.818 qpair failed and we were unable to recover it. 00:29:28.818 [2024-10-11 12:06:31.462307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.818 [2024-10-11 12:06:31.462341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.818 qpair failed and we were unable to recover it. 
00:29:28.818 [2024-10-11 12:06:31.462694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.818 [2024-10-11 12:06:31.462727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.818 qpair failed and we were unable to recover it. 00:29:28.818 [2024-10-11 12:06:31.463088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.818 [2024-10-11 12:06:31.463122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.818 qpair failed and we were unable to recover it. 00:29:28.818 [2024-10-11 12:06:31.463518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.818 [2024-10-11 12:06:31.463551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.818 qpair failed and we were unable to recover it. 00:29:28.818 [2024-10-11 12:06:31.463898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.818 [2024-10-11 12:06:31.463931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.818 qpair failed and we were unable to recover it. 00:29:28.818 [2024-10-11 12:06:31.464292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.818 [2024-10-11 12:06:31.464327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.818 qpair failed and we were unable to recover it. 00:29:28.818 [2024-10-11 12:06:31.464661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.818 [2024-10-11 12:06:31.464695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.818 qpair failed and we were unable to recover it. 00:29:28.818 [2024-10-11 12:06:31.465043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.818 [2024-10-11 12:06:31.465087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.818 qpair failed and we were unable to recover it. 00:29:28.818 [2024-10-11 12:06:31.465440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.818 [2024-10-11 12:06:31.465473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.818 qpair failed and we were unable to recover it. 00:29:28.818 [2024-10-11 12:06:31.465838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.818 [2024-10-11 12:06:31.465872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.818 qpair failed and we were unable to recover it. 00:29:28.818 [2024-10-11 12:06:31.466221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.818 [2024-10-11 12:06:31.466254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.818 qpair failed and we were unable to recover it. 
00:29:28.818 [2024-10-11 12:06:31.466613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.818 [2024-10-11 12:06:31.466645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.818 qpair failed and we were unable to recover it. 00:29:28.818 [2024-10-11 12:06:31.467022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.818 [2024-10-11 12:06:31.467055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.818 qpair failed and we were unable to recover it. 00:29:28.818 [2024-10-11 12:06:31.467448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.818 [2024-10-11 12:06:31.467482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.818 qpair failed and we were unable to recover it. 00:29:28.818 [2024-10-11 12:06:31.467826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.818 [2024-10-11 12:06:31.467859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.818 qpair failed and we were unable to recover it. 00:29:28.818 [2024-10-11 12:06:31.468099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.818 [2024-10-11 12:06:31.468135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.818 qpair failed and we were unable to recover it. 00:29:28.818 [2024-10-11 12:06:31.468482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.818 [2024-10-11 12:06:31.468516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.818 qpair failed and we were unable to recover it. 00:29:28.818 [2024-10-11 12:06:31.468874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.818 [2024-10-11 12:06:31.468907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.818 qpair failed and we were unable to recover it. 00:29:28.818 [2024-10-11 12:06:31.469268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.818 [2024-10-11 12:06:31.469302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.818 qpair failed and we were unable to recover it. 00:29:28.818 [2024-10-11 12:06:31.469563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.818 [2024-10-11 12:06:31.469601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.818 qpair failed and we were unable to recover it. 00:29:28.818 [2024-10-11 12:06:31.469939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.818 [2024-10-11 12:06:31.469971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.818 qpair failed and we were unable to recover it. 
00:29:28.818 [2024-10-11 12:06:31.470331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.818 [2024-10-11 12:06:31.470364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.818 qpair failed and we were unable to recover it. 00:29:28.818 [2024-10-11 12:06:31.470720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.818 [2024-10-11 12:06:31.470753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.818 qpair failed and we were unable to recover it. 00:29:28.818 [2024-10-11 12:06:31.471003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.818 [2024-10-11 12:06:31.471036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.818 qpair failed and we were unable to recover it. 00:29:28.818 [2024-10-11 12:06:31.471446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.818 [2024-10-11 12:06:31.471479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.818 qpair failed and we were unable to recover it. 00:29:28.818 [2024-10-11 12:06:31.471840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.818 [2024-10-11 12:06:31.471874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.818 qpair failed and we were unable to recover it. 00:29:28.818 [2024-10-11 12:06:31.472235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.818 [2024-10-11 12:06:31.472269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.818 qpair failed and we were unable to recover it. 00:29:28.818 [2024-10-11 12:06:31.472626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.818 [2024-10-11 12:06:31.472660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.818 qpair failed and we were unable to recover it. 00:29:28.818 [2024-10-11 12:06:31.473008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.818 [2024-10-11 12:06:31.473041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.818 qpair failed and we were unable to recover it. 00:29:28.818 [2024-10-11 12:06:31.474888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.818 [2024-10-11 12:06:31.474955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.818 qpair failed and we were unable to recover it. 00:29:28.818 [2024-10-11 12:06:31.475360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.818 [2024-10-11 12:06:31.475401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.818 qpair failed and we were unable to recover it. 
00:29:28.818 [2024-10-11 12:06:31.475791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.818 [2024-10-11 12:06:31.475825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.818 qpair failed and we were unable to recover it. 00:29:28.818 [2024-10-11 12:06:31.476179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.818 [2024-10-11 12:06:31.476213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.818 qpair failed and we were unable to recover it. 00:29:28.818 [2024-10-11 12:06:31.476565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.818 [2024-10-11 12:06:31.476599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.818 qpair failed and we were unable to recover it. 00:29:28.818 [2024-10-11 12:06:31.476939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.818 [2024-10-11 12:06:31.476973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.818 qpair failed and we were unable to recover it. 00:29:28.818 [2024-10-11 12:06:31.477327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.818 [2024-10-11 12:06:31.477361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.818 qpair failed and we were unable to recover it. 00:29:28.818 [2024-10-11 12:06:31.477719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.818 [2024-10-11 12:06:31.477753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.818 qpair failed and we were unable to recover it. 00:29:28.818 [2024-10-11 12:06:31.478132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.819 [2024-10-11 12:06:31.478166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.819 qpair failed and we were unable to recover it. 00:29:28.819 [2024-10-11 12:06:31.478420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.819 [2024-10-11 12:06:31.478453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.819 qpair failed and we were unable to recover it. 00:29:28.819 [2024-10-11 12:06:31.478807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.819 [2024-10-11 12:06:31.478839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.819 qpair failed and we were unable to recover it. 00:29:28.819 [2024-10-11 12:06:31.479198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.819 [2024-10-11 12:06:31.479232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.819 qpair failed and we were unable to recover it. 
00:29:28.819 [2024-10-11 12:06:31.479468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.819 [2024-10-11 12:06:31.479501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.819 qpair failed and we were unable to recover it. 00:29:28.819 [2024-10-11 12:06:31.479874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.819 [2024-10-11 12:06:31.479908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.819 qpair failed and we were unable to recover it. 00:29:28.819 [2024-10-11 12:06:31.480244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.819 [2024-10-11 12:06:31.480278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.819 qpair failed and we were unable to recover it. 00:29:28.819 [2024-10-11 12:06:31.480637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.819 [2024-10-11 12:06:31.480670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.819 qpair failed and we were unable to recover it. 00:29:28.819 [2024-10-11 12:06:31.481096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.819 [2024-10-11 12:06:31.481130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.819 qpair failed and we were unable to recover it. 00:29:28.819 [2024-10-11 12:06:31.481499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.819 [2024-10-11 12:06:31.481532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.819 qpair failed and we were unable to recover it. 00:29:28.819 [2024-10-11 12:06:31.481887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.819 [2024-10-11 12:06:31.481920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.819 qpair failed and we were unable to recover it. 00:29:28.819 [2024-10-11 12:06:31.482343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.819 [2024-10-11 12:06:31.482378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.819 qpair failed and we were unable to recover it. 00:29:28.819 [2024-10-11 12:06:31.482729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.819 [2024-10-11 12:06:31.482762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.819 qpair failed and we were unable to recover it. 00:29:28.819 [2024-10-11 12:06:31.483201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.819 [2024-10-11 12:06:31.483235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.819 qpair failed and we were unable to recover it. 
00:29:28.819 [2024-10-11 12:06:31.483592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.819 [2024-10-11 12:06:31.483626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.819 qpair failed and we were unable to recover it. 00:29:28.819 [2024-10-11 12:06:31.484056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.819 [2024-10-11 12:06:31.484106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.819 qpair failed and we were unable to recover it. 00:29:28.819 [2024-10-11 12:06:31.484464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.819 [2024-10-11 12:06:31.484497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.819 qpair failed and we were unable to recover it. 00:29:28.819 [2024-10-11 12:06:31.484856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.819 [2024-10-11 12:06:31.484889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.819 qpair failed and we were unable to recover it. 00:29:28.819 [2024-10-11 12:06:31.485137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.819 [2024-10-11 12:06:31.485172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.819 qpair failed and we were unable to recover it. 00:29:28.819 [2024-10-11 12:06:31.485551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.819 [2024-10-11 12:06:31.485584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.819 qpair failed and we were unable to recover it. 00:29:28.819 [2024-10-11 12:06:31.485942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.819 [2024-10-11 12:06:31.485975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.819 qpair failed and we were unable to recover it. 00:29:28.819 [2024-10-11 12:06:31.486407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.819 [2024-10-11 12:06:31.486440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.819 qpair failed and we were unable to recover it. 00:29:28.819 [2024-10-11 12:06:31.486792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.819 [2024-10-11 12:06:31.486829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.819 qpair failed and we were unable to recover it. 00:29:28.819 [2024-10-11 12:06:31.487158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.819 [2024-10-11 12:06:31.487192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.819 qpair failed and we were unable to recover it. 
00:29:28.819 [2024-10-11 12:06:31.487547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.819 [2024-10-11 12:06:31.487580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.819 qpair failed and we were unable to recover it. 00:29:28.819 [2024-10-11 12:06:31.487829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.819 [2024-10-11 12:06:31.487864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.819 qpair failed and we were unable to recover it. 00:29:28.819 [2024-10-11 12:06:31.488252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.819 [2024-10-11 12:06:31.488285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.819 qpair failed and we were unable to recover it. 00:29:28.819 [2024-10-11 12:06:31.488517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.819 [2024-10-11 12:06:31.488553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.819 qpair failed and we were unable to recover it. 00:29:28.819 [2024-10-11 12:06:31.488900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.819 [2024-10-11 12:06:31.488933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.819 qpair failed and we were unable to recover it. 00:29:28.819 [2024-10-11 12:06:31.489299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.819 [2024-10-11 12:06:31.489332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.819 qpair failed and we were unable to recover it. 00:29:28.819 [2024-10-11 12:06:31.489703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.819 [2024-10-11 12:06:31.489735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.819 qpair failed and we were unable to recover it. 00:29:28.819 [2024-10-11 12:06:31.490083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.819 [2024-10-11 12:06:31.490116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.819 qpair failed and we were unable to recover it. 00:29:28.819 [2024-10-11 12:06:31.490378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.819 [2024-10-11 12:06:31.490417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.819 qpair failed and we were unable to recover it. 00:29:28.819 [2024-10-11 12:06:31.490769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.819 [2024-10-11 12:06:31.490803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.819 qpair failed and we were unable to recover it. 
00:29:28.819 [2024-10-11 12:06:31.491151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.819 [2024-10-11 12:06:31.491185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.819 qpair failed and we were unable to recover it. 00:29:28.819 [2024-10-11 12:06:31.491428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.819 [2024-10-11 12:06:31.491465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.819 qpair failed and we were unable to recover it. 00:29:28.819 [2024-10-11 12:06:31.491845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.820 [2024-10-11 12:06:31.491878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.820 qpair failed and we were unable to recover it. 00:29:28.820 [2024-10-11 12:06:31.492240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.820 [2024-10-11 12:06:31.492273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.820 qpair failed and we were unable to recover it. 00:29:28.820 [2024-10-11 12:06:31.492637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.820 [2024-10-11 12:06:31.492670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.820 qpair failed and we were unable to recover it. 00:29:28.820 [2024-10-11 12:06:31.493022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.820 [2024-10-11 12:06:31.493055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.820 qpair failed and we were unable to recover it. 00:29:28.820 [2024-10-11 12:06:31.493452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.820 [2024-10-11 12:06:31.493485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.820 qpair failed and we were unable to recover it. 00:29:28.820 [2024-10-11 12:06:31.493720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.820 [2024-10-11 12:06:31.493756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.820 qpair failed and we were unable to recover it. 00:29:28.820 [2024-10-11 12:06:31.494084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.820 [2024-10-11 12:06:31.494117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.820 qpair failed and we were unable to recover it. 00:29:28.820 [2024-10-11 12:06:31.494309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.820 [2024-10-11 12:06:31.494342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.820 qpair failed and we were unable to recover it. 
00:29:28.820 [2024-10-11 12:06:31.494591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.820 [2024-10-11 12:06:31.494625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.820 qpair failed and we were unable to recover it. 00:29:28.820 [2024-10-11 12:06:31.495004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.820 [2024-10-11 12:06:31.495038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.820 qpair failed and we were unable to recover it. 00:29:28.820 [2024-10-11 12:06:31.495432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.820 [2024-10-11 12:06:31.495466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.820 qpair failed and we were unable to recover it. 00:29:28.820 [2024-10-11 12:06:31.495819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.820 [2024-10-11 12:06:31.495852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.820 qpair failed and we were unable to recover it. 00:29:28.820 [2024-10-11 12:06:31.496218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.820 [2024-10-11 12:06:31.496251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.820 qpair failed and we were unable to recover it. 00:29:28.820 [2024-10-11 12:06:31.496473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.820 [2024-10-11 12:06:31.496507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.820 qpair failed and we were unable to recover it. 00:29:28.820 [2024-10-11 12:06:31.496873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.820 [2024-10-11 12:06:31.496906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.820 qpair failed and we were unable to recover it. 00:29:28.820 [2024-10-11 12:06:31.497287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.820 [2024-10-11 12:06:31.497321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.820 qpair failed and we were unable to recover it. 00:29:28.820 [2024-10-11 12:06:31.497686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.820 [2024-10-11 12:06:31.497721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.820 qpair failed and we were unable to recover it. 00:29:28.820 [2024-10-11 12:06:31.498090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.820 [2024-10-11 12:06:31.498124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.820 qpair failed and we were unable to recover it. 
00:29:28.820 [2024-10-11 12:06:31.498393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.820 [2024-10-11 12:06:31.498426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.820 qpair failed and we were unable to recover it. 00:29:28.820 [2024-10-11 12:06:31.498774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.820 [2024-10-11 12:06:31.498807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.820 qpair failed and we were unable to recover it. 00:29:28.820 [2024-10-11 12:06:31.499166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.820 [2024-10-11 12:06:31.499200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.820 qpair failed and we were unable to recover it. 00:29:28.820 [2024-10-11 12:06:31.499558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.820 [2024-10-11 12:06:31.499591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.820 qpair failed and we were unable to recover it. 00:29:28.820 [2024-10-11 12:06:31.500023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.820 [2024-10-11 12:06:31.500056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.820 qpair failed and we were unable to recover it. 00:29:28.820 [2024-10-11 12:06:31.500442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.820 [2024-10-11 12:06:31.500476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.820 qpair failed and we were unable to recover it. 00:29:28.820 [2024-10-11 12:06:31.500827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.820 [2024-10-11 12:06:31.500859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.820 qpair failed and we were unable to recover it. 00:29:28.820 [2024-10-11 12:06:31.501208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.820 [2024-10-11 12:06:31.501245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:28.820 qpair failed and we were unable to recover it. 00:29:29.094 [2024-10-11 12:06:31.501595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.094 [2024-10-11 12:06:31.501629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.094 qpair failed and we were unable to recover it. 00:29:29.094 [2024-10-11 12:06:31.501989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.094 [2024-10-11 12:06:31.502025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.094 qpair failed and we were unable to recover it. 
00:29:29.094 [2024-10-11 12:06:31.502389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.094 [2024-10-11 12:06:31.502434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.094 qpair failed and we were unable to recover it. 00:29:29.094 [2024-10-11 12:06:31.502835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.094 [2024-10-11 12:06:31.502883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.094 qpair failed and we were unable to recover it. 00:29:29.094 [2024-10-11 12:06:31.503291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.094 [2024-10-11 12:06:31.503348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.094 qpair failed and we were unable to recover it. 00:29:29.094 [2024-10-11 12:06:31.503660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.094 [2024-10-11 12:06:31.503714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.094 qpair failed and we were unable to recover it. 00:29:29.094 [2024-10-11 12:06:31.504131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.094 [2024-10-11 12:06:31.504189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.094 qpair failed and we were unable to recover it. 00:29:29.094 [2024-10-11 12:06:31.504565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.094 [2024-10-11 12:06:31.504613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.094 qpair failed and we were unable to recover it. 00:29:29.094 [2024-10-11 12:06:31.505009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.094 [2024-10-11 12:06:31.505046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.094 qpair failed and we were unable to recover it. 00:29:29.094 [2024-10-11 12:06:31.505439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.094 [2024-10-11 12:06:31.505473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.094 qpair failed and we were unable to recover it. 00:29:29.094 [2024-10-11 12:06:31.505848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.094 [2024-10-11 12:06:31.505881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.094 qpair failed and we were unable to recover it. 00:29:29.094 [2024-10-11 12:06:31.506243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.094 [2024-10-11 12:06:31.506277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.094 qpair failed and we were unable to recover it. 
00:29:29.094 [2024-10-11 12:06:31.506641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.094 [2024-10-11 12:06:31.506673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.094 qpair failed and we were unable to recover it. 00:29:29.094 [2024-10-11 12:06:31.507019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.094 [2024-10-11 12:06:31.507052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.094 qpair failed and we were unable to recover it. 00:29:29.094 [2024-10-11 12:06:31.507308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.094 [2024-10-11 12:06:31.507344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.094 qpair failed and we were unable to recover it. 00:29:29.094 [2024-10-11 12:06:31.507729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.094 [2024-10-11 12:06:31.507762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.094 qpair failed and we were unable to recover it. 00:29:29.094 [2024-10-11 12:06:31.508136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.094 [2024-10-11 12:06:31.508170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.094 qpair failed and we were unable to recover it. 00:29:29.094 [2024-10-11 12:06:31.508529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.094 [2024-10-11 12:06:31.508562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.094 qpair failed and we were unable to recover it. 00:29:29.094 [2024-10-11 12:06:31.508807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.094 [2024-10-11 12:06:31.508841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.094 qpair failed and we were unable to recover it. 00:29:29.094 [2024-10-11 12:06:31.509201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.094 [2024-10-11 12:06:31.509236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.094 qpair failed and we were unable to recover it. 00:29:29.094 [2024-10-11 12:06:31.509598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.094 [2024-10-11 12:06:31.509631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.094 qpair failed and we were unable to recover it. 00:29:29.094 [2024-10-11 12:06:31.510001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.094 [2024-10-11 12:06:31.510034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.094 qpair failed and we were unable to recover it. 
00:29:29.094 [2024-10-11 12:06:31.510434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.094 [2024-10-11 12:06:31.510468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.094 qpair failed and we were unable to recover it. 00:29:29.094 [2024-10-11 12:06:31.510828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.094 [2024-10-11 12:06:31.510861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.094 qpair failed and we were unable to recover it. 00:29:29.094 [2024-10-11 12:06:31.511253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.094 [2024-10-11 12:06:31.511287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.094 qpair failed and we were unable to recover it. 00:29:29.094 [2024-10-11 12:06:31.511669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.094 [2024-10-11 12:06:31.511701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.094 qpair failed and we were unable to recover it. 00:29:29.094 [2024-10-11 12:06:31.511955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.094 [2024-10-11 12:06:31.511990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.094 qpair failed and we were unable to recover it. 00:29:29.094 [2024-10-11 12:06:31.512365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.094 [2024-10-11 12:06:31.512406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.094 qpair failed and we were unable to recover it. 00:29:29.094 [2024-10-11 12:06:31.512756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.094 [2024-10-11 12:06:31.512791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.094 qpair failed and we were unable to recover it. 00:29:29.094 [2024-10-11 12:06:31.512950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.094 [2024-10-11 12:06:31.512982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.094 qpair failed and we were unable to recover it. 00:29:29.094 [2024-10-11 12:06:31.513308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.094 [2024-10-11 12:06:31.513341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.094 qpair failed and we were unable to recover it. 00:29:29.094 [2024-10-11 12:06:31.513724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.094 [2024-10-11 12:06:31.513756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.094 qpair failed and we were unable to recover it. 
00:29:29.094 [2024-10-11 12:06:31.514011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.094 [2024-10-11 12:06:31.514043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.094 qpair failed and we were unable to recover it. 00:29:29.094 [2024-10-11 12:06:31.514324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.094 [2024-10-11 12:06:31.514356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.094 qpair failed and we were unable to recover it. 00:29:29.094 [2024-10-11 12:06:31.514757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.094 [2024-10-11 12:06:31.514789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.094 qpair failed and we were unable to recover it. 00:29:29.094 [2024-10-11 12:06:31.515145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.094 [2024-10-11 12:06:31.515179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.094 qpair failed and we were unable to recover it. 00:29:29.094 [2024-10-11 12:06:31.515540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.094 [2024-10-11 12:06:31.515573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.094 qpair failed and we were unable to recover it. 00:29:29.094 [2024-10-11 12:06:31.515932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.094 [2024-10-11 12:06:31.515964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.094 qpair failed and we were unable to recover it. 00:29:29.094 [2024-10-11 12:06:31.516220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.094 [2024-10-11 12:06:31.516256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.094 qpair failed and we were unable to recover it. 00:29:29.094 [2024-10-11 12:06:31.516636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.094 [2024-10-11 12:06:31.516669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.094 qpair failed and we were unable to recover it. 00:29:29.094 [2024-10-11 12:06:31.517016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.094 [2024-10-11 12:06:31.517050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 00:29:29.095 [2024-10-11 12:06:31.517466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.517500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 
00:29:29.095 [2024-10-11 12:06:31.517856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.517889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 00:29:29.095 [2024-10-11 12:06:31.518247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.518282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 00:29:29.095 [2024-10-11 12:06:31.518648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.518680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 00:29:29.095 [2024-10-11 12:06:31.519054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.519104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 00:29:29.095 [2024-10-11 12:06:31.519488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.519520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 00:29:29.095 [2024-10-11 12:06:31.519886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.519918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 00:29:29.095 [2024-10-11 12:06:31.520299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.520334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 00:29:29.095 [2024-10-11 12:06:31.520684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.520716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 00:29:29.095 [2024-10-11 12:06:31.521114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.521160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 00:29:29.095 [2024-10-11 12:06:31.521537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.521569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 
00:29:29.095 [2024-10-11 12:06:31.521867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.521899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 00:29:29.095 [2024-10-11 12:06:31.522207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.522241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 00:29:29.095 [2024-10-11 12:06:31.522620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.522652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 00:29:29.095 [2024-10-11 12:06:31.523006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.523038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 00:29:29.095 [2024-10-11 12:06:31.523306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.523340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 00:29:29.095 [2024-10-11 12:06:31.523729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.523761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 00:29:29.095 [2024-10-11 12:06:31.524010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.524043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 00:29:29.095 [2024-10-11 12:06:31.524521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.524555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 00:29:29.095 [2024-10-11 12:06:31.524908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.524940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 00:29:29.095 [2024-10-11 12:06:31.525313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.525347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 
00:29:29.095 [2024-10-11 12:06:31.525696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.525729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 00:29:29.095 [2024-10-11 12:06:31.526105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.526139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 00:29:29.095 [2024-10-11 12:06:31.526540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.526572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 00:29:29.095 [2024-10-11 12:06:31.526703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.526738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 00:29:29.095 [2024-10-11 12:06:31.527183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.527216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 00:29:29.095 [2024-10-11 12:06:31.527576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.527616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 00:29:29.095 [2024-10-11 12:06:31.527844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.527877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 00:29:29.095 [2024-10-11 12:06:31.528112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.528146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 00:29:29.095 [2024-10-11 12:06:31.528538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.528570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 00:29:29.095 [2024-10-11 12:06:31.528941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.528975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 
00:29:29.095 [2024-10-11 12:06:31.529366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.529398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 00:29:29.095 [2024-10-11 12:06:31.529642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.529675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 00:29:29.095 [2024-10-11 12:06:31.530011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.530043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 00:29:29.095 [2024-10-11 12:06:31.530479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.530513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 00:29:29.095 [2024-10-11 12:06:31.530879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.530912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 00:29:29.095 [2024-10-11 12:06:31.531296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.531329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 00:29:29.095 [2024-10-11 12:06:31.531696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.531728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 00:29:29.095 [2024-10-11 12:06:31.532094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.532128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 00:29:29.095 [2024-10-11 12:06:31.532582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.532615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 00:29:29.095 [2024-10-11 12:06:31.533015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.533047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 
00:29:29.095 [2024-10-11 12:06:31.533428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.533460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 00:29:29.095 [2024-10-11 12:06:31.533880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.533912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 00:29:29.095 [2024-10-11 12:06:31.534277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.534312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 00:29:29.095 [2024-10-11 12:06:31.534637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.534668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 00:29:29.095 [2024-10-11 12:06:31.535034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.535077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 00:29:29.095 [2024-10-11 12:06:31.535381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.535414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 00:29:29.095 [2024-10-11 12:06:31.535763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.535795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 00:29:29.095 [2024-10-11 12:06:31.536048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.536109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 00:29:29.095 [2024-10-11 12:06:31.536449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.536482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 00:29:29.095 [2024-10-11 12:06:31.536824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.536858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 
00:29:29.095 [2024-10-11 12:06:31.537107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.537140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 00:29:29.095 [2024-10-11 12:06:31.537498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.537531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 00:29:29.095 [2024-10-11 12:06:31.537886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.537920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 00:29:29.095 [2024-10-11 12:06:31.538258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.538291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 00:29:29.095 [2024-10-11 12:06:31.538647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.538681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 00:29:29.095 [2024-10-11 12:06:31.539016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.539049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 00:29:29.095 [2024-10-11 12:06:31.539276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.539311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 00:29:29.095 [2024-10-11 12:06:31.539673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.539706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 00:29:29.095 [2024-10-11 12:06:31.540059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.540113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 00:29:29.095 [2024-10-11 12:06:31.540477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.540509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 
00:29:29.095 [2024-10-11 12:06:31.540877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.540909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 00:29:29.095 [2024-10-11 12:06:31.541173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.541206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.095 qpair failed and we were unable to recover it. 00:29:29.095 [2024-10-11 12:06:31.541581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.095 [2024-10-11 12:06:31.541615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.541851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.541886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.542135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.542169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.542532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.542571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.542925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.542957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.543405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.543439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.543786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.543819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.544076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.544112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 
00:29:29.096 [2024-10-11 12:06:31.544465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.544497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.544778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.544811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.545173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.545207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.545581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.545613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.545954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.545988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.546243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.546280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.546637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.546670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.547022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.547054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.547424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.547458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.547823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.547856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 
00:29:29.096 [2024-10-11 12:06:31.548204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.548239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.548598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.548631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.548981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.549013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.549303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.549338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.549694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.549728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.549980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.550012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.550265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.550300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.550678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.550713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.551078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.551112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.551511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.551544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 
00:29:29.096 [2024-10-11 12:06:31.551901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.551933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.552302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.552338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.552699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.552733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.553089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.553122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.553485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.553519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.553870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.553903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.554271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.554304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.554659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.554692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.555049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.555093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.555459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.555492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 
00:29:29.096 [2024-10-11 12:06:31.555853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.555886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.556247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.556281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.556643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.556675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.557027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.557061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.557412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.557445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.557826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.557865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.558227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.558261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.558510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.558546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.558811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.558844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.559213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.559249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 
00:29:29.096 [2024-10-11 12:06:31.559672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.559704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.560084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.560118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.560485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.560518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.560961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.560995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.561353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.561386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.561739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.561773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.562135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.562169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.562547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.562579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.562952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.562985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.563231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.563265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 
00:29:29.096 [2024-10-11 12:06:31.563612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.563644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.563893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.563926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.564269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.564302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.564655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.564687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.565034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.565074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.565500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.565533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.565883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.565916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.566305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.566340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.566627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.566664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.096 qpair failed and we were unable to recover it. 00:29:29.096 [2024-10-11 12:06:31.566968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.096 [2024-10-11 12:06:31.567003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 
00:29:29.097 [2024-10-11 12:06:31.567287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.567321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.567571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.567604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.567986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.568022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.568417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.568451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.568791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.568824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.569188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.569222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.569597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.569630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.569990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.570022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.570319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.570353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.570708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.570743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 
00:29:29.097 [2024-10-11 12:06:31.571091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.571126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.571542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.571575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.571852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.571886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.572247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.572281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.572642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.572675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.573035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.573089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.573464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.573497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.573844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.573877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.574244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.574278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.574612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.574645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 
00:29:29.097 [2024-10-11 12:06:31.575000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.575033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.575453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.575490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.575844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.575877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.576297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.576332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.576702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.576735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.576971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.577005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.577397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.577432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.577794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.577827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.578198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.578232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.578503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.578539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 
00:29:29.097 [2024-10-11 12:06:31.578846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.578879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.579227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.579261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.579618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.579652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.579926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.579961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.580298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.580332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.580685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.580719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.581108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.581143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.581520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.581552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.581916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.581949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.582292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.582328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 
00:29:29.097 [2024-10-11 12:06:31.582729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.582762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.583119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.583152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.583547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.583580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.583871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.583904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.584266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.584301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.584657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.584692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.585026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.585057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.585461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.585493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.585844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.585877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.586124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.586164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 
00:29:29.097 [2024-10-11 12:06:31.586557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.586591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.586930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.586965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.587302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.587338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.587685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.587718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.588074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.588109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.588460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.588500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.588754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.588789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.589147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.589182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.589454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.589487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.589618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.589653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 
00:29:29.097 [2024-10-11 12:06:31.589891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.589925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.590370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.590404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.590755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.590788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.591024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.591059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.591518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.591553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.591869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.591904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.592142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.592180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.592554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.592588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.594378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.594439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.594883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.594922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 
00:29:29.097 [2024-10-11 12:06:31.595318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.595354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.595713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.595748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.596115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-10-11 12:06:31.596150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-10-11 12:06:31.596519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.596553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.596915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.596947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.597297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.597332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.597676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.597711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.598056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.598102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.598364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.598396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.598745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.598781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 
00:29:29.098 [2024-10-11 12:06:31.599144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.599178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.599415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.599450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.599834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.599869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.600210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.600244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.600605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.600638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.601000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.601033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.601444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.601478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.601834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.601867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.602124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.602160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.602561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.602595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 
00:29:29.098 [2024-10-11 12:06:31.602976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.603009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.603244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.603281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.603524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.603564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.603955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.603988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.604295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.604330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.604685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.604727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.604965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.605001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.605366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.605400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.605752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.605786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.606014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.606050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 
00:29:29.098 [2024-10-11 12:06:31.606440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.606474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.606832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.606866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.607267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.607303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.607683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.607717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.608159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.608194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.608567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.608599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.608845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.608879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.609230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.609266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.609636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.609668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.610056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.610104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 
00:29:29.098 [2024-10-11 12:06:31.610491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.610523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.610889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.610921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.611177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.611213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.611626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.611661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.612024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.612057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.612489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.612523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.612880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.612915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.613283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.613317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.613651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.613685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.613941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.613975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 
00:29:29.098 [2024-10-11 12:06:31.614337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.614373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.614729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.614763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.615207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.615243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.615617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.615651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.616010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.616042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.616410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.616444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.616814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.616847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.617205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.617240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.617609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.617641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.618022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.618057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 
00:29:29.098 [2024-10-11 12:06:31.618466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.618501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.618858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.618891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.619252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.619287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.619672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.619705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.620060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.620106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.620488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.620526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.620911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.620944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.621330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.621365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.621730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.621765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.622123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.622157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 
00:29:29.098 [2024-10-11 12:06:31.622534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.622569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.622863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.622896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.623153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.623186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.623569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.623602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.623940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.623974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.624176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.624208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.624562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.624597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.624943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.624975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.625396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.625429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-10-11 12:06:31.625806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-10-11 12:06:31.625840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 
00:29:29.099 [2024-10-11 12:06:31.626204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-10-11 12:06:31.626238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-10-11 12:06:31.626510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-10-11 12:06:31.626543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-10-11 12:06:31.626762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-10-11 12:06:31.626795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-10-11 12:06:31.627190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-10-11 12:06:31.627225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-10-11 12:06:31.627462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-10-11 12:06:31.627496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-10-11 12:06:31.627854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-10-11 12:06:31.627886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-10-11 12:06:31.628235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-10-11 12:06:31.628268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-10-11 12:06:31.628645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-10-11 12:06:31.628679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-10-11 12:06:31.629046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-10-11 12:06:31.629099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-10-11 12:06:31.629499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-10-11 12:06:31.629531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 
00:29:29.099 [2024-10-11 12:06:31.629894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-10-11 12:06:31.629926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb028000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-10-11 12:06:31.630208] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7f5e0 is same with the state(6) to be set 00:29:29.099 Read completed with error (sct=0, sc=8) 00:29:29.099 starting I/O failed 00:29:29.099 Read completed with error (sct=0, sc=8) 00:29:29.099 starting I/O failed 00:29:29.099 Read completed with error (sct=0, sc=8) 00:29:29.099 starting I/O failed 00:29:29.099 Read completed with error (sct=0, sc=8) 00:29:29.099 starting I/O failed 00:29:29.099 Read completed with error (sct=0, sc=8) 00:29:29.099 starting I/O failed 00:29:29.099 Read completed with error (sct=0, sc=8) 00:29:29.099 starting I/O failed 00:29:29.099 Read completed with error (sct=0, sc=8) 00:29:29.099 starting I/O failed 00:29:29.099 Write completed with error (sct=0, sc=8) 00:29:29.099 starting I/O failed 00:29:29.099 Read completed with error (sct=0, sc=8) 00:29:29.099 starting I/O failed 00:29:29.099 Read completed with error (sct=0, sc=8) 00:29:29.099 starting I/O failed 00:29:29.099 Write completed with error (sct=0, sc=8) 00:29:29.099 starting I/O failed 00:29:29.099 Write completed with error (sct=0, sc=8) 00:29:29.099 starting I/O failed 00:29:29.099 Write completed with error (sct=0, sc=8) 00:29:29.099 starting I/O failed 00:29:29.099 Write completed with error (sct=0, sc=8) 00:29:29.099 starting I/O failed 00:29:29.099 Read completed with error (sct=0, sc=8) 00:29:29.099 starting I/O failed 00:29:29.099 Read completed with error (sct=0, sc=8) 00:29:29.099 starting I/O failed 00:29:29.099 Write completed with error (sct=0, sc=8) 00:29:29.099 starting I/O failed 00:29:29.099 Read completed with error (sct=0, sc=8) 00:29:29.099 starting I/O failed 00:29:29.099 Read completed with error (sct=0, sc=8) 00:29:29.099 starting I/O failed 00:29:29.099 Read completed with error (sct=0, sc=8) 00:29:29.099 starting I/O failed 00:29:29.099 Read completed with error (sct=0, sc=8) 00:29:29.099 starting I/O failed 00:29:29.099 Read completed with error (sct=0, sc=8) 00:29:29.099 starting I/O failed 00:29:29.099 Write completed with error (sct=0, sc=8) 00:29:29.099 starting I/O failed 00:29:29.099 Read completed with error (sct=0, sc=8) 00:29:29.099 starting I/O failed 00:29:29.099 Write completed with error (sct=0, sc=8) 00:29:29.099 starting I/O failed 00:29:29.099 Write completed with error (sct=0, sc=8) 00:29:29.099 starting I/O failed 00:29:29.099 Read completed with error (sct=0, sc=8) 00:29:29.099 starting I/O failed 00:29:29.099 Write completed with error (sct=0, sc=8) 00:29:29.099 starting I/O failed 00:29:29.099 Write completed with error (sct=0, sc=8) 00:29:29.099 starting I/O failed 00:29:29.099 Write completed with error (sct=0, sc=8) 00:29:29.099 starting I/O failed 00:29:29.099 Write completed with error (sct=0, sc=8) 00:29:29.099 starting I/O failed 00:29:29.099 Write completed with error (sct=0, sc=8) 00:29:29.099 starting I/O failed 00:29:29.099 [2024-10-11 12:06:31.631209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.099 [2024-10-11 12:06:31.631675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, 
errno = 111 00:29:29.099 [2024-10-11 12:06:31.631738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-10-11 12:06:31.632032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-10-11 12:06:31.632086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-10-11 12:06:31.632605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-10-11 12:06:31.632711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-10-11 12:06:31.633315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-10-11 12:06:31.633422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-10-11 12:06:31.633865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-10-11 12:06:31.633905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-10-11 12:06:31.634281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-10-11 12:06:31.634318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-10-11 12:06:31.634678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-10-11 12:06:31.634711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-10-11 12:06:31.635081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-10-11 12:06:31.635117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-10-11 12:06:31.635455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-10-11 12:06:31.635489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-10-11 12:06:31.635853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-10-11 12:06:31.635886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 
00:29:29.099 [2024-10-11 12:06:31.636249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-10-11 12:06:31.636283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-10-11 12:06:31.636648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-10-11 12:06:31.636680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-10-11 12:06:31.637041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-10-11 12:06:31.637085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-10-11 12:06:31.637467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-10-11 12:06:31.637500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-10-11 12:06:31.637722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-10-11 12:06:31.637761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-10-11 12:06:31.638096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-10-11 12:06:31.638132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-10-11 12:06:31.638519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-10-11 12:06:31.638554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-10-11 12:06:31.638948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-10-11 12:06:31.638982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-10-11 12:06:31.639206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-10-11 12:06:31.639241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-10-11 12:06:31.639627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-10-11 12:06:31.639660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 
00:29:29.099 [2024-10-11 12:06:31.640060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-10-11 12:06:31.640117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-10-11 12:06:31.640501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-10-11 12:06:31.640534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-10-11 12:06:31.640885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-10-11 12:06:31.640917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-10-11 12:06:31.641300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-10-11 12:06:31.641335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-10-11 12:06:31.641732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-10-11 12:06:31.641764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-10-11 12:06:31.642001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-10-11 12:06:31.642034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-10-11 12:06:31.642411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-10-11 12:06:31.642444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-10-11 12:06:31.642800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-10-11 12:06:31.642834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-10-11 12:06:31.643199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-10-11 12:06:31.643233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-10-11 12:06:31.643647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-10-11 12:06:31.643679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 
00:29:29.100 [2024-10-11 12:06:31.644084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-10-11 12:06:31.644117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-10-11 12:06:31.644386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-10-11 12:06:31.644423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-10-11 12:06:31.644720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-10-11 12:06:31.644753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-10-11 12:06:31.645120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-10-11 12:06:31.645154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-10-11 12:06:31.645547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-10-11 12:06:31.645582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-10-11 12:06:31.645836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-10-11 12:06:31.645869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-10-11 12:06:31.646222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-10-11 12:06:31.646257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-10-11 12:06:31.646510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-10-11 12:06:31.646544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-10-11 12:06:31.646929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-10-11 12:06:31.646962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-10-11 12:06:31.647398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-10-11 12:06:31.647433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 
00:29:29.100 [2024-10-11 12:06:31.647783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-10-11 12:06:31.647817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-10-11 12:06:31.648051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-10-11 12:06:31.648098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-10-11 12:06:31.648448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-10-11 12:06:31.648482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-10-11 12:06:31.648854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-10-11 12:06:31.648887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-10-11 12:06:31.649131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-10-11 12:06:31.649168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-10-11 12:06:31.649550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-10-11 12:06:31.649584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-10-11 12:06:31.649984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-10-11 12:06:31.650018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-10-11 12:06:31.650450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-10-11 12:06:31.650485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-10-11 12:06:31.650866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-10-11 12:06:31.650899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-10-11 12:06:31.651250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-10-11 12:06:31.651285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 
00:29:29.100 [2024-10-11 12:06:31.651666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-10-11 12:06:31.651700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-10-11 12:06:31.651947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-10-11 12:06:31.651980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-10-11 12:06:31.652359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-10-11 12:06:31.652393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-10-11 12:06:31.652745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-10-11 12:06:31.652780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-10-11 12:06:31.653234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-10-11 12:06:31.653268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-10-11 12:06:31.653632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-10-11 12:06:31.653665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-10-11 12:06:31.653965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-10-11 12:06:31.653999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-10-11 12:06:31.654359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-10-11 12:06:31.654392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-10-11 12:06:31.654725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-10-11 12:06:31.654758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-10-11 12:06:31.654975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-10-11 12:06:31.655011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 
00:29:29.100 [2024-10-11 12:06:31.655299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-10-11 12:06:31.655341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-10-11 12:06:31.655698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-10-11 12:06:31.655731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-10-11 12:06:31.656116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-10-11 12:06:31.656151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-10-11 12:06:31.656457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-10-11 12:06:31.656494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-10-11 12:06:31.656896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-10-11 12:06:31.656930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.101 [2024-10-11 12:06:31.657334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.101 [2024-10-11 12:06:31.657369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.101 qpair failed and we were unable to recover it. 00:29:29.101 [2024-10-11 12:06:31.657717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.101 [2024-10-11 12:06:31.657751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.101 qpair failed and we were unable to recover it. 00:29:29.101 [2024-10-11 12:06:31.658121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.101 [2024-10-11 12:06:31.658156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.101 qpair failed and we were unable to recover it. 00:29:29.101 [2024-10-11 12:06:31.658577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.101 [2024-10-11 12:06:31.658611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.101 qpair failed and we were unable to recover it. 00:29:29.101 [2024-10-11 12:06:31.658971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.101 [2024-10-11 12:06:31.659003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.101 qpair failed and we were unable to recover it. 
00:29:29.101 [2024-10-11 12:06:31.659373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.101 [2024-10-11 12:06:31.659406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.101 qpair failed and we were unable to recover it. 00:29:29.101 [2024-10-11 12:06:31.659759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.101 [2024-10-11 12:06:31.659792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.101 qpair failed and we were unable to recover it. 00:29:29.101 [2024-10-11 12:06:31.660039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.101 [2024-10-11 12:06:31.660082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.101 qpair failed and we were unable to recover it. 00:29:29.101 [2024-10-11 12:06:31.660466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.101 [2024-10-11 12:06:31.660500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.101 qpair failed and we were unable to recover it. 00:29:29.101 [2024-10-11 12:06:31.660865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.101 [2024-10-11 12:06:31.660900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.101 qpair failed and we were unable to recover it. 00:29:29.101 [2024-10-11 12:06:31.661293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.101 [2024-10-11 12:06:31.661328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.101 qpair failed and we were unable to recover it. 00:29:29.101 [2024-10-11 12:06:31.661709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.101 [2024-10-11 12:06:31.661742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.101 qpair failed and we were unable to recover it. 00:29:29.101 [2024-10-11 12:06:31.662097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.101 [2024-10-11 12:06:31.662130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.101 qpair failed and we were unable to recover it. 00:29:29.101 [2024-10-11 12:06:31.662492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.101 [2024-10-11 12:06:31.662526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.101 qpair failed and we were unable to recover it. 00:29:29.101 [2024-10-11 12:06:31.662890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.101 [2024-10-11 12:06:31.662923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.101 qpair failed and we were unable to recover it. 
00:29:29.101 [2024-10-11 12:06:31.663132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.101 [2024-10-11 12:06:31.663165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.101 qpair failed and we were unable to recover it. 00:29:29.101 [2024-10-11 12:06:31.663570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.101 [2024-10-11 12:06:31.663603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.101 qpair failed and we were unable to recover it. 00:29:29.101 [2024-10-11 12:06:31.663959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.101 [2024-10-11 12:06:31.663991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.101 qpair failed and we were unable to recover it. 00:29:29.101 [2024-10-11 12:06:31.664335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.101 [2024-10-11 12:06:31.664369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.101 qpair failed and we were unable to recover it. 00:29:29.101 [2024-10-11 12:06:31.664701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.101 [2024-10-11 12:06:31.664733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.101 qpair failed and we were unable to recover it. 00:29:29.101 [2024-10-11 12:06:31.665088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.101 [2024-10-11 12:06:31.665123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.101 qpair failed and we were unable to recover it. 00:29:29.101 [2024-10-11 12:06:31.665514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.101 [2024-10-11 12:06:31.665548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.101 qpair failed and we were unable to recover it. 00:29:29.101 [2024-10-11 12:06:31.665901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.101 [2024-10-11 12:06:31.665935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.101 qpair failed and we were unable to recover it. 00:29:29.101 [2024-10-11 12:06:31.666295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.101 [2024-10-11 12:06:31.666328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.101 qpair failed and we were unable to recover it. 00:29:29.101 [2024-10-11 12:06:31.666687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.101 [2024-10-11 12:06:31.666720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.101 qpair failed and we were unable to recover it. 
00:29:29.101 [2024-10-11 12:06:31.667119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.101 [2024-10-11 12:06:31.667152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.101 qpair failed and we were unable to recover it. 00:29:29.101 [2024-10-11 12:06:31.667552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.101 [2024-10-11 12:06:31.667584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.101 qpair failed and we were unable to recover it. 00:29:29.101 [2024-10-11 12:06:31.667878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.101 [2024-10-11 12:06:31.667911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.101 qpair failed and we were unable to recover it. 00:29:29.101 [2024-10-11 12:06:31.668255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.101 [2024-10-11 12:06:31.668289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.101 qpair failed and we were unable to recover it. 00:29:29.101 [2024-10-11 12:06:31.668647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.101 [2024-10-11 12:06:31.668680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.101 qpair failed and we were unable to recover it. 00:29:29.101 [2024-10-11 12:06:31.669051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.101 [2024-10-11 12:06:31.669107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.101 qpair failed and we were unable to recover it. 00:29:29.101 [2024-10-11 12:06:31.669509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.101 [2024-10-11 12:06:31.669542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.101 qpair failed and we were unable to recover it. 00:29:29.101 [2024-10-11 12:06:31.669897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.101 [2024-10-11 12:06:31.669929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.101 qpair failed and we were unable to recover it. 00:29:29.101 [2024-10-11 12:06:31.670303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.101 [2024-10-11 12:06:31.670339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.101 qpair failed and we were unable to recover it. 00:29:29.101 [2024-10-11 12:06:31.670679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.101 [2024-10-11 12:06:31.670713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.101 qpair failed and we were unable to recover it. 
00:29:29.101 [2024-10-11 12:06:31.671077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.101 [2024-10-11 12:06:31.671117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.101 qpair failed and we were unable to recover it. 00:29:29.101 [2024-10-11 12:06:31.671486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.101 [2024-10-11 12:06:31.671518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.101 qpair failed and we were unable to recover it. 00:29:29.101 [2024-10-11 12:06:31.671866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.101 [2024-10-11 12:06:31.671900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.101 qpair failed and we were unable to recover it. 00:29:29.101 [2024-10-11 12:06:31.672168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.101 [2024-10-11 12:06:31.672202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.101 qpair failed and we were unable to recover it. 00:29:29.101 [2024-10-11 12:06:31.672622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.101 [2024-10-11 12:06:31.672656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.101 qpair failed and we were unable to recover it. 00:29:29.101 [2024-10-11 12:06:31.673011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.102 [2024-10-11 12:06:31.673045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.102 qpair failed and we were unable to recover it. 00:29:29.102 [2024-10-11 12:06:31.673308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.102 [2024-10-11 12:06:31.673341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.102 qpair failed and we were unable to recover it. 00:29:29.102 [2024-10-11 12:06:31.673613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.102 [2024-10-11 12:06:31.673647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.102 qpair failed and we were unable to recover it. 00:29:29.102 [2024-10-11 12:06:31.673998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.102 [2024-10-11 12:06:31.674031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.102 qpair failed and we were unable to recover it. 00:29:29.102 [2024-10-11 12:06:31.674378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.102 [2024-10-11 12:06:31.674412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.102 qpair failed and we were unable to recover it. 
00:29:29.102 [2024-10-11 12:06:31.674760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.102 [2024-10-11 12:06:31.674793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.102 qpair failed and we were unable to recover it. 00:29:29.102 [2024-10-11 12:06:31.675140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.102 [2024-10-11 12:06:31.675174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.102 qpair failed and we were unable to recover it. 00:29:29.102 [2024-10-11 12:06:31.675439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.102 [2024-10-11 12:06:31.675473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.102 qpair failed and we were unable to recover it. 00:29:29.102 [2024-10-11 12:06:31.675823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.102 [2024-10-11 12:06:31.675856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.102 qpair failed and we were unable to recover it. 00:29:29.102 [2024-10-11 12:06:31.676217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.102 [2024-10-11 12:06:31.676251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.102 qpair failed and we were unable to recover it. 00:29:29.102 [2024-10-11 12:06:31.676625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.102 [2024-10-11 12:06:31.676658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.102 qpair failed and we were unable to recover it. 00:29:29.102 [2024-10-11 12:06:31.677025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.102 [2024-10-11 12:06:31.677058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.102 qpair failed and we were unable to recover it. 00:29:29.102 [2024-10-11 12:06:31.677503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.102 [2024-10-11 12:06:31.677536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.102 qpair failed and we were unable to recover it. 00:29:29.102 [2024-10-11 12:06:31.677888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.102 [2024-10-11 12:06:31.677921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.102 qpair failed and we were unable to recover it. 00:29:29.102 [2024-10-11 12:06:31.678260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.102 [2024-10-11 12:06:31.678293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.102 qpair failed and we were unable to recover it. 
00:29:29.102 [2024-10-11 12:06:31.678653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.102 [2024-10-11 12:06:31.678686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.102 qpair failed and we were unable to recover it. 00:29:29.102 [2024-10-11 12:06:31.679049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.102 [2024-10-11 12:06:31.679095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.102 qpair failed and we were unable to recover it. 00:29:29.102 [2024-10-11 12:06:31.679490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.102 [2024-10-11 12:06:31.679523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.102 qpair failed and we were unable to recover it. 00:29:29.102 [2024-10-11 12:06:31.679879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.102 [2024-10-11 12:06:31.679912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.102 qpair failed and we were unable to recover it. 00:29:29.102 [2024-10-11 12:06:31.680254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.102 [2024-10-11 12:06:31.680288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.102 qpair failed and we were unable to recover it. 00:29:29.102 [2024-10-11 12:06:31.680628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.102 [2024-10-11 12:06:31.680660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.102 qpair failed and we were unable to recover it. 00:29:29.102 [2024-10-11 12:06:31.680883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.102 [2024-10-11 12:06:31.680918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.102 qpair failed and we were unable to recover it. 00:29:29.102 [2024-10-11 12:06:31.681204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.102 [2024-10-11 12:06:31.681239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.102 qpair failed and we were unable to recover it. 00:29:29.102 [2024-10-11 12:06:31.681613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.102 [2024-10-11 12:06:31.681644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.102 qpair failed and we were unable to recover it. 00:29:29.102 [2024-10-11 12:06:31.681993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.102 [2024-10-11 12:06:31.682026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.102 qpair failed and we were unable to recover it. 
00:29:29.102 [2024-10-11 12:06:31.682373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.102 [2024-10-11 12:06:31.682406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420
00:29:29.102 qpair failed and we were unable to recover it.
[... the same three-line failure pattern (connect() failed with errno = 111; sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every reconnect attempt logged between 12:06:31.682373 and 12:06:31.761518, wall-clock 00:29:29.102-00:29:29.108, ~210 identical entries ...]
00:29:29.108 [2024-10-11 12:06:31.761955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.108 [2024-10-11 12:06:31.761988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.108 qpair failed and we were unable to recover it. 00:29:29.108 [2024-10-11 12:06:31.762380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.108 [2024-10-11 12:06:31.762415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.108 qpair failed and we were unable to recover it. 00:29:29.108 [2024-10-11 12:06:31.762651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.108 [2024-10-11 12:06:31.762686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.108 qpair failed and we were unable to recover it. 00:29:29.108 [2024-10-11 12:06:31.763052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.108 [2024-10-11 12:06:31.763097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.108 qpair failed and we were unable to recover it. 00:29:29.108 [2024-10-11 12:06:31.763365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.108 [2024-10-11 12:06:31.763398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.108 qpair failed and we were unable to recover it. 00:29:29.108 [2024-10-11 12:06:31.763752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.108 [2024-10-11 12:06:31.763785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.108 qpair failed and we were unable to recover it. 00:29:29.108 [2024-10-11 12:06:31.764160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.108 [2024-10-11 12:06:31.764193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.108 qpair failed and we were unable to recover it. 00:29:29.108 [2024-10-11 12:06:31.764548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.108 [2024-10-11 12:06:31.764581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.108 qpair failed and we were unable to recover it. 00:29:29.108 [2024-10-11 12:06:31.764930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.108 [2024-10-11 12:06:31.764963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.108 qpair failed and we were unable to recover it. 00:29:29.108 [2024-10-11 12:06:31.765327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.108 [2024-10-11 12:06:31.765361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.108 qpair failed and we were unable to recover it. 
00:29:29.108 [2024-10-11 12:06:31.765579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.108 [2024-10-11 12:06:31.765614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.108 qpair failed and we were unable to recover it. 00:29:29.108 [2024-10-11 12:06:31.765976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.108 [2024-10-11 12:06:31.766009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.108 qpair failed and we were unable to recover it. 00:29:29.108 [2024-10-11 12:06:31.766441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.108 [2024-10-11 12:06:31.766476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.108 qpair failed and we were unable to recover it. 00:29:29.108 [2024-10-11 12:06:31.766831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.108 [2024-10-11 12:06:31.766864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.108 qpair failed and we were unable to recover it. 00:29:29.108 [2024-10-11 12:06:31.767315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.108 [2024-10-11 12:06:31.767350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.108 qpair failed and we were unable to recover it. 00:29:29.108 [2024-10-11 12:06:31.767701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.108 [2024-10-11 12:06:31.767734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.108 qpair failed and we were unable to recover it. 00:29:29.108 [2024-10-11 12:06:31.768083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.108 [2024-10-11 12:06:31.768116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.108 qpair failed and we were unable to recover it. 00:29:29.108 [2024-10-11 12:06:31.768392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.108 [2024-10-11 12:06:31.768426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.108 qpair failed and we were unable to recover it. 00:29:29.108 [2024-10-11 12:06:31.768779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.108 [2024-10-11 12:06:31.768813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.108 qpair failed and we were unable to recover it. 00:29:29.108 [2024-10-11 12:06:31.769187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.108 [2024-10-11 12:06:31.769220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.109 qpair failed and we were unable to recover it. 
00:29:29.109 [2024-10-11 12:06:31.769591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.109 [2024-10-11 12:06:31.769623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.109 qpair failed and we were unable to recover it. 00:29:29.109 [2024-10-11 12:06:31.769974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.109 [2024-10-11 12:06:31.770006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.109 qpair failed and we were unable to recover it. 00:29:29.109 [2024-10-11 12:06:31.770360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.109 [2024-10-11 12:06:31.770395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.109 qpair failed and we were unable to recover it. 00:29:29.109 [2024-10-11 12:06:31.770556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.109 [2024-10-11 12:06:31.770590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.109 qpair failed and we were unable to recover it. 00:29:29.109 [2024-10-11 12:06:31.771080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.109 [2024-10-11 12:06:31.771114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.109 qpair failed and we were unable to recover it. 00:29:29.109 [2024-10-11 12:06:31.771379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.109 [2024-10-11 12:06:31.771413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.109 qpair failed and we were unable to recover it. 00:29:29.109 [2024-10-11 12:06:31.771663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.109 [2024-10-11 12:06:31.771696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.109 qpair failed and we were unable to recover it. 00:29:29.109 [2024-10-11 12:06:31.771971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.109 [2024-10-11 12:06:31.772010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.109 qpair failed and we were unable to recover it. 00:29:29.109 [2024-10-11 12:06:31.772378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.109 [2024-10-11 12:06:31.772413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.109 qpair failed and we were unable to recover it. 00:29:29.109 [2024-10-11 12:06:31.772766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.109 [2024-10-11 12:06:31.772799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.109 qpair failed and we were unable to recover it. 
00:29:29.109 [2024-10-11 12:06:31.773149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.109 [2024-10-11 12:06:31.773183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.109 qpair failed and we were unable to recover it. 00:29:29.109 [2024-10-11 12:06:31.773517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.109 [2024-10-11 12:06:31.773550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.109 qpair failed and we were unable to recover it. 00:29:29.109 [2024-10-11 12:06:31.773788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.109 [2024-10-11 12:06:31.773821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.109 qpair failed and we were unable to recover it. 00:29:29.109 [2024-10-11 12:06:31.774157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.109 [2024-10-11 12:06:31.774192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.109 qpair failed and we were unable to recover it. 00:29:29.109 [2024-10-11 12:06:31.774474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.109 [2024-10-11 12:06:31.774506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.109 qpair failed and we were unable to recover it. 00:29:29.109 [2024-10-11 12:06:31.774914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.109 [2024-10-11 12:06:31.774947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.109 qpair failed and we were unable to recover it. 00:29:29.109 [2024-10-11 12:06:31.775319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.109 [2024-10-11 12:06:31.775353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.109 qpair failed and we were unable to recover it. 00:29:29.109 [2024-10-11 12:06:31.775716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.109 [2024-10-11 12:06:31.775748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.109 qpair failed and we were unable to recover it. 00:29:29.109 [2024-10-11 12:06:31.776129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.109 [2024-10-11 12:06:31.776162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.109 qpair failed and we were unable to recover it. 00:29:29.109 [2024-10-11 12:06:31.776435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.109 [2024-10-11 12:06:31.776468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.109 qpair failed and we were unable to recover it. 
00:29:29.109 [2024-10-11 12:06:31.776819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.109 [2024-10-11 12:06:31.776852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.109 qpair failed and we were unable to recover it. 00:29:29.109 [2024-10-11 12:06:31.777257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.109 [2024-10-11 12:06:31.777292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.109 qpair failed and we were unable to recover it. 00:29:29.109 [2024-10-11 12:06:31.777658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.109 [2024-10-11 12:06:31.777691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.109 qpair failed and we were unable to recover it. 00:29:29.109 [2024-10-11 12:06:31.778041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.109 [2024-10-11 12:06:31.778091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.109 qpair failed and we were unable to recover it. 00:29:29.109 [2024-10-11 12:06:31.778469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.109 [2024-10-11 12:06:31.778502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.109 qpair failed and we were unable to recover it. 00:29:29.109 [2024-10-11 12:06:31.778852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.109 [2024-10-11 12:06:31.778885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.109 qpair failed and we were unable to recover it. 00:29:29.109 [2024-10-11 12:06:31.779247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.109 [2024-10-11 12:06:31.779280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.109 qpair failed and we were unable to recover it. 00:29:29.109 [2024-10-11 12:06:31.779650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.109 [2024-10-11 12:06:31.779683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.109 qpair failed and we were unable to recover it. 00:29:29.109 [2024-10-11 12:06:31.780102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.109 [2024-10-11 12:06:31.780137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.109 qpair failed and we were unable to recover it. 00:29:29.109 [2024-10-11 12:06:31.780486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.109 [2024-10-11 12:06:31.780519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.109 qpair failed and we were unable to recover it. 
00:29:29.109 [2024-10-11 12:06:31.780879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.109 [2024-10-11 12:06:31.780912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.109 qpair failed and we were unable to recover it. 00:29:29.109 [2024-10-11 12:06:31.781250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.109 [2024-10-11 12:06:31.781285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.109 qpair failed and we were unable to recover it. 00:29:29.109 [2024-10-11 12:06:31.781640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.109 [2024-10-11 12:06:31.781672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.109 qpair failed and we were unable to recover it. 00:29:29.109 [2024-10-11 12:06:31.782027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.109 [2024-10-11 12:06:31.782060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.109 qpair failed and we were unable to recover it. 00:29:29.109 [2024-10-11 12:06:31.782380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.109 [2024-10-11 12:06:31.782413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.109 qpair failed and we were unable to recover it. 00:29:29.109 [2024-10-11 12:06:31.782770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.109 [2024-10-11 12:06:31.782804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.109 qpair failed and we were unable to recover it. 00:29:29.109 [2024-10-11 12:06:31.783090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.109 [2024-10-11 12:06:31.783124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.109 qpair failed and we were unable to recover it. 00:29:29.109 [2024-10-11 12:06:31.783459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.109 [2024-10-11 12:06:31.783491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.109 qpair failed and we were unable to recover it. 00:29:29.109 [2024-10-11 12:06:31.783833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.109 [2024-10-11 12:06:31.783866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.109 qpair failed and we were unable to recover it. 00:29:29.110 [2024-10-11 12:06:31.784256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.110 [2024-10-11 12:06:31.784290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.110 qpair failed and we were unable to recover it. 
00:29:29.110 [2024-10-11 12:06:31.784654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.110 [2024-10-11 12:06:31.784687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.110 qpair failed and we were unable to recover it. 00:29:29.110 [2024-10-11 12:06:31.785044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.110 [2024-10-11 12:06:31.785104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.110 qpair failed and we were unable to recover it. 00:29:29.110 [2024-10-11 12:06:31.785490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.110 [2024-10-11 12:06:31.785523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.110 qpair failed and we were unable to recover it. 00:29:29.110 [2024-10-11 12:06:31.785886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.110 [2024-10-11 12:06:31.785920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.110 qpair failed and we were unable to recover it. 00:29:29.385 [2024-10-11 12:06:31.786165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.385 [2024-10-11 12:06:31.786203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.385 qpair failed and we were unable to recover it. 00:29:29.385 [2024-10-11 12:06:31.786627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.385 [2024-10-11 12:06:31.786664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.385 qpair failed and we were unable to recover it. 00:29:29.385 [2024-10-11 12:06:31.787050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.385 [2024-10-11 12:06:31.787094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.385 qpair failed and we were unable to recover it. 00:29:29.385 [2024-10-11 12:06:31.787476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.385 [2024-10-11 12:06:31.787516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.385 qpair failed and we were unable to recover it. 00:29:29.385 [2024-10-11 12:06:31.787878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.385 [2024-10-11 12:06:31.787912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.385 qpair failed and we were unable to recover it. 00:29:29.385 [2024-10-11 12:06:31.788376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.385 [2024-10-11 12:06:31.788410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.385 qpair failed and we were unable to recover it. 
00:29:29.385 [2024-10-11 12:06:31.788629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.385 [2024-10-11 12:06:31.788662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.385 qpair failed and we were unable to recover it. 00:29:29.385 [2024-10-11 12:06:31.789049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.385 [2024-10-11 12:06:31.789094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.385 qpair failed and we were unable to recover it. 00:29:29.385 [2024-10-11 12:06:31.789442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.385 [2024-10-11 12:06:31.789475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.385 qpair failed and we were unable to recover it. 00:29:29.385 [2024-10-11 12:06:31.789831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.385 [2024-10-11 12:06:31.789864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.385 qpair failed and we were unable to recover it. 00:29:29.385 [2024-10-11 12:06:31.790137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.385 [2024-10-11 12:06:31.790172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.385 qpair failed and we were unable to recover it. 00:29:29.385 [2024-10-11 12:06:31.790542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.385 [2024-10-11 12:06:31.790575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.385 qpair failed and we were unable to recover it. 00:29:29.385 [2024-10-11 12:06:31.790934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.385 [2024-10-11 12:06:31.790966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.385 qpair failed and we were unable to recover it. 00:29:29.385 [2024-10-11 12:06:31.791274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.385 [2024-10-11 12:06:31.791308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.385 qpair failed and we were unable to recover it. 00:29:29.385 [2024-10-11 12:06:31.791666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.385 [2024-10-11 12:06:31.791700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-10-11 12:06:31.792054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-10-11 12:06:31.792097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 
00:29:29.386 [2024-10-11 12:06:31.792344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-10-11 12:06:31.792379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-10-11 12:06:31.792764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-10-11 12:06:31.792798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-10-11 12:06:31.793213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-10-11 12:06:31.793247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-10-11 12:06:31.793628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-10-11 12:06:31.793661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-10-11 12:06:31.794045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-10-11 12:06:31.794092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-10-11 12:06:31.794450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-10-11 12:06:31.794483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-10-11 12:06:31.794839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-10-11 12:06:31.794872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-10-11 12:06:31.795289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-10-11 12:06:31.795324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-10-11 12:06:31.795696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-10-11 12:06:31.795730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-10-11 12:06:31.795970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-10-11 12:06:31.796006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 
00:29:29.386 [2024-10-11 12:06:31.796315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-10-11 12:06:31.796349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-10-11 12:06:31.796667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-10-11 12:06:31.796701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-10-11 12:06:31.797060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-10-11 12:06:31.797103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-10-11 12:06:31.797501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-10-11 12:06:31.797534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-10-11 12:06:31.797899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-10-11 12:06:31.797933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-10-11 12:06:31.798212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-10-11 12:06:31.798245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-10-11 12:06:31.798471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-10-11 12:06:31.798506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-10-11 12:06:31.798853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-10-11 12:06:31.798887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-10-11 12:06:31.799249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-10-11 12:06:31.799283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-10-11 12:06:31.799647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-10-11 12:06:31.799680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 
00:29:29.386 [2024-10-11 12:06:31.800005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-10-11 12:06:31.800039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-10-11 12:06:31.800417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-10-11 12:06:31.800450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-10-11 12:06:31.800799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-10-11 12:06:31.800832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-10-11 12:06:31.801194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-10-11 12:06:31.801229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-10-11 12:06:31.801492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-10-11 12:06:31.801525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-10-11 12:06:31.801881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-10-11 12:06:31.801914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-10-11 12:06:31.802273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-10-11 12:06:31.802308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-10-11 12:06:31.802671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-10-11 12:06:31.802710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-10-11 12:06:31.803054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-10-11 12:06:31.803100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-10-11 12:06:31.803438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-10-11 12:06:31.803471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 
00:29:29.386 [2024-10-11 12:06:31.803733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-10-11 12:06:31.803768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-10-11 12:06:31.804005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-10-11 12:06:31.804037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-10-11 12:06:31.804397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-10-11 12:06:31.804430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-10-11 12:06:31.804805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-10-11 12:06:31.804839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-10-11 12:06:31.805098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-10-11 12:06:31.805132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-10-11 12:06:31.805412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-10-11 12:06:31.805444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-10-11 12:06:31.805862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-10-11 12:06:31.805896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-10-11 12:06:31.806155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-10-11 12:06:31.806189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-10-11 12:06:31.806567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-10-11 12:06:31.806601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.387 qpair failed and we were unable to recover it. 00:29:29.387 [2024-10-11 12:06:31.806783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.387 [2024-10-11 12:06:31.806819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.387 qpair failed and we were unable to recover it. 
00:29:29.387 [2024-10-11 12:06:31.807200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.387 [2024-10-11 12:06:31.807236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.387 qpair failed and we were unable to recover it. 00:29:29.387 [2024-10-11 12:06:31.807479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.387 [2024-10-11 12:06:31.807514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.387 qpair failed and we were unable to recover it. 00:29:29.387 [2024-10-11 12:06:31.807865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.387 [2024-10-11 12:06:31.807899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.387 qpair failed and we were unable to recover it. 00:29:29.387 [2024-10-11 12:06:31.808299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.387 [2024-10-11 12:06:31.808333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.387 qpair failed and we were unable to recover it. 00:29:29.387 [2024-10-11 12:06:31.808581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.387 [2024-10-11 12:06:31.808614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.387 qpair failed and we were unable to recover it. 00:29:29.387 [2024-10-11 12:06:31.808946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.387 [2024-10-11 12:06:31.808980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.387 qpair failed and we were unable to recover it. 00:29:29.387 [2024-10-11 12:06:31.809327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.387 [2024-10-11 12:06:31.809361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.387 qpair failed and we were unable to recover it. 00:29:29.387 [2024-10-11 12:06:31.809720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.387 [2024-10-11 12:06:31.809754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.387 qpair failed and we were unable to recover it. 00:29:29.387 [2024-10-11 12:06:31.810161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.387 [2024-10-11 12:06:31.810196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.387 qpair failed and we were unable to recover it. 00:29:29.387 [2024-10-11 12:06:31.810562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.387 [2024-10-11 12:06:31.810595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.387 qpair failed and we were unable to recover it. 
00:29:29.387 [2024-10-11 12:06:31.810940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.387 [2024-10-11 12:06:31.810973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420
00:29:29.387 qpair failed and we were unable to recover it.
00:29:29.387 [... the same three-line failure (posix.c:1055 connect() failed, errno = 111; nvme_tcp.c:2399 sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 2024-10-11 12:06:31.811321 through 2024-10-11 12:06:31.888971, elapsed time 00:29:29.387 - 00:29:29.392 ...]
00:29:29.392 [2024-10-11 12:06:31.889325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.392 [2024-10-11 12:06:31.889360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.392 qpair failed and we were unable to recover it. 00:29:29.392 [2024-10-11 12:06:31.889727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.392 [2024-10-11 12:06:31.889760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.392 qpair failed and we were unable to recover it. 00:29:29.392 [2024-10-11 12:06:31.889924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.392 [2024-10-11 12:06:31.889956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.392 qpair failed and we were unable to recover it. 00:29:29.392 [2024-10-11 12:06:31.890187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.392 [2024-10-11 12:06:31.890220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.392 qpair failed and we were unable to recover it. 00:29:29.392 [2024-10-11 12:06:31.890589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.392 [2024-10-11 12:06:31.890623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.392 qpair failed and we were unable to recover it. 00:29:29.393 [2024-10-11 12:06:31.890980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-10-11 12:06:31.891014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-10-11 12:06:31.891267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-10-11 12:06:31.891301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-10-11 12:06:31.891723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-10-11 12:06:31.891756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-10-11 12:06:31.891989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-10-11 12:06:31.892023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-10-11 12:06:31.892389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-10-11 12:06:31.892423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 
00:29:29.393 [2024-10-11 12:06:31.892774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-10-11 12:06:31.892807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-10-11 12:06:31.893133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-10-11 12:06:31.893168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-10-11 12:06:31.893424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-10-11 12:06:31.893460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-10-11 12:06:31.893800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-10-11 12:06:31.893833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-10-11 12:06:31.894135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-10-11 12:06:31.894169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-10-11 12:06:31.894566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-10-11 12:06:31.894599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-10-11 12:06:31.894877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-10-11 12:06:31.894910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-10-11 12:06:31.895251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-10-11 12:06:31.895285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-10-11 12:06:31.895661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-10-11 12:06:31.895695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-10-11 12:06:31.896081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-10-11 12:06:31.896114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 
00:29:29.393 [2024-10-11 12:06:31.896499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-10-11 12:06:31.896532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-10-11 12:06:31.896853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-10-11 12:06:31.896887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-10-11 12:06:31.897245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-10-11 12:06:31.897279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-10-11 12:06:31.897635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-10-11 12:06:31.897667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-10-11 12:06:31.898026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-10-11 12:06:31.898059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-10-11 12:06:31.898407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-10-11 12:06:31.898439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-10-11 12:06:31.898802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-10-11 12:06:31.898835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-10-11 12:06:31.899181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-10-11 12:06:31.899216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-10-11 12:06:31.899448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-10-11 12:06:31.899482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-10-11 12:06:31.899740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-10-11 12:06:31.899773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 
00:29:29.393 [2024-10-11 12:06:31.900121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-10-11 12:06:31.900155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-10-11 12:06:31.900510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-10-11 12:06:31.900542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-10-11 12:06:31.900890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-10-11 12:06:31.900924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-10-11 12:06:31.901167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-10-11 12:06:31.901203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-10-11 12:06:31.901566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-10-11 12:06:31.901605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-10-11 12:06:31.901830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-10-11 12:06:31.901866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-10-11 12:06:31.902219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-10-11 12:06:31.902252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-10-11 12:06:31.902619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-10-11 12:06:31.902651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-10-11 12:06:31.902908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-10-11 12:06:31.902941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-10-11 12:06:31.903103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-10-11 12:06:31.903138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 
00:29:29.393 [2024-10-11 12:06:31.903492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-10-11 12:06:31.903526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-10-11 12:06:31.903892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-10-11 12:06:31.903925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-10-11 12:06:31.904357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-10-11 12:06:31.904390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-10-11 12:06:31.904678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-10-11 12:06:31.904712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-10-11 12:06:31.905087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-10-11 12:06:31.905121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-10-11 12:06:31.905447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-10-11 12:06:31.905480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-10-11 12:06:31.905832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-10-11 12:06:31.905864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-10-11 12:06:31.906130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-10-11 12:06:31.906164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-10-11 12:06:31.906522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-10-11 12:06:31.906555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-10-11 12:06:31.906963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-10-11 12:06:31.906995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 
00:29:29.394 [2024-10-11 12:06:31.907380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-10-11 12:06:31.907413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-10-11 12:06:31.907776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-10-11 12:06:31.907810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-10-11 12:06:31.908187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-10-11 12:06:31.908220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-10-11 12:06:31.908596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-10-11 12:06:31.908629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-10-11 12:06:31.908989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-10-11 12:06:31.909021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-10-11 12:06:31.909383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-10-11 12:06:31.909416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-10-11 12:06:31.909850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-10-11 12:06:31.909883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-10-11 12:06:31.910228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-10-11 12:06:31.910262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-10-11 12:06:31.910632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-10-11 12:06:31.910664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-10-11 12:06:31.910907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-10-11 12:06:31.910944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 
00:29:29.394 [2024-10-11 12:06:31.911297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-10-11 12:06:31.911332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-10-11 12:06:31.911723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-10-11 12:06:31.911757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-10-11 12:06:31.912108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-10-11 12:06:31.912142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-10-11 12:06:31.912507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-10-11 12:06:31.912540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-10-11 12:06:31.912875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-10-11 12:06:31.912908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-10-11 12:06:31.913155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-10-11 12:06:31.913189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-10-11 12:06:31.913540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-10-11 12:06:31.913573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-10-11 12:06:31.913832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-10-11 12:06:31.913865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-10-11 12:06:31.914244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-10-11 12:06:31.914278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-10-11 12:06:31.914650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-10-11 12:06:31.914683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 
00:29:29.394 [2024-10-11 12:06:31.915032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-10-11 12:06:31.915082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-10-11 12:06:31.915341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-10-11 12:06:31.915376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-10-11 12:06:31.915732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-10-11 12:06:31.915766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-10-11 12:06:31.916174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-10-11 12:06:31.916208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-10-11 12:06:31.916489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-10-11 12:06:31.916528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-10-11 12:06:31.916875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-10-11 12:06:31.916909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-10-11 12:06:31.917140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-10-11 12:06:31.917174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-10-11 12:06:31.917550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-10-11 12:06:31.917584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-10-11 12:06:31.917935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-10-11 12:06:31.917968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-10-11 12:06:31.918326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-10-11 12:06:31.918361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 
00:29:29.395 [2024-10-11 12:06:31.918778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-10-11 12:06:31.918810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-10-11 12:06:31.919171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-10-11 12:06:31.919205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-10-11 12:06:31.919632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-10-11 12:06:31.919665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-10-11 12:06:31.920020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-10-11 12:06:31.920052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-10-11 12:06:31.920454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-10-11 12:06:31.920489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-10-11 12:06:31.920926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-10-11 12:06:31.920959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-10-11 12:06:31.921333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-10-11 12:06:31.921366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-10-11 12:06:31.921733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-10-11 12:06:31.921766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-10-11 12:06:31.922146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-10-11 12:06:31.922182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-10-11 12:06:31.922552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-10-11 12:06:31.922584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 
00:29:29.395 [2024-10-11 12:06:31.922943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-10-11 12:06:31.922976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-10-11 12:06:31.923320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-10-11 12:06:31.923354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-10-11 12:06:31.923602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-10-11 12:06:31.923635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-10-11 12:06:31.924084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-10-11 12:06:31.924118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-10-11 12:06:31.924478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-10-11 12:06:31.924511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-10-11 12:06:31.924832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-10-11 12:06:31.924866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-10-11 12:06:31.925239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-10-11 12:06:31.925272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-10-11 12:06:31.925631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-10-11 12:06:31.925665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-10-11 12:06:31.926083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-10-11 12:06:31.926117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-10-11 12:06:31.926478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-10-11 12:06:31.926511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 
00:29:29.395 [2024-10-11 12:06:31.926864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-10-11 12:06:31.926896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-10-11 12:06:31.927186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-10-11 12:06:31.927220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-10-11 12:06:31.927568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-10-11 12:06:31.927601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-10-11 12:06:31.927962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-10-11 12:06:31.927994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-10-11 12:06:31.928403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-10-11 12:06:31.928437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-10-11 12:06:31.928808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-10-11 12:06:31.928841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-10-11 12:06:31.929210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-10-11 12:06:31.929244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-10-11 12:06:31.929606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-10-11 12:06:31.929638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-10-11 12:06:31.929879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-10-11 12:06:31.929913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-10-11 12:06:31.930201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-10-11 12:06:31.930234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 
00:29:29.396 [2024-10-11 12:06:31.930614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.396 [2024-10-11 12:06:31.930647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.396 qpair failed and we were unable to recover it. 00:29:29.396 [2024-10-11 12:06:31.931040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.396 [2024-10-11 12:06:31.931085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.396 qpair failed and we were unable to recover it. 00:29:29.396 [2024-10-11 12:06:31.931421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.396 [2024-10-11 12:06:31.931453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.396 qpair failed and we were unable to recover it. 00:29:29.396 [2024-10-11 12:06:31.931810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.396 [2024-10-11 12:06:31.931843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.396 qpair failed and we were unable to recover it. 00:29:29.396 [2024-10-11 12:06:31.932210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.396 [2024-10-11 12:06:31.932250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.396 qpair failed and we were unable to recover it. 00:29:29.396 [2024-10-11 12:06:31.932620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.396 [2024-10-11 12:06:31.932653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.396 qpair failed and we were unable to recover it. 00:29:29.396 [2024-10-11 12:06:31.933049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.396 [2024-10-11 12:06:31.933093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.396 qpair failed and we were unable to recover it. 00:29:29.396 [2024-10-11 12:06:31.933456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.396 [2024-10-11 12:06:31.933489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.396 qpair failed and we were unable to recover it. 00:29:29.396 [2024-10-11 12:06:31.933854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.396 [2024-10-11 12:06:31.933887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.396 qpair failed and we were unable to recover it. 00:29:29.396 [2024-10-11 12:06:31.934249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.396 [2024-10-11 12:06:31.934283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.396 qpair failed and we were unable to recover it. 
00:29:29.396 [2024-10-11 12:06:31.934518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.396 [2024-10-11 12:06:31.934552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.396 qpair failed and we were unable to recover it. 00:29:29.396 [2024-10-11 12:06:31.934905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.396 [2024-10-11 12:06:31.934938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.396 qpair failed and we were unable to recover it. 00:29:29.396 [2024-10-11 12:06:31.935303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.396 [2024-10-11 12:06:31.935336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.396 qpair failed and we were unable to recover it. 00:29:29.396 [2024-10-11 12:06:31.935701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.396 [2024-10-11 12:06:31.935735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.396 qpair failed and we were unable to recover it. 00:29:29.396 [2024-10-11 12:06:31.936095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.396 [2024-10-11 12:06:31.936130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.396 qpair failed and we were unable to recover it. 00:29:29.396 [2024-10-11 12:06:31.936577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.396 [2024-10-11 12:06:31.936610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.396 qpair failed and we were unable to recover it. 00:29:29.396 [2024-10-11 12:06:31.936958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.396 [2024-10-11 12:06:31.936991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.396 qpair failed and we were unable to recover it. 00:29:29.396 [2024-10-11 12:06:31.937247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.396 [2024-10-11 12:06:31.937280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.396 qpair failed and we were unable to recover it. 00:29:29.396 [2024-10-11 12:06:31.937667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.396 [2024-10-11 12:06:31.937701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.396 qpair failed and we were unable to recover it. 00:29:29.396 [2024-10-11 12:06:31.937856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.396 [2024-10-11 12:06:31.937889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:29.396 qpair failed and we were unable to recover it. 
00:29:29.396 [2024-10-11 12:06:31.938245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.396 [2024-10-11 12:06:31.938279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420
00:29:29.396 qpair failed and we were unable to recover it.
(the same three-line sequence of connect() failed with errno = 111, sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." repeats, with advancing timestamps, roughly a hundred times between 12:06:31.938 and 12:06:31.975)
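On Linux, errno 111 is ECONNREFUSED: the connect() attempts reach 10.0.0.2 but nothing is accepting on NVMe/TCP port 4420, so each qpair reconnect fails immediately and the posix_sock_create / nvme_tcp_qpair_connect_sock error pair above is logged again on the next retry. A minimal standalone sketch of how that errno surfaces, using plain POSIX sockets rather than SPDK's socket layer (the address and port are taken from the log; everything else is illustrative):

/* Minimal POSIX sketch (not SPDK code): attempt a TCP connect() and report
 * errno on failure. With no listener on the target port, this typically
 * fails with errno 111 (ECONNREFUSED) on Linux, the same errno reported by
 * posix_sock_create in the log above. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP port seen in the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* Prints e.g. "connect() failed, errno = 111 (Connection refused)" */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}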
00:29:29.398 Read completed with error (sct=0, sc=8)
00:29:29.398 starting I/O failed
(31 further Read/Write completions on this qpair fail with the same status, sct=0, sc=8, each followed by "starting I/O failed")
00:29:29.399 [2024-10-11 12:06:31.976046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.399 [2024-10-11 12:06:31.976685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.399 [2024-10-11 12:06:31.976797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420
00:29:29.399 qpair failed and we were unable to recover it.
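The burst of failed completions above is what the transport error produces (-6 is ENXIO, "No such device or address", matching the CQ transport error line): when the qpair goes down with I/O still outstanding, every queued Read/Write is completed back to the caller in error. In the NVMe base specification, sct=0 selects the Generic Command Status table and sc=0x8 in that table is "Command Aborted due to SQ Deletion", which is consistent with the submission queue being torn down during recovery. A small illustrative decoder for that status pair (not SPDK code; only the generic status values relevant here are listed):

/* Illustrative decoder for the NVMe completion status seen in the log:
 * status code type (sct) and status code (sc). Values follow the NVMe base
 * specification's Generic Command Status table. */
#include <stdio.h>

static const char *decode_generic_sc(int sc)
{
    switch (sc) {
    case 0x00: return "Successful Completion";
    case 0x04: return "Data Transfer Error";
    case 0x07: return "Command Abort Requested";
    case 0x08: return "Command Aborted due to SQ Deletion";
    default:   return "other generic status";
    }
}

int main(void)
{
    int sct = 0, sc = 0x8;   /* as reported: "completed with error (sct=0, sc=8)" */

    if (sct == 0) {
        /* Prints: sct=0 (Generic Command Status), sc=0x8: Command Aborted due to SQ Deletion */
        printf("sct=0 (Generic Command Status), sc=0x%x: %s\n",
               sc, decode_generic_sc(sc));
    } else {
        printf("sct=%d: command-specific, media, or path-related status\n", sct);
    }
    return 0;
}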
00:29:29.399 [2024-10-11 12:06:31.977339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.399 [2024-10-11 12:06:31.977448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420
00:29:29.399 qpair failed and we were unable to recover it.
(the same three-line sequence of connect() failed with errno = 111, sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." repeats, with advancing timestamps, roughly a hundred times between 12:06:31.977 and 12:06:32.016)
00:29:29.401 [2024-10-11 12:06:32.017122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.401 [2024-10-11 12:06:32.017156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.401 qpair failed and we were unable to recover it. 00:29:29.401 [2024-10-11 12:06:32.017551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.401 [2024-10-11 12:06:32.017583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.401 qpair failed and we were unable to recover it. 00:29:29.401 [2024-10-11 12:06:32.017932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.401 [2024-10-11 12:06:32.017964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.401 qpair failed and we were unable to recover it. 00:29:29.401 [2024-10-11 12:06:32.018316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.401 [2024-10-11 12:06:32.018349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.401 qpair failed and we were unable to recover it. 00:29:29.401 [2024-10-11 12:06:32.018709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.401 [2024-10-11 12:06:32.018741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.401 qpair failed and we were unable to recover it. 00:29:29.401 [2024-10-11 12:06:32.019108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.401 [2024-10-11 12:06:32.019141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.401 qpair failed and we were unable to recover it. 00:29:29.401 [2024-10-11 12:06:32.019515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.401 [2024-10-11 12:06:32.019553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.401 qpair failed and we were unable to recover it. 00:29:29.402 [2024-10-11 12:06:32.019922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-10-11 12:06:32.019955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-10-11 12:06:32.020322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-10-11 12:06:32.020355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-10-11 12:06:32.020714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-10-11 12:06:32.020747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 
00:29:29.402 [2024-10-11 12:06:32.021127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-10-11 12:06:32.021161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-10-11 12:06:32.021521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-10-11 12:06:32.021552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-10-11 12:06:32.021885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-10-11 12:06:32.021918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-10-11 12:06:32.022289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-10-11 12:06:32.022323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-10-11 12:06:32.022677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-10-11 12:06:32.022709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-10-11 12:06:32.023073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-10-11 12:06:32.023106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-10-11 12:06:32.023467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-10-11 12:06:32.023499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-10-11 12:06:32.023849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-10-11 12:06:32.023881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-10-11 12:06:32.024242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-10-11 12:06:32.024277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-10-11 12:06:32.024633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-10-11 12:06:32.024665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 
00:29:29.402 [2024-10-11 12:06:32.025015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-10-11 12:06:32.025049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-10-11 12:06:32.025303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-10-11 12:06:32.025337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-10-11 12:06:32.025577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-10-11 12:06:32.025613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-10-11 12:06:32.025996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-10-11 12:06:32.026028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-10-11 12:06:32.026387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-10-11 12:06:32.026421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-10-11 12:06:32.026776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-10-11 12:06:32.026807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-10-11 12:06:32.027164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-10-11 12:06:32.027198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-10-11 12:06:32.027556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-10-11 12:06:32.027588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-10-11 12:06:32.027950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-10-11 12:06:32.027982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-10-11 12:06:32.028357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-10-11 12:06:32.028390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 
00:29:29.402 [2024-10-11 12:06:32.028749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-10-11 12:06:32.028782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-10-11 12:06:32.029144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-10-11 12:06:32.029178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-10-11 12:06:32.029538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-10-11 12:06:32.029571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-10-11 12:06:32.029942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-10-11 12:06:32.029976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-10-11 12:06:32.030336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-10-11 12:06:32.030369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-10-11 12:06:32.030722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-10-11 12:06:32.030753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-10-11 12:06:32.031127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-10-11 12:06:32.031182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-10-11 12:06:32.031562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-10-11 12:06:32.031594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-10-11 12:06:32.031962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-10-11 12:06:32.031996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-10-11 12:06:32.032352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-10-11 12:06:32.032385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 
00:29:29.402 [2024-10-11 12:06:32.032740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-10-11 12:06:32.032770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-10-11 12:06:32.033133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-10-11 12:06:32.033165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-10-11 12:06:32.033534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-10-11 12:06:32.033565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-10-11 12:06:32.033920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-10-11 12:06:32.033951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-10-11 12:06:32.034166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-10-11 12:06:32.034200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-10-11 12:06:32.034589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-10-11 12:06:32.034621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-10-11 12:06:32.034995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-10-11 12:06:32.035025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-10-11 12:06:32.035410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-10-11 12:06:32.035450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.403 [2024-10-11 12:06:32.035802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-10-11 12:06:32.035834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-10-11 12:06:32.036193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-10-11 12:06:32.036227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 
00:29:29.403 [2024-10-11 12:06:32.036587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-10-11 12:06:32.036618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-10-11 12:06:32.036982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-10-11 12:06:32.037014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-10-11 12:06:32.037376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-10-11 12:06:32.037409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-10-11 12:06:32.037644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-10-11 12:06:32.037677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-10-11 12:06:32.038079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-10-11 12:06:32.038113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-10-11 12:06:32.038346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-10-11 12:06:32.038377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-10-11 12:06:32.038751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-10-11 12:06:32.038781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-10-11 12:06:32.039137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-10-11 12:06:32.039170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-10-11 12:06:32.039530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-10-11 12:06:32.039560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-10-11 12:06:32.039921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-10-11 12:06:32.039953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 
00:29:29.403 [2024-10-11 12:06:32.040208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-10-11 12:06:32.040241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-10-11 12:06:32.040640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-10-11 12:06:32.040671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-10-11 12:06:32.041020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-10-11 12:06:32.041051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-10-11 12:06:32.041454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-10-11 12:06:32.041486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-10-11 12:06:32.041845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-10-11 12:06:32.041875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-10-11 12:06:32.042323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-10-11 12:06:32.042356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-10-11 12:06:32.042710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-10-11 12:06:32.042740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-10-11 12:06:32.043104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-10-11 12:06:32.043138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-10-11 12:06:32.043485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-10-11 12:06:32.043518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-10-11 12:06:32.043872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-10-11 12:06:32.043905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 
00:29:29.403 [2024-10-11 12:06:32.044538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-10-11 12:06:32.044579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-10-11 12:06:32.044953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-10-11 12:06:32.044991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-10-11 12:06:32.045352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-10-11 12:06:32.045385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-10-11 12:06:32.045748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-10-11 12:06:32.045781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-10-11 12:06:32.046131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-10-11 12:06:32.046171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-10-11 12:06:32.046528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-10-11 12:06:32.046562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-10-11 12:06:32.046917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-10-11 12:06:32.046950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-10-11 12:06:32.047318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-10-11 12:06:32.047352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-10-11 12:06:32.047746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-10-11 12:06:32.047779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-10-11 12:06:32.048141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-10-11 12:06:32.048174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 
00:29:29.403 [2024-10-11 12:06:32.048536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-10-11 12:06:32.048567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-10-11 12:06:32.048902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-10-11 12:06:32.048933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-10-11 12:06:32.049299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-10-11 12:06:32.049333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-10-11 12:06:32.049690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-10-11 12:06:32.049722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-10-11 12:06:32.050077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-10-11 12:06:32.050109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-10-11 12:06:32.050515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-10-11 12:06:32.050546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-10-11 12:06:32.050898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-10-11 12:06:32.050931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-10-11 12:06:32.051292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-10-11 12:06:32.051325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-10-11 12:06:32.051718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-10-11 12:06:32.051751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.404 [2024-10-11 12:06:32.052124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-10-11 12:06:32.052158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 
00:29:29.404 [2024-10-11 12:06:32.052534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-10-11 12:06:32.052566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.404 [2024-10-11 12:06:32.052931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-10-11 12:06:32.052962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.404 [2024-10-11 12:06:32.053326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-10-11 12:06:32.053358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.404 [2024-10-11 12:06:32.053719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-10-11 12:06:32.053750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.404 [2024-10-11 12:06:32.054112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-10-11 12:06:32.054145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.404 [2024-10-11 12:06:32.054513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-10-11 12:06:32.054545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.404 [2024-10-11 12:06:32.054901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-10-11 12:06:32.054933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.404 [2024-10-11 12:06:32.055297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-10-11 12:06:32.055329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.404 [2024-10-11 12:06:32.055698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-10-11 12:06:32.055729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.404 [2024-10-11 12:06:32.056084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-10-11 12:06:32.056117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 
00:29:29.404 [2024-10-11 12:06:32.056387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-10-11 12:06:32.056422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.404 [2024-10-11 12:06:32.056809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-10-11 12:06:32.056842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.404 [2024-10-11 12:06:32.057206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-10-11 12:06:32.057240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.404 [2024-10-11 12:06:32.057585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-10-11 12:06:32.057616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.404 [2024-10-11 12:06:32.057972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-10-11 12:06:32.058005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.404 [2024-10-11 12:06:32.058365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-10-11 12:06:32.058398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.404 [2024-10-11 12:06:32.058757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-10-11 12:06:32.058789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.404 [2024-10-11 12:06:32.059150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-10-11 12:06:32.059184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.404 [2024-10-11 12:06:32.059572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-10-11 12:06:32.059604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.404 [2024-10-11 12:06:32.059942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-10-11 12:06:32.059976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 
00:29:29.404 [2024-10-11 12:06:32.060335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-10-11 12:06:32.060369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.404 [2024-10-11 12:06:32.060724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-10-11 12:06:32.060757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.404 [2024-10-11 12:06:32.061105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-10-11 12:06:32.061137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.404 [2024-10-11 12:06:32.061542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-10-11 12:06:32.061574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.404 [2024-10-11 12:06:32.061921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-10-11 12:06:32.061954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.404 [2024-10-11 12:06:32.062323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-10-11 12:06:32.062363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.404 [2024-10-11 12:06:32.062744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-10-11 12:06:32.062776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.404 [2024-10-11 12:06:32.063130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-10-11 12:06:32.063162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.404 [2024-10-11 12:06:32.063536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-10-11 12:06:32.063569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.404 [2024-10-11 12:06:32.063913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-10-11 12:06:32.063947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 
00:29:29.404 [2024-10-11 12:06:32.064316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-10-11 12:06:32.064350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.404 [2024-10-11 12:06:32.064707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-10-11 12:06:32.064740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.404 [2024-10-11 12:06:32.065096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-10-11 12:06:32.065129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.405 [2024-10-11 12:06:32.065516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.405 [2024-10-11 12:06:32.065547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.405 qpair failed and we were unable to recover it. 00:29:29.405 [2024-10-11 12:06:32.065899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.405 [2024-10-11 12:06:32.065931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.405 qpair failed and we were unable to recover it. 00:29:29.405 [2024-10-11 12:06:32.066306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.405 [2024-10-11 12:06:32.066338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.405 qpair failed and we were unable to recover it. 00:29:29.405 [2024-10-11 12:06:32.066672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.405 [2024-10-11 12:06:32.066703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.405 qpair failed and we were unable to recover it. 00:29:29.405 [2024-10-11 12:06:32.067056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.405 [2024-10-11 12:06:32.067112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.405 qpair failed and we were unable to recover it. 00:29:29.405 [2024-10-11 12:06:32.067494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.405 [2024-10-11 12:06:32.067527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.405 qpair failed and we were unable to recover it. 00:29:29.405 [2024-10-11 12:06:32.067919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.405 [2024-10-11 12:06:32.067952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.405 qpair failed and we were unable to recover it. 
00:29:29.405 [2024-10-11 12:06:32.068320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.405 [2024-10-11 12:06:32.068355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.405 qpair failed and we were unable to recover it. 00:29:29.405 [2024-10-11 12:06:32.068724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.405 [2024-10-11 12:06:32.068756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.405 qpair failed and we were unable to recover it. 00:29:29.405 [2024-10-11 12:06:32.069140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.405 [2024-10-11 12:06:32.069175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.405 qpair failed and we were unable to recover it. 00:29:29.405 [2024-10-11 12:06:32.069532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.405 [2024-10-11 12:06:32.069563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.405 qpair failed and we were unable to recover it. 00:29:29.405 [2024-10-11 12:06:32.069925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.405 [2024-10-11 12:06:32.069956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.405 qpair failed and we were unable to recover it. 00:29:29.405 [2024-10-11 12:06:32.070327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.405 [2024-10-11 12:06:32.070360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.405 qpair failed and we were unable to recover it. 00:29:29.405 [2024-10-11 12:06:32.070713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.405 [2024-10-11 12:06:32.070746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.405 qpair failed and we were unable to recover it. 00:29:29.405 [2024-10-11 12:06:32.070992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.405 [2024-10-11 12:06:32.071022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.405 qpair failed and we were unable to recover it. 00:29:29.405 [2024-10-11 12:06:32.071413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.405 [2024-10-11 12:06:32.071445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.405 qpair failed and we were unable to recover it. 00:29:29.405 [2024-10-11 12:06:32.071816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.405 [2024-10-11 12:06:32.071850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.405 qpair failed and we were unable to recover it. 
00:29:29.405 [2024-10-11 12:06:32.072099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.405 [2024-10-11 12:06:32.072133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.405 qpair failed and we were unable to recover it. 00:29:29.405 [2024-10-11 12:06:32.072502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.405 [2024-10-11 12:06:32.072532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.405 qpair failed and we were unable to recover it. 00:29:29.405 [2024-10-11 12:06:32.072914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.405 [2024-10-11 12:06:32.072951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.405 qpair failed and we were unable to recover it. 00:29:29.405 [2024-10-11 12:06:32.073213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.405 [2024-10-11 12:06:32.073245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.405 qpair failed and we were unable to recover it. 00:29:29.405 [2024-10-11 12:06:32.073626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.405 [2024-10-11 12:06:32.073658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.405 qpair failed and we were unable to recover it. 00:29:29.405 [2024-10-11 12:06:32.074061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.405 [2024-10-11 12:06:32.074104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.405 qpair failed and we were unable to recover it. 00:29:29.405 [2024-10-11 12:06:32.074496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.405 [2024-10-11 12:06:32.074528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.405 qpair failed and we were unable to recover it. 00:29:29.405 [2024-10-11 12:06:32.074942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.405 [2024-10-11 12:06:32.074973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.405 qpair failed and we were unable to recover it. 00:29:29.405 [2024-10-11 12:06:32.075394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.405 [2024-10-11 12:06:32.075428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.405 qpair failed and we were unable to recover it. 00:29:29.405 [2024-10-11 12:06:32.075778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.405 [2024-10-11 12:06:32.075809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.405 qpair failed and we were unable to recover it. 
00:29:29.679 [2024-10-11 12:06:32.076163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.679 [2024-10-11 12:06:32.076201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.679 qpair failed and we were unable to recover it. 00:29:29.679 [2024-10-11 12:06:32.076565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.679 [2024-10-11 12:06:32.076599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.679 qpair failed and we were unable to recover it. 00:29:29.679 [2024-10-11 12:06:32.076961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.679 [2024-10-11 12:06:32.076993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.679 qpair failed and we were unable to recover it. 00:29:29.679 [2024-10-11 12:06:32.077352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.679 [2024-10-11 12:06:32.077386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.679 qpair failed and we were unable to recover it. 00:29:29.679 [2024-10-11 12:06:32.077738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.679 [2024-10-11 12:06:32.077771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.679 qpair failed and we were unable to recover it. 00:29:29.679 [2024-10-11 12:06:32.077980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.679 [2024-10-11 12:06:32.078013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.679 qpair failed and we were unable to recover it. 00:29:29.679 [2024-10-11 12:06:32.078386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.679 [2024-10-11 12:06:32.078420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.679 qpair failed and we were unable to recover it. 00:29:29.679 [2024-10-11 12:06:32.078781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.679 [2024-10-11 12:06:32.078815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.679 qpair failed and we were unable to recover it. 00:29:29.679 [2024-10-11 12:06:32.079171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.679 [2024-10-11 12:06:32.079205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.679 qpair failed and we were unable to recover it. 00:29:29.679 [2024-10-11 12:06:32.079560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.679 [2024-10-11 12:06:32.079593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.679 qpair failed and we were unable to recover it. 
00:29:29.679 [2024-10-11 12:06:32.079850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.679 [2024-10-11 12:06:32.079882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.679 qpair failed and we were unable to recover it. 00:29:29.679 [2024-10-11 12:06:32.080140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.679 [2024-10-11 12:06:32.080172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.679 qpair failed and we were unable to recover it. 00:29:29.679 [2024-10-11 12:06:32.080546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.679 [2024-10-11 12:06:32.080578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.679 qpair failed and we were unable to recover it. 00:29:29.680 [2024-10-11 12:06:32.080935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.680 [2024-10-11 12:06:32.080967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.680 qpair failed and we were unable to recover it. 00:29:29.680 [2024-10-11 12:06:32.081205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.680 [2024-10-11 12:06:32.081242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.680 qpair failed and we were unable to recover it. 00:29:29.680 [2024-10-11 12:06:32.081621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.680 [2024-10-11 12:06:32.081654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.680 qpair failed and we were unable to recover it. 00:29:29.680 [2024-10-11 12:06:32.082013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.680 [2024-10-11 12:06:32.082047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.680 qpair failed and we were unable to recover it. 00:29:29.680 [2024-10-11 12:06:32.082397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.680 [2024-10-11 12:06:32.082430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.680 qpair failed and we were unable to recover it. 00:29:29.680 [2024-10-11 12:06:32.082794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.680 [2024-10-11 12:06:32.082826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.680 qpair failed and we were unable to recover it. 00:29:29.680 [2024-10-11 12:06:32.083105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.680 [2024-10-11 12:06:32.083138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.680 qpair failed and we were unable to recover it. 
00:29:29.680 [2024-10-11 12:06:32.083397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.680 [2024-10-11 12:06:32.083430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.680 qpair failed and we were unable to recover it. 00:29:29.680 [2024-10-11 12:06:32.083775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.680 [2024-10-11 12:06:32.083807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.680 qpair failed and we were unable to recover it. 00:29:29.680 [2024-10-11 12:06:32.084166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.680 [2024-10-11 12:06:32.084201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.680 qpair failed and we were unable to recover it. 00:29:29.680 [2024-10-11 12:06:32.084444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.680 [2024-10-11 12:06:32.084475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.680 qpair failed and we were unable to recover it. 00:29:29.680 [2024-10-11 12:06:32.084828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.680 [2024-10-11 12:06:32.084861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.680 qpair failed and we were unable to recover it. 00:29:29.680 [2024-10-11 12:06:32.085224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.680 [2024-10-11 12:06:32.085256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.680 qpair failed and we were unable to recover it. 00:29:29.680 [2024-10-11 12:06:32.085614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.680 [2024-10-11 12:06:32.085647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.680 qpair failed and we were unable to recover it. 00:29:29.680 [2024-10-11 12:06:32.085986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.680 [2024-10-11 12:06:32.086018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.680 qpair failed and we were unable to recover it. 00:29:29.680 [2024-10-11 12:06:32.086381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.680 [2024-10-11 12:06:32.086414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.680 qpair failed and we were unable to recover it. 00:29:29.680 [2024-10-11 12:06:32.086771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.680 [2024-10-11 12:06:32.086811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.680 qpair failed and we were unable to recover it. 
00:29:29.680 [2024-10-11 12:06:32.087165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.680 [2024-10-11 12:06:32.087199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.680 qpair failed and we were unable to recover it. 00:29:29.680 [2024-10-11 12:06:32.087559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.680 [2024-10-11 12:06:32.087592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.680 qpair failed and we were unable to recover it. 00:29:29.680 [2024-10-11 12:06:32.087952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.680 [2024-10-11 12:06:32.087984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.680 qpair failed and we were unable to recover it. 00:29:29.680 [2024-10-11 12:06:32.088339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.680 [2024-10-11 12:06:32.088378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.680 qpair failed and we were unable to recover it. 00:29:29.680 [2024-10-11 12:06:32.088732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.680 [2024-10-11 12:06:32.088766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.680 qpair failed and we were unable to recover it. 00:29:29.680 [2024-10-11 12:06:32.089131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.680 [2024-10-11 12:06:32.089165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.680 qpair failed and we were unable to recover it. 00:29:29.680 [2024-10-11 12:06:32.089522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.680 [2024-10-11 12:06:32.089554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.680 qpair failed and we were unable to recover it. 00:29:29.680 [2024-10-11 12:06:32.089912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.680 [2024-10-11 12:06:32.089942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.680 qpair failed and we were unable to recover it. 00:29:29.680 [2024-10-11 12:06:32.090312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.680 [2024-10-11 12:06:32.090345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.680 qpair failed and we were unable to recover it. 00:29:29.680 [2024-10-11 12:06:32.090698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.680 [2024-10-11 12:06:32.090730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.680 qpair failed and we were unable to recover it. 
00:29:29.680 [2024-10-11 12:06:32.091086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.680 [2024-10-11 12:06:32.091119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.680 qpair failed and we were unable to recover it. 00:29:29.680 [2024-10-11 12:06:32.091484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.680 [2024-10-11 12:06:32.091516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.680 qpair failed and we were unable to recover it. 00:29:29.680 [2024-10-11 12:06:32.091866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.680 [2024-10-11 12:06:32.091899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.680 qpair failed and we were unable to recover it. 00:29:29.680 [2024-10-11 12:06:32.092130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.680 [2024-10-11 12:06:32.092162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.680 qpair failed and we were unable to recover it. 00:29:29.680 [2024-10-11 12:06:32.092530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.680 [2024-10-11 12:06:32.092562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.680 qpair failed and we were unable to recover it. 00:29:29.680 [2024-10-11 12:06:32.092929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.680 [2024-10-11 12:06:32.092961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.680 qpair failed and we were unable to recover it. 00:29:29.680 [2024-10-11 12:06:32.093352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.680 [2024-10-11 12:06:32.093385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.680 qpair failed and we were unable to recover it. 00:29:29.680 [2024-10-11 12:06:32.093736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.680 [2024-10-11 12:06:32.093770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.680 qpair failed and we were unable to recover it. 00:29:29.680 [2024-10-11 12:06:32.094129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.680 [2024-10-11 12:06:32.094162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.680 qpair failed and we were unable to recover it. 00:29:29.680 [2024-10-11 12:06:32.094555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.680 [2024-10-11 12:06:32.094588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.680 qpair failed and we were unable to recover it. 
00:29:29.680 [2024-10-11 12:06:32.094938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.680 [2024-10-11 12:06:32.094971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.680 qpair failed and we were unable to recover it. 00:29:29.680 [2024-10-11 12:06:32.095334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.680 [2024-10-11 12:06:32.095368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.681 qpair failed and we were unable to recover it. 00:29:29.681 [2024-10-11 12:06:32.095723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.681 [2024-10-11 12:06:32.095755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.681 qpair failed and we were unable to recover it. 00:29:29.681 [2024-10-11 12:06:32.096089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.681 [2024-10-11 12:06:32.096123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.681 qpair failed and we were unable to recover it. 00:29:29.681 [2024-10-11 12:06:32.096479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.681 [2024-10-11 12:06:32.096512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.681 qpair failed and we were unable to recover it. 00:29:29.681 [2024-10-11 12:06:32.096950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.681 [2024-10-11 12:06:32.096982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.681 qpair failed and we were unable to recover it. 00:29:29.681 [2024-10-11 12:06:32.097327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.681 [2024-10-11 12:06:32.097361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.681 qpair failed and we were unable to recover it. 00:29:29.681 [2024-10-11 12:06:32.097716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.681 [2024-10-11 12:06:32.097749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.681 qpair failed and we were unable to recover it. 00:29:29.681 [2024-10-11 12:06:32.098108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.681 [2024-10-11 12:06:32.098141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.681 qpair failed and we were unable to recover it. 00:29:29.681 [2024-10-11 12:06:32.098517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.681 [2024-10-11 12:06:32.098549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.681 qpair failed and we were unable to recover it. 
00:29:29.681 [2024-10-11 12:06:32.098914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.681 [2024-10-11 12:06:32.098954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.681 qpair failed and we were unable to recover it. 00:29:29.681 [2024-10-11 12:06:32.099295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.681 [2024-10-11 12:06:32.099331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.681 qpair failed and we were unable to recover it. 00:29:29.681 [2024-10-11 12:06:32.099683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.681 [2024-10-11 12:06:32.099714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.681 qpair failed and we were unable to recover it. 00:29:29.681 [2024-10-11 12:06:32.100080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.681 [2024-10-11 12:06:32.100114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.681 qpair failed and we were unable to recover it. 00:29:29.681 [2024-10-11 12:06:32.100464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.681 [2024-10-11 12:06:32.100495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.681 qpair failed and we were unable to recover it. 00:29:29.681 [2024-10-11 12:06:32.100865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.681 [2024-10-11 12:06:32.100898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.681 qpair failed and we were unable to recover it. 00:29:29.681 [2024-10-11 12:06:32.101324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.681 [2024-10-11 12:06:32.101358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.681 qpair failed and we were unable to recover it. 00:29:29.681 [2024-10-11 12:06:32.101744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.681 [2024-10-11 12:06:32.101777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.681 qpair failed and we were unable to recover it. 00:29:29.681 [2024-10-11 12:06:32.102124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.681 [2024-10-11 12:06:32.102159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.681 qpair failed and we were unable to recover it. 00:29:29.681 [2024-10-11 12:06:32.102536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.681 [2024-10-11 12:06:32.102569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.681 qpair failed and we were unable to recover it. 
00:29:29.681 [2024-10-11 12:06:32.102920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.681 [2024-10-11 12:06:32.102953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.681 qpair failed and we were unable to recover it. 00:29:29.681 [2024-10-11 12:06:32.103320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.681 [2024-10-11 12:06:32.103352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.681 qpair failed and we were unable to recover it. 00:29:29.681 [2024-10-11 12:06:32.103722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.681 [2024-10-11 12:06:32.103753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.681 qpair failed and we were unable to recover it. 00:29:29.681 [2024-10-11 12:06:32.104006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.681 [2024-10-11 12:06:32.104037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.681 qpair failed and we were unable to recover it. 00:29:29.681 [2024-10-11 12:06:32.104435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.681 [2024-10-11 12:06:32.104468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.681 qpair failed and we were unable to recover it. 00:29:29.681 [2024-10-11 12:06:32.104828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.681 [2024-10-11 12:06:32.104859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.681 qpair failed and we were unable to recover it. 00:29:29.681 [2024-10-11 12:06:32.105218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.681 [2024-10-11 12:06:32.105252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.681 qpair failed and we were unable to recover it. 00:29:29.681 [2024-10-11 12:06:32.105630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.681 [2024-10-11 12:06:32.105663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.681 qpair failed and we were unable to recover it. 00:29:29.681 [2024-10-11 12:06:32.107682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.681 [2024-10-11 12:06:32.107745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.681 qpair failed and we were unable to recover it. 00:29:29.681 [2024-10-11 12:06:32.108144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.681 [2024-10-11 12:06:32.108180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.681 qpair failed and we were unable to recover it. 
00:29:29.681 [2024-10-11 12:06:32.108432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.681 [2024-10-11 12:06:32.108469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.681 qpair failed and we were unable to recover it. 00:29:29.681 [2024-10-11 12:06:32.108852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.681 [2024-10-11 12:06:32.108887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.681 qpair failed and we were unable to recover it. 00:29:29.681 [2024-10-11 12:06:32.109254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.681 [2024-10-11 12:06:32.109288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.681 qpair failed and we were unable to recover it. 00:29:29.681 [2024-10-11 12:06:32.109526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.681 [2024-10-11 12:06:32.109562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.681 qpair failed and we were unable to recover it. 00:29:29.681 [2024-10-11 12:06:32.109917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.681 [2024-10-11 12:06:32.109949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.681 qpair failed and we were unable to recover it. 00:29:29.681 [2024-10-11 12:06:32.110371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.681 [2024-10-11 12:06:32.110405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.681 qpair failed and we were unable to recover it. 00:29:29.681 [2024-10-11 12:06:32.110755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.681 [2024-10-11 12:06:32.110787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.681 qpair failed and we were unable to recover it. 00:29:29.681 [2024-10-11 12:06:32.111183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.681 [2024-10-11 12:06:32.111216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.681 qpair failed and we were unable to recover it. 00:29:29.681 [2024-10-11 12:06:32.111599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.681 [2024-10-11 12:06:32.111632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.681 qpair failed and we were unable to recover it. 00:29:29.681 [2024-10-11 12:06:32.111987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.681 [2024-10-11 12:06:32.112025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.681 qpair failed and we were unable to recover it. 
00:29:29.681 [2024-10-11 12:06:32.112435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.681 [2024-10-11 12:06:32.112486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.681 qpair failed and we were unable to recover it. 00:29:29.681 [2024-10-11 12:06:32.112859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.681 [2024-10-11 12:06:32.112913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.681 qpair failed and we were unable to recover it. 00:29:29.681 [2024-10-11 12:06:32.113363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.682 [2024-10-11 12:06:32.113426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.682 qpair failed and we were unable to recover it. 00:29:29.682 [2024-10-11 12:06:32.113835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.682 [2024-10-11 12:06:32.113891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.682 qpair failed and we were unable to recover it. 00:29:29.682 [2024-10-11 12:06:32.114306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.682 [2024-10-11 12:06:32.114366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.682 qpair failed and we were unable to recover it. 00:29:29.682 [2024-10-11 12:06:32.114781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.682 [2024-10-11 12:06:32.114841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.682 qpair failed and we were unable to recover it. 00:29:29.682 [2024-10-11 12:06:32.115262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.682 [2024-10-11 12:06:32.115318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.682 qpair failed and we were unable to recover it. 00:29:29.682 [2024-10-11 12:06:32.115786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.682 [2024-10-11 12:06:32.115839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.682 qpair failed and we were unable to recover it. 00:29:29.682 [2024-10-11 12:06:32.116296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.682 [2024-10-11 12:06:32.116353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.682 qpair failed and we were unable to recover it. 00:29:29.682 [2024-10-11 12:06:32.116761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.682 [2024-10-11 12:06:32.116818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.682 qpair failed and we were unable to recover it. 
00:29:29.682 [2024-10-11 12:06:32.117229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.682 [2024-10-11 12:06:32.117284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.682 qpair failed and we were unable to recover it. 00:29:29.682 [2024-10-11 12:06:32.117690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.682 [2024-10-11 12:06:32.117754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.682 qpair failed and we were unable to recover it. 00:29:29.682 [2024-10-11 12:06:32.118095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.682 [2024-10-11 12:06:32.118147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.682 qpair failed and we were unable to recover it. 00:29:29.682 [2024-10-11 12:06:32.118561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.682 [2024-10-11 12:06:32.118611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.682 qpair failed and we were unable to recover it. 00:29:29.682 [2024-10-11 12:06:32.119003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.682 [2024-10-11 12:06:32.119052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.682 qpair failed and we were unable to recover it. 00:29:29.682 [2024-10-11 12:06:32.119525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.682 [2024-10-11 12:06:32.119575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.682 qpair failed and we were unable to recover it. 00:29:29.682 [2024-10-11 12:06:32.119944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.682 [2024-10-11 12:06:32.119995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.682 qpair failed and we were unable to recover it. 00:29:29.682 [2024-10-11 12:06:32.120409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.682 [2024-10-11 12:06:32.120459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.682 qpair failed and we were unable to recover it. 00:29:29.682 [2024-10-11 12:06:32.120860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.682 [2024-10-11 12:06:32.120911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.682 qpair failed and we were unable to recover it. 00:29:29.682 [2024-10-11 12:06:32.121342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.682 [2024-10-11 12:06:32.121395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.682 qpair failed and we were unable to recover it. 
00:29:29.682 [2024-10-11 12:06:32.121779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.682 [2024-10-11 12:06:32.121828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.682 qpair failed and we were unable to recover it. 00:29:29.682 [2024-10-11 12:06:32.122223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.682 [2024-10-11 12:06:32.122275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.682 qpair failed and we were unable to recover it. 00:29:29.682 [2024-10-11 12:06:32.122676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.682 [2024-10-11 12:06:32.122724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.682 qpair failed and we were unable to recover it. 00:29:29.682 [2024-10-11 12:06:32.123127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.682 [2024-10-11 12:06:32.123178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.682 qpair failed and we were unable to recover it. 00:29:29.682 [2024-10-11 12:06:32.123563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.682 [2024-10-11 12:06:32.123612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.682 qpair failed and we were unable to recover it. 00:29:29.682 [2024-10-11 12:06:32.123994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.682 [2024-10-11 12:06:32.124043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.682 qpair failed and we were unable to recover it. 00:29:29.682 [2024-10-11 12:06:32.124416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.682 [2024-10-11 12:06:32.124466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.682 qpair failed and we were unable to recover it. 00:29:29.682 [2024-10-11 12:06:32.124863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.682 [2024-10-11 12:06:32.124915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.682 qpair failed and we were unable to recover it. 00:29:29.682 [2024-10-11 12:06:32.125308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.682 [2024-10-11 12:06:32.125359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.682 qpair failed and we were unable to recover it. 00:29:29.682 [2024-10-11 12:06:32.125759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.682 [2024-10-11 12:06:32.125809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.682 qpair failed and we were unable to recover it. 
00:29:29.682 [2024-10-11 12:06:32.126218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.682 [2024-10-11 12:06:32.126271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.682 qpair failed and we were unable to recover it. 00:29:29.682 [2024-10-11 12:06:32.126709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.682 [2024-10-11 12:06:32.126758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.682 qpair failed and we were unable to recover it. 00:29:29.682 [2024-10-11 12:06:32.127183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.682 [2024-10-11 12:06:32.127234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.682 qpair failed and we were unable to recover it. 00:29:29.682 [2024-10-11 12:06:32.127593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.682 [2024-10-11 12:06:32.127644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.682 qpair failed and we were unable to recover it. 00:29:29.682 [2024-10-11 12:06:32.128039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.682 [2024-10-11 12:06:32.128100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.682 qpair failed and we were unable to recover it. 00:29:29.682 [2024-10-11 12:06:32.128499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.682 [2024-10-11 12:06:32.128548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.682 qpair failed and we were unable to recover it. 00:29:29.682 [2024-10-11 12:06:32.128945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.682 [2024-10-11 12:06:32.128994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.682 qpair failed and we were unable to recover it. 00:29:29.682 [2024-10-11 12:06:32.129396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.682 [2024-10-11 12:06:32.129446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.682 qpair failed and we were unable to recover it. 00:29:29.682 [2024-10-11 12:06:32.129843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.682 [2024-10-11 12:06:32.129894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.682 qpair failed and we were unable to recover it. 00:29:29.682 [2024-10-11 12:06:32.130296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.682 [2024-10-11 12:06:32.130349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.682 qpair failed and we were unable to recover it. 
00:29:29.682 [2024-10-11 12:06:32.130749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.682 [2024-10-11 12:06:32.130800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.682 qpair failed and we were unable to recover it. 00:29:29.682 [2024-10-11 12:06:32.131209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.682 [2024-10-11 12:06:32.131259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.682 qpair failed and we were unable to recover it. 00:29:29.682 [2024-10-11 12:06:32.131680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.682 [2024-10-11 12:06:32.131730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.682 qpair failed and we were unable to recover it. 00:29:29.683 [2024-10-11 12:06:32.132135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.683 [2024-10-11 12:06:32.132186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.683 qpair failed and we were unable to recover it. 00:29:29.683 [2024-10-11 12:06:32.132546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.683 [2024-10-11 12:06:32.132583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.683 qpair failed and we were unable to recover it. 00:29:29.683 [2024-10-11 12:06:32.132934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.683 [2024-10-11 12:06:32.132968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.683 qpair failed and we were unable to recover it. 00:29:29.683 [2024-10-11 12:06:32.133336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.683 [2024-10-11 12:06:32.133366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.683 qpair failed and we were unable to recover it. 00:29:29.683 [2024-10-11 12:06:32.133741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.683 [2024-10-11 12:06:32.133774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.683 qpair failed and we were unable to recover it. 00:29:29.683 [2024-10-11 12:06:32.134122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.683 [2024-10-11 12:06:32.134156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.683 qpair failed and we were unable to recover it. 00:29:29.683 [2024-10-11 12:06:32.134514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.683 [2024-10-11 12:06:32.134549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.683 qpair failed and we were unable to recover it. 
00:29:29.683 [2024-10-11 12:06:32.134904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.683 [2024-10-11 12:06:32.134937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.683 qpair failed and we were unable to recover it. 00:29:29.683 [2024-10-11 12:06:32.135296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.683 [2024-10-11 12:06:32.135331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.683 qpair failed and we were unable to recover it. 00:29:29.683 [2024-10-11 12:06:32.135692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.683 [2024-10-11 12:06:32.135726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.683 qpair failed and we were unable to recover it. 00:29:29.683 [2024-10-11 12:06:32.136087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.683 [2024-10-11 12:06:32.136122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.683 qpair failed and we were unable to recover it. 00:29:29.683 [2024-10-11 12:06:32.136476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.683 [2024-10-11 12:06:32.136510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.683 qpair failed and we were unable to recover it. 00:29:29.683 [2024-10-11 12:06:32.136876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.683 [2024-10-11 12:06:32.136912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.683 qpair failed and we were unable to recover it. 00:29:29.683 [2024-10-11 12:06:32.137278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.683 [2024-10-11 12:06:32.137313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.683 qpair failed and we were unable to recover it. 00:29:29.683 [2024-10-11 12:06:32.137671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.683 [2024-10-11 12:06:32.137707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.683 qpair failed and we were unable to recover it. 00:29:29.683 [2024-10-11 12:06:32.137947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.683 [2024-10-11 12:06:32.137984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.683 qpair failed and we were unable to recover it. 00:29:29.683 [2024-10-11 12:06:32.138338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.683 [2024-10-11 12:06:32.138376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.683 qpair failed and we were unable to recover it. 
00:29:29.683 [2024-10-11 12:06:32.138730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.683 [2024-10-11 12:06:32.138765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.683 qpair failed and we were unable to recover it. 00:29:29.683 [2024-10-11 12:06:32.139126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.683 [2024-10-11 12:06:32.139162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.683 qpair failed and we were unable to recover it. 00:29:29.683 [2024-10-11 12:06:32.139527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.683 [2024-10-11 12:06:32.139564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.683 qpair failed and we were unable to recover it. 00:29:29.683 [2024-10-11 12:06:32.139916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.683 [2024-10-11 12:06:32.139951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.683 qpair failed and we were unable to recover it. 00:29:29.683 [2024-10-11 12:06:32.140319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.683 [2024-10-11 12:06:32.140353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.683 qpair failed and we were unable to recover it. 00:29:29.683 [2024-10-11 12:06:32.140582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.683 [2024-10-11 12:06:32.140615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.683 qpair failed and we were unable to recover it. 00:29:29.683 [2024-10-11 12:06:32.140860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.683 [2024-10-11 12:06:32.140894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.683 qpair failed and we were unable to recover it. 00:29:29.683 [2024-10-11 12:06:32.141192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.683 [2024-10-11 12:06:32.141228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.683 qpair failed and we were unable to recover it. 00:29:29.683 [2024-10-11 12:06:32.141568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.683 [2024-10-11 12:06:32.141602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.683 qpair failed and we were unable to recover it. 00:29:29.683 [2024-10-11 12:06:32.141826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.683 [2024-10-11 12:06:32.141858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.683 qpair failed and we were unable to recover it. 
00:29:29.683 [2024-10-11 12:06:32.142228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.683 [2024-10-11 12:06:32.142263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.683 qpair failed and we were unable to recover it. 00:29:29.683 [2024-10-11 12:06:32.142616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.683 [2024-10-11 12:06:32.142640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.683 qpair failed and we were unable to recover it. 00:29:29.683 [2024-10-11 12:06:32.142971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.683 [2024-10-11 12:06:32.142995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.683 qpair failed and we were unable to recover it. 00:29:29.683 [2024-10-11 12:06:32.143337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.683 [2024-10-11 12:06:32.143361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.683 qpair failed and we were unable to recover it. 00:29:29.683 [2024-10-11 12:06:32.143699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.683 [2024-10-11 12:06:32.143725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.683 qpair failed and we were unable to recover it. 00:29:29.683 [2024-10-11 12:06:32.144081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.683 [2024-10-11 12:06:32.144106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.683 qpair failed and we were unable to recover it. 00:29:29.683 [2024-10-11 12:06:32.144465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.683 [2024-10-11 12:06:32.144490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.683 qpair failed and we were unable to recover it. 00:29:29.683 [2024-10-11 12:06:32.144848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.683 [2024-10-11 12:06:32.144872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.683 qpair failed and we were unable to recover it. 00:29:29.683 [2024-10-11 12:06:32.145211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.683 [2024-10-11 12:06:32.145236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.683 qpair failed and we were unable to recover it. 00:29:29.683 [2024-10-11 12:06:32.145593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.683 [2024-10-11 12:06:32.145623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.683 qpair failed and we were unable to recover it. 
00:29:29.683 [2024-10-11 12:06:32.145994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.683 [2024-10-11 12:06:32.146020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.683 qpair failed and we were unable to recover it. 00:29:29.683 [2024-10-11 12:06:32.146394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.683 [2024-10-11 12:06:32.146421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.684 qpair failed and we were unable to recover it. 00:29:29.684 [2024-10-11 12:06:32.146629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.684 [2024-10-11 12:06:32.146656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.684 qpair failed and we were unable to recover it. 00:29:29.684 [2024-10-11 12:06:32.147007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.684 [2024-10-11 12:06:32.147033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.684 qpair failed and we were unable to recover it. 00:29:29.684 [2024-10-11 12:06:32.147390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.684 [2024-10-11 12:06:32.147416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.684 qpair failed and we were unable to recover it. 00:29:29.684 [2024-10-11 12:06:32.147767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.684 [2024-10-11 12:06:32.147794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.684 qpair failed and we were unable to recover it. 00:29:29.684 [2024-10-11 12:06:32.148149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.684 [2024-10-11 12:06:32.148175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.684 qpair failed and we were unable to recover it. 00:29:29.684 [2024-10-11 12:06:32.148553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.684 [2024-10-11 12:06:32.148579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.684 qpair failed and we were unable to recover it. 00:29:29.684 [2024-10-11 12:06:32.148934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.684 [2024-10-11 12:06:32.148959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.684 qpair failed and we were unable to recover it. 00:29:29.684 [2024-10-11 12:06:32.149293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.684 [2024-10-11 12:06:32.149317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.684 qpair failed and we were unable to recover it. 
00:29:29.684 [2024-10-11 12:06:32.149727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.684 [2024-10-11 12:06:32.149751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.684 qpair failed and we were unable to recover it. 00:29:29.684 [2024-10-11 12:06:32.150117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.684 [2024-10-11 12:06:32.150144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.684 qpair failed and we were unable to recover it. 00:29:29.684 [2024-10-11 12:06:32.150524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.684 [2024-10-11 12:06:32.150550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.684 qpair failed and we were unable to recover it. 00:29:29.684 [2024-10-11 12:06:32.150977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.684 [2024-10-11 12:06:32.151002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.684 qpair failed and we were unable to recover it. 00:29:29.684 [2024-10-11 12:06:32.151365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.684 [2024-10-11 12:06:32.151389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.684 qpair failed and we were unable to recover it. 00:29:29.684 [2024-10-11 12:06:32.151734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.684 [2024-10-11 12:06:32.151759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.684 qpair failed and we were unable to recover it. 00:29:29.684 [2024-10-11 12:06:32.151987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.684 [2024-10-11 12:06:32.152011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.684 qpair failed and we were unable to recover it. 00:29:29.684 [2024-10-11 12:06:32.152353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.684 [2024-10-11 12:06:32.152380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.684 qpair failed and we were unable to recover it. 00:29:29.684 [2024-10-11 12:06:32.152676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.684 [2024-10-11 12:06:32.152700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.684 qpair failed and we were unable to recover it. 00:29:29.684 [2024-10-11 12:06:32.153094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.684 [2024-10-11 12:06:32.153119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.684 qpair failed and we were unable to recover it. 
00:29:29.684 [2024-10-11 12:06:32.153474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.684 [2024-10-11 12:06:32.153494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.684 qpair failed and we were unable to recover it. 00:29:29.684 [2024-10-11 12:06:32.153818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.684 [2024-10-11 12:06:32.153837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.684 qpair failed and we were unable to recover it. 00:29:29.684 [2024-10-11 12:06:32.154174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.684 [2024-10-11 12:06:32.154193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.684 qpair failed and we were unable to recover it. 00:29:29.684 [2024-10-11 12:06:32.154533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.684 [2024-10-11 12:06:32.154553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.684 qpair failed and we were unable to recover it. 00:29:29.684 [2024-10-11 12:06:32.154890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.684 [2024-10-11 12:06:32.154909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.684 qpair failed and we were unable to recover it. 00:29:29.684 [2024-10-11 12:06:32.155235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.684 [2024-10-11 12:06:32.155255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.684 qpair failed and we were unable to recover it. 00:29:29.684 [2024-10-11 12:06:32.155598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.684 [2024-10-11 12:06:32.155618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.684 qpair failed and we were unable to recover it. 00:29:29.684 [2024-10-11 12:06:32.156004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.684 [2024-10-11 12:06:32.156024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.684 qpair failed and we were unable to recover it. 00:29:29.684 [2024-10-11 12:06:32.156271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.684 [2024-10-11 12:06:32.156291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.684 qpair failed and we were unable to recover it. 00:29:29.684 [2024-10-11 12:06:32.156649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.684 [2024-10-11 12:06:32.156669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.684 qpair failed and we were unable to recover it. 
00:29:29.684 [2024-10-11 12:06:32.157010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.684 [2024-10-11 12:06:32.157029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.684 qpair failed and we were unable to recover it. 00:29:29.684 [2024-10-11 12:06:32.157367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.684 [2024-10-11 12:06:32.157388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.684 qpair failed and we were unable to recover it. 00:29:29.684 [2024-10-11 12:06:32.157731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.684 [2024-10-11 12:06:32.157750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.684 qpair failed and we were unable to recover it. 00:29:29.684 [2024-10-11 12:06:32.158135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.684 [2024-10-11 12:06:32.158156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.684 qpair failed and we were unable to recover it. 00:29:29.684 [2024-10-11 12:06:32.158388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.684 [2024-10-11 12:06:32.158408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.684 qpair failed and we were unable to recover it. 00:29:29.684 [2024-10-11 12:06:32.158822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.684 [2024-10-11 12:06:32.158843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.684 qpair failed and we were unable to recover it. 00:29:29.684 [2024-10-11 12:06:32.159140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.684 [2024-10-11 12:06:32.159161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.684 qpair failed and we were unable to recover it. 00:29:29.684 [2024-10-11 12:06:32.159501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.684 [2024-10-11 12:06:32.159522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.684 qpair failed and we were unable to recover it. 00:29:29.684 [2024-10-11 12:06:32.159862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.685 [2024-10-11 12:06:32.159883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.685 qpair failed and we were unable to recover it. 00:29:29.685 [2024-10-11 12:06:32.160117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.685 [2024-10-11 12:06:32.160136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.685 qpair failed and we were unable to recover it. 
00:29:29.685 [2024-10-11 12:06:32.160506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.685 [2024-10-11 12:06:32.160526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.685 qpair failed and we were unable to recover it. 00:29:29.685 [2024-10-11 12:06:32.160870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.685 [2024-10-11 12:06:32.160888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.685 qpair failed and we were unable to recover it. 00:29:29.685 [2024-10-11 12:06:32.161193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.685 [2024-10-11 12:06:32.161238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.685 qpair failed and we were unable to recover it. 00:29:29.685 [2024-10-11 12:06:32.161590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.685 [2024-10-11 12:06:32.161632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.685 qpair failed and we were unable to recover it. 00:29:29.685 [2024-10-11 12:06:32.162014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.685 [2024-10-11 12:06:32.162060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.685 qpair failed and we were unable to recover it. 00:29:29.685 [2024-10-11 12:06:32.162440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.685 [2024-10-11 12:06:32.162490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.685 qpair failed and we were unable to recover it. 00:29:29.685 [2024-10-11 12:06:32.162718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.685 [2024-10-11 12:06:32.162758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.685 qpair failed and we were unable to recover it. 00:29:29.685 [2024-10-11 12:06:32.163102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.685 [2024-10-11 12:06:32.163143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.685 qpair failed and we were unable to recover it. 00:29:29.685 [2024-10-11 12:06:32.163383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.685 [2024-10-11 12:06:32.163401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.685 qpair failed and we were unable to recover it. 00:29:29.685 [2024-10-11 12:06:32.163731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.685 [2024-10-11 12:06:32.163746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.685 qpair failed and we were unable to recover it. 
00:29:29.685 [2024-10-11 12:06:32.164088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.685 [2024-10-11 12:06:32.164102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.685 qpair failed and we were unable to recover it. 00:29:29.685 [2024-10-11 12:06:32.164286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.685 [2024-10-11 12:06:32.164300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.685 qpair failed and we were unable to recover it. 00:29:29.685 [2024-10-11 12:06:32.164639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.685 [2024-10-11 12:06:32.164652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.685 qpair failed and we were unable to recover it. 00:29:29.685 [2024-10-11 12:06:32.165000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.685 [2024-10-11 12:06:32.165012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.685 qpair failed and we were unable to recover it. 00:29:29.685 [2024-10-11 12:06:32.165342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.685 [2024-10-11 12:06:32.165355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.685 qpair failed and we were unable to recover it. 00:29:29.685 [2024-10-11 12:06:32.165738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.685 [2024-10-11 12:06:32.165752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.685 qpair failed and we were unable to recover it. 00:29:29.685 [2024-10-11 12:06:32.166077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.685 [2024-10-11 12:06:32.166093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.685 qpair failed and we were unable to recover it. 00:29:29.685 [2024-10-11 12:06:32.166434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.685 [2024-10-11 12:06:32.166448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.685 qpair failed and we were unable to recover it. 00:29:29.685 [2024-10-11 12:06:32.166789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.685 [2024-10-11 12:06:32.166801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.685 qpair failed and we were unable to recover it. 00:29:29.685 [2024-10-11 12:06:32.167122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.685 [2024-10-11 12:06:32.167136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.685 qpair failed and we were unable to recover it. 
00:29:29.685 [2024-10-11 12:06:32.167476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.685 [2024-10-11 12:06:32.167490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.685 qpair failed and we were unable to recover it. 00:29:29.685 [2024-10-11 12:06:32.167837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.685 [2024-10-11 12:06:32.167851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.685 qpair failed and we were unable to recover it. 00:29:29.685 [2024-10-11 12:06:32.168099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.685 [2024-10-11 12:06:32.168115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.685 qpair failed and we were unable to recover it. 00:29:29.685 [2024-10-11 12:06:32.168426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.685 [2024-10-11 12:06:32.168441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.685 qpair failed and we were unable to recover it. 00:29:29.685 [2024-10-11 12:06:32.168790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.685 [2024-10-11 12:06:32.168805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.685 qpair failed and we were unable to recover it. 00:29:29.685 [2024-10-11 12:06:32.169150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.685 [2024-10-11 12:06:32.169163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.685 qpair failed and we were unable to recover it. 00:29:29.685 [2024-10-11 12:06:32.169484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.685 [2024-10-11 12:06:32.169498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.685 qpair failed and we were unable to recover it. 00:29:29.685 [2024-10-11 12:06:32.169843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.685 [2024-10-11 12:06:32.169859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.685 qpair failed and we were unable to recover it. 00:29:29.685 [2024-10-11 12:06:32.170208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.685 [2024-10-11 12:06:32.170223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.685 qpair failed and we were unable to recover it. 00:29:29.685 [2024-10-11 12:06:32.170565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.685 [2024-10-11 12:06:32.170578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.685 qpair failed and we were unable to recover it. 
00:29:29.685 [2024-10-11 12:06:32.170899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.685 [2024-10-11 12:06:32.170912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.685 qpair failed and we were unable to recover it. 00:29:29.685 [2024-10-11 12:06:32.171270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.685 [2024-10-11 12:06:32.171284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.685 qpair failed and we were unable to recover it. 00:29:29.685 [2024-10-11 12:06:32.171617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.685 [2024-10-11 12:06:32.171632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.685 qpair failed and we were unable to recover it. 00:29:29.685 [2024-10-11 12:06:32.171980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.685 [2024-10-11 12:06:32.171993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.685 qpair failed and we were unable to recover it. 00:29:29.685 [2024-10-11 12:06:32.172320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.685 [2024-10-11 12:06:32.172335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.685 qpair failed and we were unable to recover it. 00:29:29.685 [2024-10-11 12:06:32.172573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.685 [2024-10-11 12:06:32.172587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.685 qpair failed and we were unable to recover it. 00:29:29.685 [2024-10-11 12:06:32.172770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.685 [2024-10-11 12:06:32.172783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.685 qpair failed and we were unable to recover it. 00:29:29.685 [2024-10-11 12:06:32.173128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.685 [2024-10-11 12:06:32.173143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.685 qpair failed and we were unable to recover it. 00:29:29.685 [2024-10-11 12:06:32.173430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.685 [2024-10-11 12:06:32.173444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.685 qpair failed and we were unable to recover it. 00:29:29.685 [2024-10-11 12:06:32.173792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.686 [2024-10-11 12:06:32.173804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.686 qpair failed and we were unable to recover it. 
00:29:29.686 [2024-10-11 12:06:32.174131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.686 [2024-10-11 12:06:32.174145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.686 qpair failed and we were unable to recover it. 00:29:29.686 [2024-10-11 12:06:32.174366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.686 [2024-10-11 12:06:32.174379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.686 qpair failed and we were unable to recover it. 00:29:29.686 [2024-10-11 12:06:32.174697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.686 [2024-10-11 12:06:32.174712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.686 qpair failed and we were unable to recover it. 00:29:29.686 [2024-10-11 12:06:32.175061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.686 [2024-10-11 12:06:32.175082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.686 qpair failed and we were unable to recover it. 00:29:29.686 [2024-10-11 12:06:32.175425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.686 [2024-10-11 12:06:32.175439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.686 qpair failed and we were unable to recover it. 00:29:29.686 [2024-10-11 12:06:32.175784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.686 [2024-10-11 12:06:32.175796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.686 qpair failed and we were unable to recover it. 00:29:29.686 [2024-10-11 12:06:32.176189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.686 [2024-10-11 12:06:32.176203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.686 qpair failed and we were unable to recover it. 00:29:29.686 [2024-10-11 12:06:32.176529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.686 [2024-10-11 12:06:32.176543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.686 qpair failed and we were unable to recover it. 00:29:29.686 [2024-10-11 12:06:32.176885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.686 [2024-10-11 12:06:32.176898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.686 qpair failed and we were unable to recover it. 00:29:29.686 [2024-10-11 12:06:32.177248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.686 [2024-10-11 12:06:32.177261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.686 qpair failed and we were unable to recover it. 
00:29:29.686 [2024-10-11 12:06:32.177604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.686 [2024-10-11 12:06:32.177618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.686 qpair failed and we were unable to recover it. 00:29:29.686 [2024-10-11 12:06:32.177983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.686 [2024-10-11 12:06:32.177995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.686 qpair failed and we were unable to recover it. 00:29:29.686 [2024-10-11 12:06:32.178347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.686 [2024-10-11 12:06:32.178360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.686 qpair failed and we were unable to recover it. 00:29:29.686 [2024-10-11 12:06:32.178687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.686 [2024-10-11 12:06:32.178700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.686 qpair failed and we were unable to recover it. 00:29:29.686 [2024-10-11 12:06:32.179045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.686 [2024-10-11 12:06:32.179058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.686 qpair failed and we were unable to recover it. 00:29:29.686 [2024-10-11 12:06:32.179415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.686 [2024-10-11 12:06:32.179428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.686 qpair failed and we were unable to recover it. 00:29:29.686 [2024-10-11 12:06:32.179789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.686 [2024-10-11 12:06:32.179803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.686 qpair failed and we were unable to recover it. 00:29:29.686 [2024-10-11 12:06:32.180134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.686 [2024-10-11 12:06:32.180149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.686 qpair failed and we were unable to recover it. 00:29:29.686 [2024-10-11 12:06:32.180495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.686 [2024-10-11 12:06:32.180508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.686 qpair failed and we were unable to recover it. 00:29:29.686 [2024-10-11 12:06:32.180859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.686 [2024-10-11 12:06:32.180915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.686 qpair failed and we were unable to recover it. 
00:29:29.686 [2024-10-11 12:06:32.181302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.686 [2024-10-11 12:06:32.181351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.686 qpair failed and we were unable to recover it. 00:29:29.686 [2024-10-11 12:06:32.181732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.686 [2024-10-11 12:06:32.181786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.686 qpair failed and we were unable to recover it. 00:29:29.686 [2024-10-11 12:06:32.182167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.686 [2024-10-11 12:06:32.182199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.686 qpair failed and we were unable to recover it. 00:29:29.686 [2024-10-11 12:06:32.182560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.686 [2024-10-11 12:06:32.182573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.686 qpair failed and we were unable to recover it. 00:29:29.686 [2024-10-11 12:06:32.182921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.686 [2024-10-11 12:06:32.182935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.686 qpair failed and we were unable to recover it. 00:29:29.686 [2024-10-11 12:06:32.183288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.686 [2024-10-11 12:06:32.183301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.686 qpair failed and we were unable to recover it. 00:29:29.686 [2024-10-11 12:06:32.183644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.686 [2024-10-11 12:06:32.183657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.686 qpair failed and we were unable to recover it. 00:29:29.686 [2024-10-11 12:06:32.183984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.686 [2024-10-11 12:06:32.183996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.686 qpair failed and we were unable to recover it. 00:29:29.686 [2024-10-11 12:06:32.184307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.686 [2024-10-11 12:06:32.184326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.686 qpair failed and we were unable to recover it. 00:29:29.686 [2024-10-11 12:06:32.184667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.686 [2024-10-11 12:06:32.184680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.686 qpair failed and we were unable to recover it. 
00:29:29.686 [2024-10-11 12:06:32.185025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.686 [2024-10-11 12:06:32.185048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.686 qpair failed and we were unable to recover it. 00:29:29.686 [2024-10-11 12:06:32.185404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.686 [2024-10-11 12:06:32.185419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.686 qpair failed and we were unable to recover it. 00:29:29.686 [2024-10-11 12:06:32.185762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.686 [2024-10-11 12:06:32.185778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.686 qpair failed and we were unable to recover it. 00:29:29.686 [2024-10-11 12:06:32.186124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.686 [2024-10-11 12:06:32.186140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.686 qpair failed and we were unable to recover it. 00:29:29.686 [2024-10-11 12:06:32.186482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.686 [2024-10-11 12:06:32.186497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.686 qpair failed and we were unable to recover it. 00:29:29.686 [2024-10-11 12:06:32.186843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.686 [2024-10-11 12:06:32.186858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.686 qpair failed and we were unable to recover it. 00:29:29.686 [2024-10-11 12:06:32.187202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.686 [2024-10-11 12:06:32.187218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.686 qpair failed and we were unable to recover it. 00:29:29.686 [2024-10-11 12:06:32.187551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.686 [2024-10-11 12:06:32.187566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.686 qpair failed and we were unable to recover it. 00:29:29.686 [2024-10-11 12:06:32.187911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.686 [2024-10-11 12:06:32.187927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.686 qpair failed and we were unable to recover it. 00:29:29.686 [2024-10-11 12:06:32.188271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.686 [2024-10-11 12:06:32.188289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.686 qpair failed and we were unable to recover it. 
00:29:29.687 [2024-10-11 12:06:32.188519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-10-11 12:06:32.188537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-10-11 12:06:32.188866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-10-11 12:06:32.188883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-10-11 12:06:32.189233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-10-11 12:06:32.189250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-10-11 12:06:32.189590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-10-11 12:06:32.189605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-10-11 12:06:32.189937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-10-11 12:06:32.189953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-10-11 12:06:32.190214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-10-11 12:06:32.190247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-10-11 12:06:32.190623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-10-11 12:06:32.190670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-10-11 12:06:32.191054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-10-11 12:06:32.191112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-10-11 12:06:32.191510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-10-11 12:06:32.191537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-10-11 12:06:32.191886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-10-11 12:06:32.191906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 
00:29:29.687 [2024-10-11 12:06:32.192244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-10-11 12:06:32.192261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-10-11 12:06:32.192614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-10-11 12:06:32.192629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-10-11 12:06:32.192933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-10-11 12:06:32.192949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-10-11 12:06:32.193343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-10-11 12:06:32.193360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-10-11 12:06:32.193704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-10-11 12:06:32.193720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-10-11 12:06:32.194098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-10-11 12:06:32.194120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-10-11 12:06:32.194475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-10-11 12:06:32.194490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-10-11 12:06:32.194742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-10-11 12:06:32.194759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-10-11 12:06:32.195111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-10-11 12:06:32.195127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-10-11 12:06:32.195531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-10-11 12:06:32.195547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 
00:29:29.687 [2024-10-11 12:06:32.195885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-10-11 12:06:32.195902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-10-11 12:06:32.196100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-10-11 12:06:32.196124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-10-11 12:06:32.196346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-10-11 12:06:32.196366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-10-11 12:06:32.196698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-10-11 12:06:32.196717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-10-11 12:06:32.197076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-10-11 12:06:32.197098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-10-11 12:06:32.197440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-10-11 12:06:32.197458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-10-11 12:06:32.197791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-10-11 12:06:32.197811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-10-11 12:06:32.198139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-10-11 12:06:32.198160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-10-11 12:06:32.198540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-10-11 12:06:32.198560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-10-11 12:06:32.198892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-10-11 12:06:32.198911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 
00:29:29.687 [2024-10-11 12:06:32.199246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.687 [2024-10-11 12:06:32.199267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420
00:29:29.687 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111; sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every retry logged between 12:06:32.199 and 12:06:32.278 ...]
00:29:29.693 [2024-10-11 12:06:32.278256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.693 [2024-10-11 12:06:32.278288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420
00:29:29.693 qpair failed and we were unable to recover it.
00:29:29.693 [2024-10-11 12:06:32.278644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.693 [2024-10-11 12:06:32.278674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.693 qpair failed and we were unable to recover it. 00:29:29.693 [2024-10-11 12:06:32.279028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.693 [2024-10-11 12:06:32.279058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.693 qpair failed and we were unable to recover it. 00:29:29.693 [2024-10-11 12:06:32.279424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.693 [2024-10-11 12:06:32.279456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.693 qpair failed and we were unable to recover it. 00:29:29.693 [2024-10-11 12:06:32.279808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.693 [2024-10-11 12:06:32.279840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.693 qpair failed and we were unable to recover it. 00:29:29.693 [2024-10-11 12:06:32.280204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.693 [2024-10-11 12:06:32.280236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.693 qpair failed and we were unable to recover it. 00:29:29.693 [2024-10-11 12:06:32.280613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.693 [2024-10-11 12:06:32.280644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.693 qpair failed and we were unable to recover it. 00:29:29.693 [2024-10-11 12:06:32.280992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.693 [2024-10-11 12:06:32.281026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.693 qpair failed and we were unable to recover it. 00:29:29.693 [2024-10-11 12:06:32.281310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.693 [2024-10-11 12:06:32.281343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.693 qpair failed and we were unable to recover it. 00:29:29.693 [2024-10-11 12:06:32.281701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.693 [2024-10-11 12:06:32.281733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.693 qpair failed and we were unable to recover it. 00:29:29.693 [2024-10-11 12:06:32.282099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.693 [2024-10-11 12:06:32.282132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.693 qpair failed and we were unable to recover it. 
00:29:29.693 [2024-10-11 12:06:32.282489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.693 [2024-10-11 12:06:32.282521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.693 qpair failed and we were unable to recover it. 00:29:29.693 [2024-10-11 12:06:32.282874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.693 [2024-10-11 12:06:32.282906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.693 qpair failed and we were unable to recover it. 00:29:29.693 [2024-10-11 12:06:32.283243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.693 [2024-10-11 12:06:32.283275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.693 qpair failed and we were unable to recover it. 00:29:29.693 [2024-10-11 12:06:32.283624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.693 [2024-10-11 12:06:32.283654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.693 qpair failed and we were unable to recover it. 00:29:29.693 [2024-10-11 12:06:32.284017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.693 [2024-10-11 12:06:32.284048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.693 qpair failed and we were unable to recover it. 00:29:29.693 [2024-10-11 12:06:32.284393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.693 [2024-10-11 12:06:32.284428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.693 qpair failed and we were unable to recover it. 00:29:29.693 [2024-10-11 12:06:32.284786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.693 [2024-10-11 12:06:32.284820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.693 qpair failed and we were unable to recover it. 00:29:29.693 [2024-10-11 12:06:32.285182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.693 [2024-10-11 12:06:32.285214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.693 qpair failed and we were unable to recover it. 00:29:29.693 [2024-10-11 12:06:32.285572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.693 [2024-10-11 12:06:32.285604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.693 qpair failed and we were unable to recover it. 00:29:29.693 [2024-10-11 12:06:32.285956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.693 [2024-10-11 12:06:32.285988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.693 qpair failed and we were unable to recover it. 
00:29:29.693 [2024-10-11 12:06:32.286348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.693 [2024-10-11 12:06:32.286387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.693 qpair failed and we were unable to recover it. 00:29:29.693 [2024-10-11 12:06:32.286778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.693 [2024-10-11 12:06:32.286809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.693 qpair failed and we were unable to recover it. 00:29:29.693 [2024-10-11 12:06:32.287178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.693 [2024-10-11 12:06:32.287211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.693 qpair failed and we were unable to recover it. 00:29:29.693 [2024-10-11 12:06:32.287557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.693 [2024-10-11 12:06:32.287590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.693 qpair failed and we were unable to recover it. 00:29:29.693 [2024-10-11 12:06:32.287942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.693 [2024-10-11 12:06:32.287973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.693 qpair failed and we were unable to recover it. 00:29:29.693 [2024-10-11 12:06:32.288331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.693 [2024-10-11 12:06:32.288364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.693 qpair failed and we were unable to recover it. 00:29:29.693 [2024-10-11 12:06:32.288729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.693 [2024-10-11 12:06:32.288763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.693 qpair failed and we were unable to recover it. 00:29:29.693 [2024-10-11 12:06:32.289121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.693 [2024-10-11 12:06:32.289153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.693 qpair failed and we were unable to recover it. 00:29:29.693 [2024-10-11 12:06:32.289500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.693 [2024-10-11 12:06:32.289532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.693 qpair failed and we were unable to recover it. 00:29:29.693 [2024-10-11 12:06:32.289782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.693 [2024-10-11 12:06:32.289816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.693 qpair failed and we were unable to recover it. 
00:29:29.693 [2024-10-11 12:06:32.290169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.693 [2024-10-11 12:06:32.290201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.693 qpair failed and we were unable to recover it. 00:29:29.693 [2024-10-11 12:06:32.290555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.693 [2024-10-11 12:06:32.290587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.693 qpair failed and we were unable to recover it. 00:29:29.693 [2024-10-11 12:06:32.290947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.693 [2024-10-11 12:06:32.290981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.693 qpair failed and we were unable to recover it. 00:29:29.693 [2024-10-11 12:06:32.291321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.693 [2024-10-11 12:06:32.291353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.693 qpair failed and we were unable to recover it. 00:29:29.693 [2024-10-11 12:06:32.291713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.693 [2024-10-11 12:06:32.291746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.693 qpair failed and we were unable to recover it. 00:29:29.693 [2024-10-11 12:06:32.292102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.694 [2024-10-11 12:06:32.292136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.694 qpair failed and we were unable to recover it. 00:29:29.694 [2024-10-11 12:06:32.292486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.694 [2024-10-11 12:06:32.292519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.694 qpair failed and we were unable to recover it. 00:29:29.694 [2024-10-11 12:06:32.292874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.694 [2024-10-11 12:06:32.292906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.694 qpair failed and we were unable to recover it. 00:29:29.694 [2024-10-11 12:06:32.293271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.694 [2024-10-11 12:06:32.293304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.694 qpair failed and we were unable to recover it. 00:29:29.694 [2024-10-11 12:06:32.293647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.694 [2024-10-11 12:06:32.293679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.694 qpair failed and we were unable to recover it. 
00:29:29.694 [2024-10-11 12:06:32.294038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.694 [2024-10-11 12:06:32.294091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.694 qpair failed and we were unable to recover it. 00:29:29.694 [2024-10-11 12:06:32.294406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.694 [2024-10-11 12:06:32.294437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.694 qpair failed and we were unable to recover it. 00:29:29.694 [2024-10-11 12:06:32.294798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.694 [2024-10-11 12:06:32.294832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.694 qpair failed and we were unable to recover it. 00:29:29.694 [2024-10-11 12:06:32.295188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.694 [2024-10-11 12:06:32.295221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.694 qpair failed and we were unable to recover it. 00:29:29.694 [2024-10-11 12:06:32.295587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.694 [2024-10-11 12:06:32.295619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.694 qpair failed and we were unable to recover it. 00:29:29.694 [2024-10-11 12:06:32.295986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.694 [2024-10-11 12:06:32.296018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.694 qpair failed and we were unable to recover it. 00:29:29.694 [2024-10-11 12:06:32.296386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.694 [2024-10-11 12:06:32.296417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.694 qpair failed and we were unable to recover it. 00:29:29.694 [2024-10-11 12:06:32.296760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.694 [2024-10-11 12:06:32.296791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.694 qpair failed and we were unable to recover it. 00:29:29.694 [2024-10-11 12:06:32.297141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.694 [2024-10-11 12:06:32.297174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.694 qpair failed and we were unable to recover it. 00:29:29.694 [2024-10-11 12:06:32.297412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.694 [2024-10-11 12:06:32.297443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.694 qpair failed and we were unable to recover it. 
00:29:29.694 [2024-10-11 12:06:32.297808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.694 [2024-10-11 12:06:32.297840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.694 qpair failed and we were unable to recover it. 00:29:29.694 [2024-10-11 12:06:32.298199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.694 [2024-10-11 12:06:32.298233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.694 qpair failed and we were unable to recover it. 00:29:29.694 [2024-10-11 12:06:32.298574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.694 [2024-10-11 12:06:32.298605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.694 qpair failed and we were unable to recover it. 00:29:29.694 [2024-10-11 12:06:32.298955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.694 [2024-10-11 12:06:32.298986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.694 qpair failed and we were unable to recover it. 00:29:29.694 [2024-10-11 12:06:32.299337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.694 [2024-10-11 12:06:32.299367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.694 qpair failed and we were unable to recover it. 00:29:29.694 [2024-10-11 12:06:32.299728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.694 [2024-10-11 12:06:32.299759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.694 qpair failed and we were unable to recover it. 00:29:29.694 [2024-10-11 12:06:32.300115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.694 [2024-10-11 12:06:32.300146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.694 qpair failed and we were unable to recover it. 00:29:29.694 [2024-10-11 12:06:32.300524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.694 [2024-10-11 12:06:32.300556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.694 qpair failed and we were unable to recover it. 00:29:29.694 [2024-10-11 12:06:32.300916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.694 [2024-10-11 12:06:32.300946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.694 qpair failed and we were unable to recover it. 00:29:29.694 [2024-10-11 12:06:32.301309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.694 [2024-10-11 12:06:32.301342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.694 qpair failed and we were unable to recover it. 
00:29:29.694 [2024-10-11 12:06:32.301700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.694 [2024-10-11 12:06:32.301730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.694 qpair failed and we were unable to recover it. 00:29:29.694 [2024-10-11 12:06:32.302098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.694 [2024-10-11 12:06:32.302138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.694 qpair failed and we were unable to recover it. 00:29:29.694 [2024-10-11 12:06:32.302505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.694 [2024-10-11 12:06:32.302537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.694 qpair failed and we were unable to recover it. 00:29:29.694 [2024-10-11 12:06:32.302887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.694 [2024-10-11 12:06:32.302917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.694 qpair failed and we were unable to recover it. 00:29:29.694 [2024-10-11 12:06:32.303334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.694 [2024-10-11 12:06:32.303366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.694 qpair failed and we were unable to recover it. 00:29:29.694 [2024-10-11 12:06:32.303720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.694 [2024-10-11 12:06:32.303751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.694 qpair failed and we were unable to recover it. 00:29:29.694 [2024-10-11 12:06:32.304130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.694 [2024-10-11 12:06:32.304162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.694 qpair failed and we were unable to recover it. 00:29:29.694 [2024-10-11 12:06:32.304519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.694 [2024-10-11 12:06:32.304550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.694 qpair failed and we were unable to recover it. 00:29:29.694 [2024-10-11 12:06:32.304908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.694 [2024-10-11 12:06:32.304939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.694 qpair failed and we were unable to recover it. 00:29:29.694 [2024-10-11 12:06:32.305313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.694 [2024-10-11 12:06:32.305348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.694 qpair failed and we were unable to recover it. 
00:29:29.694 [2024-10-11 12:06:32.305708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.694 [2024-10-11 12:06:32.305740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.694 qpair failed and we were unable to recover it. 00:29:29.694 [2024-10-11 12:06:32.306105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.694 [2024-10-11 12:06:32.306138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.694 qpair failed and we were unable to recover it. 00:29:29.694 [2024-10-11 12:06:32.306532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.694 [2024-10-11 12:06:32.306563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.694 qpair failed and we were unable to recover it. 00:29:29.694 [2024-10-11 12:06:32.306912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.694 [2024-10-11 12:06:32.306944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.694 qpair failed and we were unable to recover it. 00:29:29.694 [2024-10-11 12:06:32.307305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.694 [2024-10-11 12:06:32.307340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.694 qpair failed and we were unable to recover it. 00:29:29.694 [2024-10-11 12:06:32.307734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.694 [2024-10-11 12:06:32.307769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.695 qpair failed and we were unable to recover it. 00:29:29.695 [2024-10-11 12:06:32.308123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.695 [2024-10-11 12:06:32.308157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.695 qpair failed and we were unable to recover it. 00:29:29.695 [2024-10-11 12:06:32.308507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.695 [2024-10-11 12:06:32.308540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.695 qpair failed and we were unable to recover it. 00:29:29.695 [2024-10-11 12:06:32.308897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.695 [2024-10-11 12:06:32.308932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.695 qpair failed and we were unable to recover it. 00:29:29.695 [2024-10-11 12:06:32.309291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.695 [2024-10-11 12:06:32.309326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.695 qpair failed and we were unable to recover it. 
00:29:29.695 [2024-10-11 12:06:32.309679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.695 [2024-10-11 12:06:32.309714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.695 qpair failed and we were unable to recover it. 00:29:29.695 [2024-10-11 12:06:32.310103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.695 [2024-10-11 12:06:32.310138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.695 qpair failed and we were unable to recover it. 00:29:29.695 [2024-10-11 12:06:32.310380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.695 [2024-10-11 12:06:32.310414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.695 qpair failed and we were unable to recover it. 00:29:29.695 [2024-10-11 12:06:32.310762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.695 [2024-10-11 12:06:32.310797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.695 qpair failed and we were unable to recover it. 00:29:29.695 [2024-10-11 12:06:32.311195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.695 [2024-10-11 12:06:32.311230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.695 qpair failed and we were unable to recover it. 00:29:29.695 [2024-10-11 12:06:32.311587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.695 [2024-10-11 12:06:32.311621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.695 qpair failed and we were unable to recover it. 00:29:29.695 [2024-10-11 12:06:32.311983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.695 [2024-10-11 12:06:32.312018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.695 qpair failed and we were unable to recover it. 00:29:29.695 [2024-10-11 12:06:32.312349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.695 [2024-10-11 12:06:32.312384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.695 qpair failed and we were unable to recover it. 00:29:29.695 [2024-10-11 12:06:32.312765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.695 [2024-10-11 12:06:32.312805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.695 qpair failed and we were unable to recover it. 00:29:29.695 [2024-10-11 12:06:32.313141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.695 [2024-10-11 12:06:32.313177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.695 qpair failed and we were unable to recover it. 
00:29:29.695 [2024-10-11 12:06:32.313559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.695 [2024-10-11 12:06:32.313594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.695 qpair failed and we were unable to recover it. 00:29:29.695 [2024-10-11 12:06:32.313954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.695 [2024-10-11 12:06:32.313989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.695 qpair failed and we were unable to recover it. 00:29:29.695 [2024-10-11 12:06:32.314329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.695 [2024-10-11 12:06:32.314363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.695 qpair failed and we were unable to recover it. 00:29:29.695 [2024-10-11 12:06:32.314746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.695 [2024-10-11 12:06:32.314783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.695 qpair failed and we were unable to recover it. 00:29:29.695 [2024-10-11 12:06:32.315178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.695 [2024-10-11 12:06:32.315212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.695 qpair failed and we were unable to recover it. 00:29:29.695 [2024-10-11 12:06:32.315571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.695 [2024-10-11 12:06:32.315605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.695 qpair failed and we were unable to recover it. 00:29:29.695 [2024-10-11 12:06:32.315951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.695 [2024-10-11 12:06:32.315984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.695 qpair failed and we were unable to recover it. 00:29:29.695 [2024-10-11 12:06:32.316235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.695 [2024-10-11 12:06:32.316272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.695 qpair failed and we were unable to recover it. 00:29:29.695 [2024-10-11 12:06:32.316654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.695 [2024-10-11 12:06:32.316688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.695 qpair failed and we were unable to recover it. 00:29:29.695 [2024-10-11 12:06:32.317113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.695 [2024-10-11 12:06:32.317149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.695 qpair failed and we were unable to recover it. 
00:29:29.695 [2024-10-11 12:06:32.317566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.695 [2024-10-11 12:06:32.317600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.695 qpair failed and we were unable to recover it. 00:29:29.695 [2024-10-11 12:06:32.317965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.695 [2024-10-11 12:06:32.317998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.695 qpair failed and we were unable to recover it. 00:29:29.695 [2024-10-11 12:06:32.318351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.695 [2024-10-11 12:06:32.318388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.695 qpair failed and we were unable to recover it. 00:29:29.695 [2024-10-11 12:06:32.318734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.695 [2024-10-11 12:06:32.318768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.695 qpair failed and we were unable to recover it. 00:29:29.695 [2024-10-11 12:06:32.319118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.695 [2024-10-11 12:06:32.319152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.695 qpair failed and we were unable to recover it. 00:29:29.695 [2024-10-11 12:06:32.319516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.695 [2024-10-11 12:06:32.319549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.695 qpair failed and we were unable to recover it. 00:29:29.695 [2024-10-11 12:06:32.319903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.695 [2024-10-11 12:06:32.319937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.695 qpair failed and we were unable to recover it. 00:29:29.695 [2024-10-11 12:06:32.320277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.695 [2024-10-11 12:06:32.320312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.695 qpair failed and we were unable to recover it. 00:29:29.695 [2024-10-11 12:06:32.320697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.695 [2024-10-11 12:06:32.320729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.695 qpair failed and we were unable to recover it. 00:29:29.695 [2024-10-11 12:06:32.321090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.695 [2024-10-11 12:06:32.321125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.695 qpair failed and we were unable to recover it. 
00:29:29.695 [2024-10-11 12:06:32.321485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.695 [2024-10-11 12:06:32.321520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.695 qpair failed and we were unable to recover it. 00:29:29.695 [2024-10-11 12:06:32.321876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.695 [2024-10-11 12:06:32.321909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.695 qpair failed and we were unable to recover it. 00:29:29.695 [2024-10-11 12:06:32.322356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.695 [2024-10-11 12:06:32.322391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.695 qpair failed and we were unable to recover it. 00:29:29.695 [2024-10-11 12:06:32.322817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.695 [2024-10-11 12:06:32.322850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.695 qpair failed and we were unable to recover it. 00:29:29.695 [2024-10-11 12:06:32.323200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.695 [2024-10-11 12:06:32.323233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.695 qpair failed and we were unable to recover it. 00:29:29.695 [2024-10-11 12:06:32.323475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.695 [2024-10-11 12:06:32.323511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.695 qpair failed and we were unable to recover it. 00:29:29.695 [2024-10-11 12:06:32.323873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-10-11 12:06:32.323905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-10-11 12:06:32.324244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-10-11 12:06:32.324277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-10-11 12:06:32.324638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-10-11 12:06:32.324671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-10-11 12:06:32.324883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-10-11 12:06:32.324913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 
00:29:29.696 [2024-10-11 12:06:32.325287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-10-11 12:06:32.325321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-10-11 12:06:32.325696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-10-11 12:06:32.325729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-10-11 12:06:32.326089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-10-11 12:06:32.326123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-10-11 12:06:32.326504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-10-11 12:06:32.326537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-10-11 12:06:32.326896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-10-11 12:06:32.326928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-10-11 12:06:32.327314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-10-11 12:06:32.327348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-10-11 12:06:32.327701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-10-11 12:06:32.327732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-10-11 12:06:32.327956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-10-11 12:06:32.327987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-10-11 12:06:32.328366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-10-11 12:06:32.328401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-10-11 12:06:32.328752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-10-11 12:06:32.328791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 
00:29:29.696 [2024-10-11 12:06:32.329135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-10-11 12:06:32.329170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-10-11 12:06:32.329575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-10-11 12:06:32.329608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-10-11 12:06:32.329832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-10-11 12:06:32.329863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-10-11 12:06:32.330219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-10-11 12:06:32.330252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-10-11 12:06:32.330601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-10-11 12:06:32.330634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-10-11 12:06:32.330994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-10-11 12:06:32.331028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-10-11 12:06:32.331478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-10-11 12:06:32.331512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-10-11 12:06:32.331873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-10-11 12:06:32.331907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-10-11 12:06:32.332272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-10-11 12:06:32.332305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-10-11 12:06:32.332667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-10-11 12:06:32.332699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 
00:29:29.696 [2024-10-11 12:06:32.333057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-10-11 12:06:32.333102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-10-11 12:06:32.333464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-10-11 12:06:32.333495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-10-11 12:06:32.333852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-10-11 12:06:32.333885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-10-11 12:06:32.334257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-10-11 12:06:32.334291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-10-11 12:06:32.334643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-10-11 12:06:32.334676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-10-11 12:06:32.335021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-10-11 12:06:32.335055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-10-11 12:06:32.335481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-10-11 12:06:32.335514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-10-11 12:06:32.335879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-10-11 12:06:32.335912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-10-11 12:06:32.336256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-10-11 12:06:32.336291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-10-11 12:06:32.336619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-10-11 12:06:32.336653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 
00:29:29.696 [2024-10-11 12:06:32.336996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-10-11 12:06:32.337030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-10-11 12:06:32.337404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-10-11 12:06:32.337438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-10-11 12:06:32.337796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-10-11 12:06:32.337830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-10-11 12:06:32.338087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-10-11 12:06:32.338124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-10-11 12:06:32.338498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-10-11 12:06:32.338530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-10-11 12:06:32.338882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-10-11 12:06:32.338915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.697 [2024-10-11 12:06:32.339155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-10-11 12:06:32.339193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-10-11 12:06:32.339559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-10-11 12:06:32.339591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-10-11 12:06:32.339951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-10-11 12:06:32.339984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-10-11 12:06:32.340348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-10-11 12:06:32.340379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 
00:29:29.697 [2024-10-11 12:06:32.340758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-10-11 12:06:32.340788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-10-11 12:06:32.341138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-10-11 12:06:32.341171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-10-11 12:06:32.341540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-10-11 12:06:32.341572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-10-11 12:06:32.341936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-10-11 12:06:32.341969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-10-11 12:06:32.342328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-10-11 12:06:32.342361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-10-11 12:06:32.342712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-10-11 12:06:32.342745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-10-11 12:06:32.343093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-10-11 12:06:32.343146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-10-11 12:06:32.343520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-10-11 12:06:32.343554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-10-11 12:06:32.343906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-10-11 12:06:32.343938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-10-11 12:06:32.344264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-10-11 12:06:32.344298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 
00:29:29.697 [2024-10-11 12:06:32.344645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-10-11 12:06:32.344678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-10-11 12:06:32.345039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-10-11 12:06:32.345082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-10-11 12:06:32.345464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-10-11 12:06:32.345496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-10-11 12:06:32.345856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-10-11 12:06:32.345889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-10-11 12:06:32.346238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-10-11 12:06:32.346271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-10-11 12:06:32.346610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-10-11 12:06:32.346643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-10-11 12:06:32.346995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-10-11 12:06:32.347029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-10-11 12:06:32.347417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-10-11 12:06:32.347451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-10-11 12:06:32.347816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-10-11 12:06:32.347851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-10-11 12:06:32.348188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-10-11 12:06:32.348223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 
00:29:29.697 [2024-10-11 12:06:32.348584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-10-11 12:06:32.348617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-10-11 12:06:32.348969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-10-11 12:06:32.349004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-10-11 12:06:32.349380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-10-11 12:06:32.349413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-10-11 12:06:32.349764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-10-11 12:06:32.349796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-10-11 12:06:32.350096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-10-11 12:06:32.350130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-10-11 12:06:32.350458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-10-11 12:06:32.350493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-10-11 12:06:32.350843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-10-11 12:06:32.350877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-10-11 12:06:32.351234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-10-11 12:06:32.351269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-10-11 12:06:32.351621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-10-11 12:06:32.351655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-10-11 12:06:32.352013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-10-11 12:06:32.352046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 
00:29:29.697 [2024-10-11 12:06:32.352442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-10-11 12:06:32.352477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-10-11 12:06:32.352837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-10-11 12:06:32.352870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-10-11 12:06:32.353195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-10-11 12:06:32.353229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.698 [2024-10-11 12:06:32.353597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-10-11 12:06:32.353630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-10-11 12:06:32.354032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-10-11 12:06:32.354076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-10-11 12:06:32.354459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-10-11 12:06:32.354492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-10-11 12:06:32.354850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-10-11 12:06:32.354884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-10-11 12:06:32.355149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-10-11 12:06:32.355190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-10-11 12:06:32.355580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-10-11 12:06:32.355614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-10-11 12:06:32.355989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-10-11 12:06:32.356021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 
00:29:29.698 [2024-10-11 12:06:32.356270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-10-11 12:06:32.356305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-10-11 12:06:32.356684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-10-11 12:06:32.356718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-10-11 12:06:32.356943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-10-11 12:06:32.356979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-10-11 12:06:32.357333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-10-11 12:06:32.357367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-10-11 12:06:32.357724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-10-11 12:06:32.357756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-10-11 12:06:32.358111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-10-11 12:06:32.358143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-10-11 12:06:32.358479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-10-11 12:06:32.358510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-10-11 12:06:32.358757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-10-11 12:06:32.358793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-10-11 12:06:32.359172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-10-11 12:06:32.359206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-10-11 12:06:32.359571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-10-11 12:06:32.359603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 
00:29:29.698 [2024-10-11 12:06:32.359959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-10-11 12:06:32.359992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-10-11 12:06:32.360403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-10-11 12:06:32.360435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-10-11 12:06:32.360789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-10-11 12:06:32.360822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-10-11 12:06:32.361178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-10-11 12:06:32.361213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-10-11 12:06:32.361570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-10-11 12:06:32.361601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-10-11 12:06:32.361967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-10-11 12:06:32.361999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-10-11 12:06:32.362253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-10-11 12:06:32.362286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-10-11 12:06:32.362647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-10-11 12:06:32.362678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-10-11 12:06:32.363085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-10-11 12:06:32.363118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-10-11 12:06:32.363361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-10-11 12:06:32.363395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 
00:29:29.698 [2024-10-11 12:06:32.363754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-10-11 12:06:32.363787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-10-11 12:06:32.364137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-10-11 12:06:32.364169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-10-11 12:06:32.364533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-10-11 12:06:32.364565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-10-11 12:06:32.364926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-10-11 12:06:32.364959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-10-11 12:06:32.365321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-10-11 12:06:32.365361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-10-11 12:06:32.365712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-10-11 12:06:32.365745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-10-11 12:06:32.366111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-10-11 12:06:32.366145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-10-11 12:06:32.366503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-10-11 12:06:32.366535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-10-11 12:06:32.366889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-10-11 12:06:32.366922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-10-11 12:06:32.367263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-10-11 12:06:32.367295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 
00:29:29.698 [2024-10-11 12:06:32.367556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-10-11 12:06:32.367588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-10-11 12:06:32.367926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-10-11 12:06:32.367959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-10-11 12:06:32.368321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-10-11 12:06:32.368355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-10-11 12:06:32.368732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-10-11 12:06:32.368763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.699 [2024-10-11 12:06:32.369126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.699 [2024-10-11 12:06:32.369158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.699 qpair failed and we were unable to recover it. 00:29:29.699 [2024-10-11 12:06:32.369542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.699 [2024-10-11 12:06:32.369574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.699 qpair failed and we were unable to recover it. 00:29:29.699 [2024-10-11 12:06:32.369947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.699 [2024-10-11 12:06:32.369979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.699 qpair failed and we were unable to recover it. 00:29:29.699 [2024-10-11 12:06:32.370352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.699 [2024-10-11 12:06:32.370386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.699 qpair failed and we were unable to recover it. 00:29:29.699 [2024-10-11 12:06:32.370789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.699 [2024-10-11 12:06:32.370821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.699 qpair failed and we were unable to recover it. 00:29:29.972 [2024-10-11 12:06:32.371183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.972 [2024-10-11 12:06:32.371216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.972 qpair failed and we were unable to recover it. 
00:29:29.972 [2024-10-11 12:06:32.371580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.972 [2024-10-11 12:06:32.371615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.972 qpair failed and we were unable to recover it. 00:29:29.972 [2024-10-11 12:06:32.371969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.972 [2024-10-11 12:06:32.372000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.972 qpair failed and we were unable to recover it. 00:29:29.972 [2024-10-11 12:06:32.372367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.972 [2024-10-11 12:06:32.372402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.972 qpair failed and we were unable to recover it. 00:29:29.972 [2024-10-11 12:06:32.372754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.973 [2024-10-11 12:06:32.372785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.973 qpair failed and we were unable to recover it. 00:29:29.973 [2024-10-11 12:06:32.373141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.973 [2024-10-11 12:06:32.373176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.973 qpair failed and we were unable to recover it. 00:29:29.973 [2024-10-11 12:06:32.373604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.973 [2024-10-11 12:06:32.373637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.973 qpair failed and we were unable to recover it. 00:29:29.973 [2024-10-11 12:06:32.373985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.973 [2024-10-11 12:06:32.374018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.973 qpair failed and we were unable to recover it. 00:29:29.973 [2024-10-11 12:06:32.374362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.973 [2024-10-11 12:06:32.374394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.973 qpair failed and we were unable to recover it. 00:29:29.973 [2024-10-11 12:06:32.374735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.973 [2024-10-11 12:06:32.374769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.973 qpair failed and we were unable to recover it. 00:29:29.973 [2024-10-11 12:06:32.375117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.973 [2024-10-11 12:06:32.375151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.973 qpair failed and we were unable to recover it. 
00:29:29.973 [2024-10-11 12:06:32.375504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.973 [2024-10-11 12:06:32.375537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.973 qpair failed and we were unable to recover it. 00:29:29.973 [2024-10-11 12:06:32.375894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.973 [2024-10-11 12:06:32.375928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.973 qpair failed and we were unable to recover it. 00:29:29.973 [2024-10-11 12:06:32.376265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.973 [2024-10-11 12:06:32.376298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.973 qpair failed and we were unable to recover it. 00:29:29.973 [2024-10-11 12:06:32.376655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.973 [2024-10-11 12:06:32.376685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.973 qpair failed and we were unable to recover it. 00:29:29.973 [2024-10-11 12:06:32.377044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.973 [2024-10-11 12:06:32.377088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.973 qpair failed and we were unable to recover it. 00:29:29.973 [2024-10-11 12:06:32.377437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.973 [2024-10-11 12:06:32.377469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.973 qpair failed and we were unable to recover it. 00:29:29.973 [2024-10-11 12:06:32.377828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.973 [2024-10-11 12:06:32.377858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.973 qpair failed and we were unable to recover it. 00:29:29.973 [2024-10-11 12:06:32.378217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.973 [2024-10-11 12:06:32.378251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.973 qpair failed and we were unable to recover it. 00:29:29.973 [2024-10-11 12:06:32.378571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.973 [2024-10-11 12:06:32.378602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.973 qpair failed and we were unable to recover it. 00:29:29.973 [2024-10-11 12:06:32.378941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.973 [2024-10-11 12:06:32.378973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.973 qpair failed and we were unable to recover it. 
00:29:29.973 [2024-10-11 12:06:32.379334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.973 [2024-10-11 12:06:32.379367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.973 qpair failed and we were unable to recover it. 00:29:29.973 [2024-10-11 12:06:32.379716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.973 [2024-10-11 12:06:32.379750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.973 qpair failed and we were unable to recover it. 00:29:29.973 [2024-10-11 12:06:32.380108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.973 [2024-10-11 12:06:32.380141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.973 qpair failed and we were unable to recover it. 00:29:29.973 [2024-10-11 12:06:32.380497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.973 [2024-10-11 12:06:32.380530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.973 qpair failed and we were unable to recover it. 00:29:29.973 [2024-10-11 12:06:32.380893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.973 [2024-10-11 12:06:32.380925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.973 qpair failed and we were unable to recover it. 00:29:29.973 [2024-10-11 12:06:32.381298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.973 [2024-10-11 12:06:32.381337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.973 qpair failed and we were unable to recover it. 00:29:29.973 [2024-10-11 12:06:32.381685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.973 [2024-10-11 12:06:32.381717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.973 qpair failed and we were unable to recover it. 00:29:29.973 [2024-10-11 12:06:32.382056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.973 [2024-10-11 12:06:32.382101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.973 qpair failed and we were unable to recover it. 00:29:29.973 [2024-10-11 12:06:32.382481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.973 [2024-10-11 12:06:32.382512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.973 qpair failed and we were unable to recover it. 00:29:29.973 [2024-10-11 12:06:32.382877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.973 [2024-10-11 12:06:32.382909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.973 qpair failed and we were unable to recover it. 
00:29:29.973 [2024-10-11 12:06:32.383271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.973 [2024-10-11 12:06:32.383303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.973 qpair failed and we were unable to recover it. 00:29:29.973 [2024-10-11 12:06:32.383660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.973 [2024-10-11 12:06:32.383690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.973 qpair failed and we were unable to recover it. 00:29:29.973 [2024-10-11 12:06:32.383938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.973 [2024-10-11 12:06:32.383969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.973 qpair failed and we were unable to recover it. 00:29:29.973 [2024-10-11 12:06:32.384326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.973 [2024-10-11 12:06:32.384357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.973 qpair failed and we were unable to recover it. 00:29:29.973 [2024-10-11 12:06:32.384736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.973 [2024-10-11 12:06:32.384769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.973 qpair failed and we were unable to recover it. 00:29:29.973 [2024-10-11 12:06:32.385128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.973 [2024-10-11 12:06:32.385159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.973 qpair failed and we were unable to recover it. 00:29:29.973 [2024-10-11 12:06:32.385529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.973 [2024-10-11 12:06:32.385561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.973 qpair failed and we were unable to recover it. 00:29:29.973 [2024-10-11 12:06:32.385929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.973 [2024-10-11 12:06:32.385959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.973 qpair failed and we were unable to recover it. 00:29:29.973 [2024-10-11 12:06:32.386321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.973 [2024-10-11 12:06:32.386355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.973 qpair failed and we were unable to recover it. 00:29:29.973 [2024-10-11 12:06:32.386754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.973 [2024-10-11 12:06:32.386788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.973 qpair failed and we were unable to recover it. 
00:29:29.973 [2024-10-11 12:06:32.387139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.973 [2024-10-11 12:06:32.387172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.973 qpair failed and we were unable to recover it. 00:29:29.973 [2024-10-11 12:06:32.387534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.973 [2024-10-11 12:06:32.387565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.973 qpair failed and we were unable to recover it. 00:29:29.973 [2024-10-11 12:06:32.387926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.973 [2024-10-11 12:06:32.387957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.973 qpair failed and we were unable to recover it. 00:29:29.973 [2024-10-11 12:06:32.388314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.974 [2024-10-11 12:06:32.388346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.974 qpair failed and we were unable to recover it. 00:29:29.974 [2024-10-11 12:06:32.388700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.974 [2024-10-11 12:06:32.388731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.974 qpair failed and we were unable to recover it. 00:29:29.974 [2024-10-11 12:06:32.389101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.974 [2024-10-11 12:06:32.389133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.974 qpair failed and we were unable to recover it. 00:29:29.974 [2024-10-11 12:06:32.389499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.974 [2024-10-11 12:06:32.389531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.974 qpair failed and we were unable to recover it. 00:29:29.974 [2024-10-11 12:06:32.389891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.974 [2024-10-11 12:06:32.389921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.974 qpair failed and we were unable to recover it. 00:29:29.974 [2024-10-11 12:06:32.390289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.974 [2024-10-11 12:06:32.390322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.974 qpair failed and we were unable to recover it. 00:29:29.974 [2024-10-11 12:06:32.390695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.974 [2024-10-11 12:06:32.390727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.974 qpair failed and we were unable to recover it. 
00:29:29.974 [2024-10-11 12:06:32.391091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.974 [2024-10-11 12:06:32.391124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.974 qpair failed and we were unable to recover it. 00:29:29.974 [2024-10-11 12:06:32.391477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.974 [2024-10-11 12:06:32.391511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.974 qpair failed and we were unable to recover it. 00:29:29.974 [2024-10-11 12:06:32.391876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.974 [2024-10-11 12:06:32.391908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.974 qpair failed and we were unable to recover it. 00:29:29.974 [2024-10-11 12:06:32.392254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.974 [2024-10-11 12:06:32.392289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.974 qpair failed and we were unable to recover it. 00:29:29.974 [2024-10-11 12:06:32.392637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.974 [2024-10-11 12:06:32.392669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.974 qpair failed and we were unable to recover it. 00:29:29.974 [2024-10-11 12:06:32.393024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.974 [2024-10-11 12:06:32.393054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.974 qpair failed and we were unable to recover it. 00:29:29.974 [2024-10-11 12:06:32.393427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.974 [2024-10-11 12:06:32.393459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.974 qpair failed and we were unable to recover it. 00:29:29.974 [2024-10-11 12:06:32.393824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.974 [2024-10-11 12:06:32.393857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.974 qpair failed and we were unable to recover it. 00:29:29.974 [2024-10-11 12:06:32.394208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.974 [2024-10-11 12:06:32.394241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.974 qpair failed and we were unable to recover it. 00:29:29.974 [2024-10-11 12:06:32.394570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.974 [2024-10-11 12:06:32.394600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.974 qpair failed and we were unable to recover it. 
00:29:29.974 [2024-10-11 12:06:32.394827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.974 [2024-10-11 12:06:32.394859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420
00:29:29.974 qpair failed and we were unable to recover it.
00:29:29.974 [2024-10-11 12:06:32.395240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.974 [2024-10-11 12:06:32.395272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420
00:29:29.974 qpair failed and we were unable to recover it.
00:29:29.974 [2024-10-11 12:06:32.395641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.974 [2024-10-11 12:06:32.395672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420
00:29:29.974 qpair failed and we were unable to recover it.
[the same three-line failure repeats, with only the timestamps changing, for every subsequent connect attempt against tqpair=0xc817c0, addr=10.0.0.2, port=4420 between 12:06:32.396 and 12:06:32.475]
00:29:29.979 [2024-10-11 12:06:32.475334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.979 [2024-10-11 12:06:32.475367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420
00:29:29.979 qpair failed and we were unable to recover it.
00:29:29.979 [2024-10-11 12:06:32.475719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.979 [2024-10-11 12:06:32.475756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.979 qpair failed and we were unable to recover it. 00:29:29.979 [2024-10-11 12:06:32.476112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.979 [2024-10-11 12:06:32.476144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.979 qpair failed and we were unable to recover it. 00:29:29.979 [2024-10-11 12:06:32.476496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.979 [2024-10-11 12:06:32.476529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.979 qpair failed and we were unable to recover it. 00:29:29.979 [2024-10-11 12:06:32.476884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.979 [2024-10-11 12:06:32.476916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.979 qpair failed and we were unable to recover it. 00:29:29.979 [2024-10-11 12:06:32.477255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.980 [2024-10-11 12:06:32.477286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.980 qpair failed and we were unable to recover it. 00:29:29.980 [2024-10-11 12:06:32.477656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.980 [2024-10-11 12:06:32.477687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.980 qpair failed and we were unable to recover it. 00:29:29.980 [2024-10-11 12:06:32.478047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.980 [2024-10-11 12:06:32.478098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.980 qpair failed and we were unable to recover it. 00:29:29.980 [2024-10-11 12:06:32.478426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.980 [2024-10-11 12:06:32.478458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.980 qpair failed and we were unable to recover it. 00:29:29.980 [2024-10-11 12:06:32.478806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.980 [2024-10-11 12:06:32.478836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.980 qpair failed and we were unable to recover it. 00:29:29.980 [2024-10-11 12:06:32.479196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.980 [2024-10-11 12:06:32.479229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.980 qpair failed and we were unable to recover it. 
00:29:29.980 [2024-10-11 12:06:32.479585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.980 [2024-10-11 12:06:32.479616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.980 qpair failed and we were unable to recover it. 00:29:29.980 [2024-10-11 12:06:32.479972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.980 [2024-10-11 12:06:32.480003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.980 qpair failed and we were unable to recover it. 00:29:29.980 [2024-10-11 12:06:32.480358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.980 [2024-10-11 12:06:32.480390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.980 qpair failed and we were unable to recover it. 00:29:29.980 [2024-10-11 12:06:32.480746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.980 [2024-10-11 12:06:32.480778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.980 qpair failed and we were unable to recover it. 00:29:29.980 [2024-10-11 12:06:32.481145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.980 [2024-10-11 12:06:32.481178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.980 qpair failed and we were unable to recover it. 00:29:29.980 [2024-10-11 12:06:32.481534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.980 [2024-10-11 12:06:32.481564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.980 qpair failed and we were unable to recover it. 00:29:29.980 [2024-10-11 12:06:32.481920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.980 [2024-10-11 12:06:32.481950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.980 qpair failed and we were unable to recover it. 00:29:29.980 [2024-10-11 12:06:32.482312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.980 [2024-10-11 12:06:32.482344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.980 qpair failed and we were unable to recover it. 00:29:29.980 [2024-10-11 12:06:32.482598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.980 [2024-10-11 12:06:32.482629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.980 qpair failed and we were unable to recover it. 00:29:29.980 [2024-10-11 12:06:32.482979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.980 [2024-10-11 12:06:32.483012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.980 qpair failed and we were unable to recover it. 
00:29:29.980 [2024-10-11 12:06:32.483371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.980 [2024-10-11 12:06:32.483403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.980 qpair failed and we were unable to recover it. 00:29:29.980 [2024-10-11 12:06:32.483747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.980 [2024-10-11 12:06:32.483780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.980 qpair failed and we were unable to recover it. 00:29:29.980 [2024-10-11 12:06:32.484136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.980 [2024-10-11 12:06:32.484169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.980 qpair failed and we were unable to recover it. 00:29:29.980 [2024-10-11 12:06:32.484531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.980 [2024-10-11 12:06:32.484563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.980 qpair failed and we were unable to recover it. 00:29:29.980 [2024-10-11 12:06:32.484916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.980 [2024-10-11 12:06:32.484949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.980 qpair failed and we were unable to recover it. 00:29:29.980 [2024-10-11 12:06:32.485270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.980 [2024-10-11 12:06:32.485303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.980 qpair failed and we were unable to recover it. 00:29:29.980 [2024-10-11 12:06:32.485656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.980 [2024-10-11 12:06:32.485686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.980 qpair failed and we were unable to recover it. 00:29:29.980 [2024-10-11 12:06:32.486038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.980 [2024-10-11 12:06:32.486092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.980 qpair failed and we were unable to recover it. 00:29:29.980 [2024-10-11 12:06:32.486466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.980 [2024-10-11 12:06:32.486497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.980 qpair failed and we were unable to recover it. 00:29:29.980 [2024-10-11 12:06:32.486859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.980 [2024-10-11 12:06:32.486890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.980 qpair failed and we were unable to recover it. 
00:29:29.980 [2024-10-11 12:06:32.487246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.980 [2024-10-11 12:06:32.487279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.980 qpair failed and we were unable to recover it. 00:29:29.980 [2024-10-11 12:06:32.487636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.980 [2024-10-11 12:06:32.487666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.980 qpair failed and we were unable to recover it. 00:29:29.980 [2024-10-11 12:06:32.488031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.980 [2024-10-11 12:06:32.488073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.980 qpair failed and we were unable to recover it. 00:29:29.980 [2024-10-11 12:06:32.488305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.980 [2024-10-11 12:06:32.488336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.980 qpair failed and we were unable to recover it. 00:29:29.980 [2024-10-11 12:06:32.488703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.980 [2024-10-11 12:06:32.488733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.980 qpair failed and we were unable to recover it. 00:29:29.980 [2024-10-11 12:06:32.489091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.980 [2024-10-11 12:06:32.489123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.980 qpair failed and we were unable to recover it. 00:29:29.980 [2024-10-11 12:06:32.489482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.980 [2024-10-11 12:06:32.489513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.980 qpair failed and we were unable to recover it. 00:29:29.980 [2024-10-11 12:06:32.489876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.980 [2024-10-11 12:06:32.489906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.980 qpair failed and we were unable to recover it. 00:29:29.980 [2024-10-11 12:06:32.490272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.980 [2024-10-11 12:06:32.490304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.980 qpair failed and we were unable to recover it. 00:29:29.980 [2024-10-11 12:06:32.490652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.980 [2024-10-11 12:06:32.490682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.980 qpair failed and we were unable to recover it. 
00:29:29.980 [2024-10-11 12:06:32.491039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.980 [2024-10-11 12:06:32.491079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.980 qpair failed and we were unable to recover it. 00:29:29.980 [2024-10-11 12:06:32.491471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.980 [2024-10-11 12:06:32.491505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.980 qpair failed and we were unable to recover it. 00:29:29.980 [2024-10-11 12:06:32.491843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.980 [2024-10-11 12:06:32.491875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.980 qpair failed and we were unable to recover it. 00:29:29.980 [2024-10-11 12:06:32.492193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.980 [2024-10-11 12:06:32.492224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.980 qpair failed and we were unable to recover it. 00:29:29.980 [2024-10-11 12:06:32.492584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.980 [2024-10-11 12:06:32.492616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.980 qpair failed and we were unable to recover it. 00:29:29.980 [2024-10-11 12:06:32.492956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.981 [2024-10-11 12:06:32.492986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.981 qpair failed and we were unable to recover it. 00:29:29.981 [2024-10-11 12:06:32.493361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.981 [2024-10-11 12:06:32.493394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.981 qpair failed and we were unable to recover it. 00:29:29.981 [2024-10-11 12:06:32.493740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.981 [2024-10-11 12:06:32.493770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.981 qpair failed and we were unable to recover it. 00:29:29.981 [2024-10-11 12:06:32.494130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.981 [2024-10-11 12:06:32.494163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.981 qpair failed and we were unable to recover it. 00:29:29.981 [2024-10-11 12:06:32.494408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.981 [2024-10-11 12:06:32.494441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.981 qpair failed and we were unable to recover it. 
00:29:29.981 [2024-10-11 12:06:32.494795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.981 [2024-10-11 12:06:32.494828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.981 qpair failed and we were unable to recover it. 00:29:29.981 [2024-10-11 12:06:32.495180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.981 [2024-10-11 12:06:32.495212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.981 qpair failed and we were unable to recover it. 00:29:29.981 [2024-10-11 12:06:32.495567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.981 [2024-10-11 12:06:32.495598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.981 qpair failed and we were unable to recover it. 00:29:29.981 [2024-10-11 12:06:32.495968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.981 [2024-10-11 12:06:32.495998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.981 qpair failed and we were unable to recover it. 00:29:29.981 [2024-10-11 12:06:32.496371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.981 [2024-10-11 12:06:32.496403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.981 qpair failed and we were unable to recover it. 00:29:29.981 [2024-10-11 12:06:32.496764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.981 [2024-10-11 12:06:32.496794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.981 qpair failed and we were unable to recover it. 00:29:29.981 [2024-10-11 12:06:32.497035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.981 [2024-10-11 12:06:32.497082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.981 qpair failed and we were unable to recover it. 00:29:29.981 [2024-10-11 12:06:32.497511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.981 [2024-10-11 12:06:32.497543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.981 qpair failed and we were unable to recover it. 00:29:29.981 [2024-10-11 12:06:32.497893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.981 [2024-10-11 12:06:32.497926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.981 qpair failed and we were unable to recover it. 00:29:29.981 [2024-10-11 12:06:32.498348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.981 [2024-10-11 12:06:32.498381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.981 qpair failed and we were unable to recover it. 
00:29:29.981 [2024-10-11 12:06:32.498743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.981 [2024-10-11 12:06:32.498777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.981 qpair failed and we were unable to recover it. 00:29:29.981 [2024-10-11 12:06:32.499130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.981 [2024-10-11 12:06:32.499164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.981 qpair failed and we were unable to recover it. 00:29:29.981 [2024-10-11 12:06:32.499519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.981 [2024-10-11 12:06:32.499549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.981 qpair failed and we were unable to recover it. 00:29:29.981 [2024-10-11 12:06:32.499905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.981 [2024-10-11 12:06:32.499935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.981 qpair failed and we were unable to recover it. 00:29:29.981 [2024-10-11 12:06:32.500298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.981 [2024-10-11 12:06:32.500330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.981 qpair failed and we were unable to recover it. 00:29:29.981 [2024-10-11 12:06:32.500734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.981 [2024-10-11 12:06:32.500765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.981 qpair failed and we were unable to recover it. 00:29:29.981 [2024-10-11 12:06:32.501124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.981 [2024-10-11 12:06:32.501156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.981 qpair failed and we were unable to recover it. 00:29:29.981 [2024-10-11 12:06:32.501408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.981 [2024-10-11 12:06:32.501443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.981 qpair failed and we were unable to recover it. 00:29:29.981 [2024-10-11 12:06:32.501803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.981 [2024-10-11 12:06:32.501840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.981 qpair failed and we were unable to recover it. 00:29:29.981 [2024-10-11 12:06:32.502199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.981 [2024-10-11 12:06:32.502233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.981 qpair failed and we were unable to recover it. 
00:29:29.981 [2024-10-11 12:06:32.502595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.981 [2024-10-11 12:06:32.502626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.981 qpair failed and we were unable to recover it. 00:29:29.981 [2024-10-11 12:06:32.502993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.981 [2024-10-11 12:06:32.503024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.981 qpair failed and we were unable to recover it. 00:29:29.981 [2024-10-11 12:06:32.503420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.981 [2024-10-11 12:06:32.503452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.981 qpair failed and we were unable to recover it. 00:29:29.981 [2024-10-11 12:06:32.503814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.981 [2024-10-11 12:06:32.503846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.981 qpair failed and we were unable to recover it. 00:29:29.981 [2024-10-11 12:06:32.504228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.981 [2024-10-11 12:06:32.504260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.981 qpair failed and we were unable to recover it. 00:29:29.981 [2024-10-11 12:06:32.504411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.981 [2024-10-11 12:06:32.504444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.981 qpair failed and we were unable to recover it. 00:29:29.981 [2024-10-11 12:06:32.504859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.981 [2024-10-11 12:06:32.504891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.981 qpair failed and we were unable to recover it. 00:29:29.981 [2024-10-11 12:06:32.505268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.981 [2024-10-11 12:06:32.505300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.981 qpair failed and we were unable to recover it. 00:29:29.981 [2024-10-11 12:06:32.505543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.981 [2024-10-11 12:06:32.505577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.981 qpair failed and we were unable to recover it. 00:29:29.981 [2024-10-11 12:06:32.505945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.981 [2024-10-11 12:06:32.505976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.981 qpair failed and we were unable to recover it. 
00:29:29.981 [2024-10-11 12:06:32.506335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.982 [2024-10-11 12:06:32.506368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.982 qpair failed and we were unable to recover it. 00:29:29.982 [2024-10-11 12:06:32.506726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.982 [2024-10-11 12:06:32.506757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.982 qpair failed and we were unable to recover it. 00:29:29.982 [2024-10-11 12:06:32.507107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.982 [2024-10-11 12:06:32.507140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.982 qpair failed and we were unable to recover it. 00:29:29.982 [2024-10-11 12:06:32.507476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.982 [2024-10-11 12:06:32.507506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.982 qpair failed and we were unable to recover it. 00:29:29.982 [2024-10-11 12:06:32.507868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.982 [2024-10-11 12:06:32.507899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.982 qpair failed and we were unable to recover it. 00:29:29.982 [2024-10-11 12:06:32.508256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.982 [2024-10-11 12:06:32.508291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.982 qpair failed and we were unable to recover it. 00:29:29.982 [2024-10-11 12:06:32.508641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.982 [2024-10-11 12:06:32.508674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.982 qpair failed and we were unable to recover it. 00:29:29.982 [2024-10-11 12:06:32.509028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.982 [2024-10-11 12:06:32.509058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.982 qpair failed and we were unable to recover it. 00:29:29.982 [2024-10-11 12:06:32.509418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.982 [2024-10-11 12:06:32.509451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.982 qpair failed and we were unable to recover it. 00:29:29.982 [2024-10-11 12:06:32.509811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.982 [2024-10-11 12:06:32.509841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.982 qpair failed and we were unable to recover it. 
00:29:29.982 [2024-10-11 12:06:32.510085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.982 [2024-10-11 12:06:32.510120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.982 qpair failed and we were unable to recover it. 00:29:29.982 [2024-10-11 12:06:32.510492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.982 [2024-10-11 12:06:32.510523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.982 qpair failed and we were unable to recover it. 00:29:29.982 [2024-10-11 12:06:32.510882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.982 [2024-10-11 12:06:32.510912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.982 qpair failed and we were unable to recover it. 00:29:29.982 [2024-10-11 12:06:32.511284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.982 [2024-10-11 12:06:32.511317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.982 qpair failed and we were unable to recover it. 00:29:29.982 [2024-10-11 12:06:32.511677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.982 [2024-10-11 12:06:32.511708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.982 qpair failed and we were unable to recover it. 00:29:29.982 [2024-10-11 12:06:32.512074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.982 [2024-10-11 12:06:32.512107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.982 qpair failed and we were unable to recover it. 00:29:29.982 [2024-10-11 12:06:32.512492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.982 [2024-10-11 12:06:32.512524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.982 qpair failed and we were unable to recover it. 00:29:29.982 [2024-10-11 12:06:32.512881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.982 [2024-10-11 12:06:32.512913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.982 qpair failed and we were unable to recover it. 00:29:29.982 [2024-10-11 12:06:32.513280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.982 [2024-10-11 12:06:32.513314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.982 qpair failed and we were unable to recover it. 00:29:29.982 [2024-10-11 12:06:32.513665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.982 [2024-10-11 12:06:32.513695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.982 qpair failed and we were unable to recover it. 
00:29:29.982 [2024-10-11 12:06:32.514039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.982 [2024-10-11 12:06:32.514079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.982 qpair failed and we were unable to recover it. 00:29:29.982 [2024-10-11 12:06:32.514445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.982 [2024-10-11 12:06:32.514477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.982 qpair failed and we were unable to recover it. 00:29:29.982 [2024-10-11 12:06:32.514857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.982 [2024-10-11 12:06:32.514889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.982 qpair failed and we were unable to recover it. 00:29:29.982 [2024-10-11 12:06:32.515253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.982 [2024-10-11 12:06:32.515285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.982 qpair failed and we were unable to recover it. 00:29:29.982 [2024-10-11 12:06:32.515655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.982 [2024-10-11 12:06:32.515690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.982 qpair failed and we were unable to recover it. 00:29:29.982 [2024-10-11 12:06:32.516076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.982 [2024-10-11 12:06:32.516110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.982 qpair failed and we were unable to recover it. 00:29:29.982 [2024-10-11 12:06:32.516449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.982 [2024-10-11 12:06:32.516481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.982 qpair failed and we were unable to recover it. 00:29:29.982 [2024-10-11 12:06:32.516819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.982 [2024-10-11 12:06:32.516849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.982 qpair failed and we were unable to recover it. 00:29:29.982 [2024-10-11 12:06:32.517204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.982 [2024-10-11 12:06:32.517237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.982 qpair failed and we were unable to recover it. 00:29:29.982 [2024-10-11 12:06:32.517587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.982 [2024-10-11 12:06:32.517618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.982 qpair failed and we were unable to recover it. 
00:29:29.982 [2024-10-11 12:06:32.517977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.982 [2024-10-11 12:06:32.518008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.982 qpair failed and we were unable to recover it. 00:29:29.982 [2024-10-11 12:06:32.518382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.982 [2024-10-11 12:06:32.518416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.982 qpair failed and we were unable to recover it. 00:29:29.982 [2024-10-11 12:06:32.518812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.982 [2024-10-11 12:06:32.518845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.982 qpair failed and we were unable to recover it. 00:29:29.982 [2024-10-11 12:06:32.519199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.982 [2024-10-11 12:06:32.519231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.982 qpair failed and we were unable to recover it. 00:29:29.982 [2024-10-11 12:06:32.519588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.982 [2024-10-11 12:06:32.519621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.982 qpair failed and we were unable to recover it. 00:29:29.982 [2024-10-11 12:06:32.519977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.982 [2024-10-11 12:06:32.520011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.982 qpair failed and we were unable to recover it. 00:29:29.982 [2024-10-11 12:06:32.520405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.982 [2024-10-11 12:06:32.520438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.982 qpair failed and we were unable to recover it. 00:29:29.982 [2024-10-11 12:06:32.520795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.982 [2024-10-11 12:06:32.520825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.982 qpair failed and we were unable to recover it. 00:29:29.982 [2024-10-11 12:06:32.521181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.982 [2024-10-11 12:06:32.521214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.982 qpair failed and we were unable to recover it. 00:29:29.982 [2024-10-11 12:06:32.521580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.982 [2024-10-11 12:06:32.521610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.982 qpair failed and we were unable to recover it. 
00:29:29.982 [2024-10-11 12:06:32.521963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.982 [2024-10-11 12:06:32.521995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.982 qpair failed and we were unable to recover it. 00:29:29.982 [2024-10-11 12:06:32.522361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.983 [2024-10-11 12:06:32.522395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.983 qpair failed and we were unable to recover it. 00:29:29.983 [2024-10-11 12:06:32.522747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.983 [2024-10-11 12:06:32.522780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.983 qpair failed and we were unable to recover it. 00:29:29.983 [2024-10-11 12:06:32.523151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.983 [2024-10-11 12:06:32.523183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.983 qpair failed and we were unable to recover it. 00:29:29.983 [2024-10-11 12:06:32.523539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.983 [2024-10-11 12:06:32.523570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.983 qpair failed and we were unable to recover it. 00:29:29.983 [2024-10-11 12:06:32.523932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.983 [2024-10-11 12:06:32.523963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.983 qpair failed and we were unable to recover it. 00:29:29.983 [2024-10-11 12:06:32.524319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.983 [2024-10-11 12:06:32.524352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.983 qpair failed and we were unable to recover it. 00:29:29.983 [2024-10-11 12:06:32.524594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.983 [2024-10-11 12:06:32.524628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.983 qpair failed and we were unable to recover it. 00:29:29.983 [2024-10-11 12:06:32.524985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.983 [2024-10-11 12:06:32.525017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.983 qpair failed and we were unable to recover it. 00:29:29.983 [2024-10-11 12:06:32.525249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.983 [2024-10-11 12:06:32.525281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.983 qpair failed and we were unable to recover it. 
00:29:29.983 [2024-10-11 12:06:32.525636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.983 [2024-10-11 12:06:32.525668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420
00:29:29.983 qpair failed and we were unable to recover it.
[... this three-message sequence for tqpair=0xc817c0 (addr=10.0.0.2, port=4420) repeats continuously, with only the timestamps advancing, through 2024-10-11 12:06:32.606118; the duplicate entries are elided ...]
00:29:29.988 [2024-10-11 12:06:32.606486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.988 [2024-10-11 12:06:32.606524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.988 qpair failed and we were unable to recover it. 00:29:29.988 [2024-10-11 12:06:32.606875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.988 [2024-10-11 12:06:32.606907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.988 qpair failed and we were unable to recover it. 00:29:29.988 [2024-10-11 12:06:32.607277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.988 [2024-10-11 12:06:32.607310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.988 qpair failed and we were unable to recover it. 00:29:29.988 [2024-10-11 12:06:32.607662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.988 [2024-10-11 12:06:32.607696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.988 qpair failed and we were unable to recover it. 00:29:29.988 [2024-10-11 12:06:32.608048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.988 [2024-10-11 12:06:32.608092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.988 qpair failed and we were unable to recover it. 00:29:29.988 [2024-10-11 12:06:32.608443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.988 [2024-10-11 12:06:32.608475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.988 qpair failed and we were unable to recover it. 00:29:29.988 [2024-10-11 12:06:32.608830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.988 [2024-10-11 12:06:32.608862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.988 qpair failed and we were unable to recover it. 00:29:29.988 [2024-10-11 12:06:32.609216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.988 [2024-10-11 12:06:32.609250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.988 qpair failed and we were unable to recover it. 00:29:29.988 [2024-10-11 12:06:32.609584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.988 [2024-10-11 12:06:32.609616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.988 qpair failed and we were unable to recover it. 00:29:29.988 [2024-10-11 12:06:32.609971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.988 [2024-10-11 12:06:32.610004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.988 qpair failed and we were unable to recover it. 
00:29:29.988 [2024-10-11 12:06:32.610381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.988 [2024-10-11 12:06:32.610415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.988 qpair failed and we were unable to recover it. 00:29:29.988 [2024-10-11 12:06:32.610847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.988 [2024-10-11 12:06:32.610879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.988 qpair failed and we were unable to recover it. 00:29:29.988 [2024-10-11 12:06:32.611225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.988 [2024-10-11 12:06:32.611260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.988 qpair failed and we were unable to recover it. 00:29:29.988 [2024-10-11 12:06:32.611618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.988 [2024-10-11 12:06:32.611649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.988 qpair failed and we were unable to recover it. 00:29:29.988 [2024-10-11 12:06:32.611898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.988 [2024-10-11 12:06:32.611932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.988 qpair failed and we were unable to recover it. 00:29:29.988 [2024-10-11 12:06:32.612297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.988 [2024-10-11 12:06:32.612329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.988 qpair failed and we were unable to recover it. 00:29:29.988 [2024-10-11 12:06:32.612702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.988 [2024-10-11 12:06:32.612734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.988 qpair failed and we were unable to recover it. 00:29:29.988 [2024-10-11 12:06:32.613096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.988 [2024-10-11 12:06:32.613127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.988 qpair failed and we were unable to recover it. 00:29:29.988 [2024-10-11 12:06:32.613493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.988 [2024-10-11 12:06:32.613525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.988 qpair failed and we were unable to recover it. 00:29:29.988 [2024-10-11 12:06:32.613883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.989 [2024-10-11 12:06:32.613920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.989 qpair failed and we were unable to recover it. 
00:29:29.989 [2024-10-11 12:06:32.614296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.989 [2024-10-11 12:06:32.614329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.989 qpair failed and we were unable to recover it. 00:29:29.989 [2024-10-11 12:06:32.614686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.989 [2024-10-11 12:06:32.614717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.989 qpair failed and we were unable to recover it. 00:29:29.989 [2024-10-11 12:06:32.614971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.989 [2024-10-11 12:06:32.615004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.989 qpair failed and we were unable to recover it. 00:29:29.989 [2024-10-11 12:06:32.615392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.989 [2024-10-11 12:06:32.615427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.989 qpair failed and we were unable to recover it. 00:29:29.989 [2024-10-11 12:06:32.615785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.989 [2024-10-11 12:06:32.615818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.989 qpair failed and we were unable to recover it. 00:29:29.989 [2024-10-11 12:06:32.616173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.989 [2024-10-11 12:06:32.616207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.989 qpair failed and we were unable to recover it. 00:29:29.989 [2024-10-11 12:06:32.616579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.989 [2024-10-11 12:06:32.616612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.989 qpair failed and we were unable to recover it. 00:29:29.989 [2024-10-11 12:06:32.616969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.989 [2024-10-11 12:06:32.617002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.989 qpair failed and we were unable to recover it. 00:29:29.989 [2024-10-11 12:06:32.617481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.989 [2024-10-11 12:06:32.617516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.989 qpair failed and we were unable to recover it. 00:29:29.989 [2024-10-11 12:06:32.617865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.989 [2024-10-11 12:06:32.617898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.989 qpair failed and we were unable to recover it. 
00:29:29.989 [2024-10-11 12:06:32.618233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.989 [2024-10-11 12:06:32.618265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.989 qpair failed and we were unable to recover it. 00:29:29.989 [2024-10-11 12:06:32.618517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.989 [2024-10-11 12:06:32.618557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.989 qpair failed and we were unable to recover it. 00:29:29.989 [2024-10-11 12:06:32.618927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.989 [2024-10-11 12:06:32.618962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.989 qpair failed and we were unable to recover it. 00:29:29.989 [2024-10-11 12:06:32.619297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.989 [2024-10-11 12:06:32.619330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.989 qpair failed and we were unable to recover it. 00:29:29.989 [2024-10-11 12:06:32.619709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.989 [2024-10-11 12:06:32.619742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.989 qpair failed and we were unable to recover it. 00:29:29.989 [2024-10-11 12:06:32.620107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.989 [2024-10-11 12:06:32.620141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.989 qpair failed and we were unable to recover it. 00:29:29.989 [2024-10-11 12:06:32.620419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.989 [2024-10-11 12:06:32.620449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.989 qpair failed and we were unable to recover it. 00:29:29.989 [2024-10-11 12:06:32.620724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.989 [2024-10-11 12:06:32.620755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.989 qpair failed and we were unable to recover it. 00:29:29.989 [2024-10-11 12:06:32.621133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.989 [2024-10-11 12:06:32.621170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.989 qpair failed and we were unable to recover it. 00:29:29.989 [2024-10-11 12:06:32.621529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.989 [2024-10-11 12:06:32.621561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.989 qpair failed and we were unable to recover it. 
00:29:29.989 [2024-10-11 12:06:32.621784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.989 [2024-10-11 12:06:32.621823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.989 qpair failed and we were unable to recover it. 00:29:29.989 [2024-10-11 12:06:32.622233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.989 [2024-10-11 12:06:32.622281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.989 qpair failed and we were unable to recover it. 00:29:29.989 [2024-10-11 12:06:32.622636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.989 [2024-10-11 12:06:32.622669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.989 qpair failed and we were unable to recover it. 00:29:29.989 [2024-10-11 12:06:32.623047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.989 [2024-10-11 12:06:32.623101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.989 qpair failed and we were unable to recover it. 00:29:29.989 [2024-10-11 12:06:32.623332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.989 [2024-10-11 12:06:32.623364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.989 qpair failed and we were unable to recover it. 00:29:29.989 [2024-10-11 12:06:32.623720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.989 [2024-10-11 12:06:32.623752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.989 qpair failed and we were unable to recover it. 00:29:29.989 [2024-10-11 12:06:32.624118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.989 [2024-10-11 12:06:32.624153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.989 qpair failed and we were unable to recover it. 00:29:29.989 [2024-10-11 12:06:32.624523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.989 [2024-10-11 12:06:32.624555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.989 qpair failed and we were unable to recover it. 00:29:29.989 [2024-10-11 12:06:32.624913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.989 [2024-10-11 12:06:32.624945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.989 qpair failed and we were unable to recover it. 00:29:29.989 [2024-10-11 12:06:32.625285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.989 [2024-10-11 12:06:32.625317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.989 qpair failed and we were unable to recover it. 
00:29:29.989 [2024-10-11 12:06:32.625675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.989 [2024-10-11 12:06:32.625707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.989 qpair failed and we were unable to recover it. 00:29:29.989 [2024-10-11 12:06:32.626075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.989 [2024-10-11 12:06:32.626109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.989 qpair failed and we were unable to recover it. 00:29:29.989 [2024-10-11 12:06:32.626460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.989 [2024-10-11 12:06:32.626493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.989 qpair failed and we were unable to recover it. 00:29:29.989 [2024-10-11 12:06:32.626844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.989 [2024-10-11 12:06:32.626876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.989 qpair failed and we were unable to recover it. 00:29:29.989 [2024-10-11 12:06:32.627236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.989 [2024-10-11 12:06:32.627270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.989 qpair failed and we were unable to recover it. 00:29:29.989 [2024-10-11 12:06:32.627671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.989 [2024-10-11 12:06:32.627703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.989 qpair failed and we were unable to recover it. 00:29:29.989 [2024-10-11 12:06:32.628084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.989 [2024-10-11 12:06:32.628117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.989 qpair failed and we were unable to recover it. 00:29:29.989 [2024-10-11 12:06:32.628465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.989 [2024-10-11 12:06:32.628498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.989 qpair failed and we were unable to recover it. 00:29:29.989 [2024-10-11 12:06:32.628853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.989 [2024-10-11 12:06:32.628884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.989 qpair failed and we were unable to recover it. 00:29:29.989 [2024-10-11 12:06:32.629241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.989 [2024-10-11 12:06:32.629276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.989 qpair failed and we were unable to recover it. 
00:29:29.989 [2024-10-11 12:06:32.629635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.990 [2024-10-11 12:06:32.629667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.990 qpair failed and we were unable to recover it. 00:29:29.990 [2024-10-11 12:06:32.630025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.990 [2024-10-11 12:06:32.630059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.990 qpair failed and we were unable to recover it. 00:29:29.990 [2024-10-11 12:06:32.630402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.990 [2024-10-11 12:06:32.630434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.990 qpair failed and we were unable to recover it. 00:29:29.990 [2024-10-11 12:06:32.630787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.990 [2024-10-11 12:06:32.630821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.990 qpair failed and we were unable to recover it. 00:29:29.990 [2024-10-11 12:06:32.631186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.990 [2024-10-11 12:06:32.631218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.990 qpair failed and we were unable to recover it. 00:29:29.990 [2024-10-11 12:06:32.631578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.990 [2024-10-11 12:06:32.631612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.990 qpair failed and we were unable to recover it. 00:29:29.990 [2024-10-11 12:06:32.631966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.990 [2024-10-11 12:06:32.631996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.990 qpair failed and we were unable to recover it. 00:29:29.990 [2024-10-11 12:06:32.632357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.990 [2024-10-11 12:06:32.632389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.990 qpair failed and we were unable to recover it. 00:29:29.990 [2024-10-11 12:06:32.632756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.990 [2024-10-11 12:06:32.632797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.990 qpair failed and we were unable to recover it. 00:29:29.990 [2024-10-11 12:06:32.633144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.990 [2024-10-11 12:06:32.633179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.990 qpair failed and we were unable to recover it. 
00:29:29.990 [2024-10-11 12:06:32.633536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.990 [2024-10-11 12:06:32.633568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.990 qpair failed and we were unable to recover it. 00:29:29.990 [2024-10-11 12:06:32.633930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.990 [2024-10-11 12:06:32.633963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.990 qpair failed and we were unable to recover it. 00:29:29.990 [2024-10-11 12:06:32.634329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.990 [2024-10-11 12:06:32.634362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.990 qpair failed and we were unable to recover it. 00:29:29.990 [2024-10-11 12:06:32.634709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.990 [2024-10-11 12:06:32.634742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.990 qpair failed and we were unable to recover it. 00:29:29.990 [2024-10-11 12:06:32.635098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.990 [2024-10-11 12:06:32.635133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.990 qpair failed and we were unable to recover it. 00:29:29.990 [2024-10-11 12:06:32.635493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.990 [2024-10-11 12:06:32.635525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.990 qpair failed and we were unable to recover it. 00:29:29.990 [2024-10-11 12:06:32.635884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.990 [2024-10-11 12:06:32.635917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.990 qpair failed and we were unable to recover it. 00:29:29.990 [2024-10-11 12:06:32.636249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.990 [2024-10-11 12:06:32.636281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.990 qpair failed and we were unable to recover it. 00:29:29.990 [2024-10-11 12:06:32.636669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.990 [2024-10-11 12:06:32.636700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.990 qpair failed and we were unable to recover it. 00:29:29.990 [2024-10-11 12:06:32.637074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.990 [2024-10-11 12:06:32.637109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.990 qpair failed and we were unable to recover it. 
00:29:29.990 [2024-10-11 12:06:32.637470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.990 [2024-10-11 12:06:32.637504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.990 qpair failed and we were unable to recover it. 00:29:29.990 [2024-10-11 12:06:32.637861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.990 [2024-10-11 12:06:32.637892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.990 qpair failed and we were unable to recover it. 00:29:29.990 [2024-10-11 12:06:32.638153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.990 [2024-10-11 12:06:32.638189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.990 qpair failed and we were unable to recover it. 00:29:29.990 [2024-10-11 12:06:32.638533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.990 [2024-10-11 12:06:32.638565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.990 qpair failed and we were unable to recover it. 00:29:29.990 [2024-10-11 12:06:32.638912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.990 [2024-10-11 12:06:32.638945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.990 qpair failed and we were unable to recover it. 00:29:29.990 [2024-10-11 12:06:32.639319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.990 [2024-10-11 12:06:32.639352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.990 qpair failed and we were unable to recover it. 00:29:29.990 [2024-10-11 12:06:32.639713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.990 [2024-10-11 12:06:32.639745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.990 qpair failed and we were unable to recover it. 00:29:29.990 [2024-10-11 12:06:32.640099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.990 [2024-10-11 12:06:32.640133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.990 qpair failed and we were unable to recover it. 00:29:29.990 [2024-10-11 12:06:32.640392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.990 [2024-10-11 12:06:32.640423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.990 qpair failed and we were unable to recover it. 00:29:29.990 [2024-10-11 12:06:32.640774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.990 [2024-10-11 12:06:32.640808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.990 qpair failed and we were unable to recover it. 
00:29:29.990 [2024-10-11 12:06:32.641173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.990 [2024-10-11 12:06:32.641206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.990 qpair failed and we were unable to recover it. 00:29:29.990 [2024-10-11 12:06:32.641561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.990 [2024-10-11 12:06:32.641596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.990 qpair failed and we were unable to recover it. 00:29:29.990 [2024-10-11 12:06:32.641952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.990 [2024-10-11 12:06:32.641986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.990 qpair failed and we were unable to recover it. 00:29:29.990 [2024-10-11 12:06:32.642339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.990 [2024-10-11 12:06:32.642374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.990 qpair failed and we were unable to recover it. 00:29:29.990 [2024-10-11 12:06:32.642732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.990 [2024-10-11 12:06:32.642765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.990 qpair failed and we were unable to recover it. 00:29:29.990 [2024-10-11 12:06:32.643129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.990 [2024-10-11 12:06:32.643161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.990 qpair failed and we were unable to recover it. 00:29:29.990 [2024-10-11 12:06:32.643542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.991 [2024-10-11 12:06:32.643576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.991 qpair failed and we were unable to recover it. 00:29:29.991 [2024-10-11 12:06:32.643924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.991 [2024-10-11 12:06:32.643956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.991 qpair failed and we were unable to recover it. 00:29:29.991 [2024-10-11 12:06:32.644326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.991 [2024-10-11 12:06:32.644360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.991 qpair failed and we were unable to recover it. 00:29:29.991 [2024-10-11 12:06:32.644603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.991 [2024-10-11 12:06:32.644634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.991 qpair failed and we were unable to recover it. 
00:29:29.991 [2024-10-11 12:06:32.644987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.991 [2024-10-11 12:06:32.645019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.991 qpair failed and we were unable to recover it. 00:29:29.991 [2024-10-11 12:06:32.645375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.991 [2024-10-11 12:06:32.645409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.991 qpair failed and we were unable to recover it. 00:29:29.991 [2024-10-11 12:06:32.645762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.991 [2024-10-11 12:06:32.645796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.991 qpair failed and we were unable to recover it. 00:29:29.991 [2024-10-11 12:06:32.646140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.991 [2024-10-11 12:06:32.646173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.991 qpair failed and we were unable to recover it. 00:29:29.991 [2024-10-11 12:06:32.646523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.991 [2024-10-11 12:06:32.646553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.991 qpair failed and we were unable to recover it. 00:29:29.991 [2024-10-11 12:06:32.646920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.991 [2024-10-11 12:06:32.646951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.991 qpair failed and we were unable to recover it. 00:29:29.991 [2024-10-11 12:06:32.647305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.991 [2024-10-11 12:06:32.647338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.991 qpair failed and we were unable to recover it. 00:29:29.991 [2024-10-11 12:06:32.647694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.991 [2024-10-11 12:06:32.647727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.991 qpair failed and we were unable to recover it. 00:29:29.991 [2024-10-11 12:06:32.648085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.991 [2024-10-11 12:06:32.648124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.991 qpair failed and we were unable to recover it. 00:29:29.991 [2024-10-11 12:06:32.648469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.991 [2024-10-11 12:06:32.648507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.991 qpair failed and we were unable to recover it. 
00:29:29.991 [2024-10-11 12:06:32.648862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.991 [2024-10-11 12:06:32.648892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.991 qpair failed and we were unable to recover it. 00:29:29.991 [2024-10-11 12:06:32.649246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.991 [2024-10-11 12:06:32.649279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.991 qpair failed and we were unable to recover it. 00:29:29.991 [2024-10-11 12:06:32.649679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.991 [2024-10-11 12:06:32.649712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.991 qpair failed and we were unable to recover it. 00:29:29.991 [2024-10-11 12:06:32.650075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.991 [2024-10-11 12:06:32.650109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.991 qpair failed and we were unable to recover it. 00:29:29.991 [2024-10-11 12:06:32.650543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.991 [2024-10-11 12:06:32.650575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.991 qpair failed and we were unable to recover it. 00:29:29.991 [2024-10-11 12:06:32.650933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.991 [2024-10-11 12:06:32.650963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.991 qpair failed and we were unable to recover it. 00:29:29.991 [2024-10-11 12:06:32.651322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.991 [2024-10-11 12:06:32.651355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.991 qpair failed and we were unable to recover it. 00:29:29.991 [2024-10-11 12:06:32.651714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.991 [2024-10-11 12:06:32.651744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.991 qpair failed and we were unable to recover it. 00:29:29.991 [2024-10-11 12:06:32.652098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.991 [2024-10-11 12:06:32.652131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.991 qpair failed and we were unable to recover it. 00:29:29.991 [2024-10-11 12:06:32.652491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.991 [2024-10-11 12:06:32.652525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.991 qpair failed and we were unable to recover it. 
00:29:29.991 [2024-10-11 12:06:32.652861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.991 [2024-10-11 12:06:32.652892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.991 qpair failed and we were unable to recover it. 00:29:29.991 [2024-10-11 12:06:32.653250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.991 [2024-10-11 12:06:32.653283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.991 qpair failed and we were unable to recover it. 00:29:29.991 [2024-10-11 12:06:32.653640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.991 [2024-10-11 12:06:32.653671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.991 qpair failed and we were unable to recover it. 00:29:29.991 [2024-10-11 12:06:32.653913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.991 [2024-10-11 12:06:32.653946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.991 qpair failed and we were unable to recover it. 00:29:29.991 [2024-10-11 12:06:32.654310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.991 [2024-10-11 12:06:32.654343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.991 qpair failed and we were unable to recover it. 00:29:29.991 [2024-10-11 12:06:32.654571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.991 [2024-10-11 12:06:32.654604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.991 qpair failed and we were unable to recover it. 00:29:29.991 [2024-10-11 12:06:32.654949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.991 [2024-10-11 12:06:32.654981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.991 qpair failed and we were unable to recover it. 00:29:29.991 [2024-10-11 12:06:32.655339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.991 [2024-10-11 12:06:32.655373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.991 qpair failed and we were unable to recover it. 00:29:29.991 [2024-10-11 12:06:32.655730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.991 [2024-10-11 12:06:32.655762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.991 qpair failed and we were unable to recover it. 00:29:29.991 [2024-10-11 12:06:32.656120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.991 [2024-10-11 12:06:32.656152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:29.991 qpair failed and we were unable to recover it. 
00:29:29.991 [2024-10-11 12:06:32.656505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.991 [2024-10-11 12:06:32.656536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420
00:29:29.991 qpair failed and we were unable to recover it.
00:29:29.991 - 00:29:30.272 [2024-10-11 12:06:32.656 - 12:06:32.736] the three-line error sequence above repeats for every subsequent connection attempt in this interval: each connect() fails with errno = 111 (ECONNREFUSED), each attempt logs a sock connection error for tqpair=0xc817c0 with addr=10.0.0.2, port=4420, and every qpair fails and is not recovered.
00:29:30.272 [2024-10-11 12:06:32.736505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.272 [2024-10-11 12:06:32.736537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.272 qpair failed and we were unable to recover it. 00:29:30.272 [2024-10-11 12:06:32.736896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.272 [2024-10-11 12:06:32.736927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.272 qpair failed and we were unable to recover it. 00:29:30.272 [2024-10-11 12:06:32.737207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.273 [2024-10-11 12:06:32.737242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.273 qpair failed and we were unable to recover it. 00:29:30.273 [2024-10-11 12:06:32.737593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.273 [2024-10-11 12:06:32.737626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.273 qpair failed and we were unable to recover it. 00:29:30.273 [2024-10-11 12:06:32.737992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.273 [2024-10-11 12:06:32.738024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.273 qpair failed and we were unable to recover it. 00:29:30.273 [2024-10-11 12:06:32.738367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.273 [2024-10-11 12:06:32.738399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.273 qpair failed and we were unable to recover it. 00:29:30.273 [2024-10-11 12:06:32.738751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.273 [2024-10-11 12:06:32.738785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.273 qpair failed and we were unable to recover it. 00:29:30.273 [2024-10-11 12:06:32.739145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.273 [2024-10-11 12:06:32.739179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.273 qpair failed and we were unable to recover it. 00:29:30.273 [2024-10-11 12:06:32.739576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.273 [2024-10-11 12:06:32.739609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.273 qpair failed and we were unable to recover it. 00:29:30.273 [2024-10-11 12:06:32.739949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.273 [2024-10-11 12:06:32.739982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.273 qpair failed and we were unable to recover it. 
00:29:30.273 [2024-10-11 12:06:32.740333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.273 [2024-10-11 12:06:32.740367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.273 qpair failed and we were unable to recover it. 00:29:30.273 [2024-10-11 12:06:32.740725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.273 [2024-10-11 12:06:32.740757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.273 qpair failed and we were unable to recover it. 00:29:30.273 [2024-10-11 12:06:32.741109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.273 [2024-10-11 12:06:32.741143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.273 qpair failed and we were unable to recover it. 00:29:30.273 [2024-10-11 12:06:32.741498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.273 [2024-10-11 12:06:32.741531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.273 qpair failed and we were unable to recover it. 00:29:30.273 [2024-10-11 12:06:32.741888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.273 [2024-10-11 12:06:32.741926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.273 qpair failed and we were unable to recover it. 00:29:30.273 [2024-10-11 12:06:32.742289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.273 [2024-10-11 12:06:32.742324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.273 qpair failed and we were unable to recover it. 00:29:30.273 [2024-10-11 12:06:32.742674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.273 [2024-10-11 12:06:32.742708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.273 qpair failed and we were unable to recover it. 00:29:30.273 [2024-10-11 12:06:32.743089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.273 [2024-10-11 12:06:32.743124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.273 qpair failed and we were unable to recover it. 00:29:30.273 [2024-10-11 12:06:32.743481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.273 [2024-10-11 12:06:32.743518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.273 qpair failed and we were unable to recover it. 00:29:30.273 [2024-10-11 12:06:32.743900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.273 [2024-10-11 12:06:32.743934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.273 qpair failed and we were unable to recover it. 
00:29:30.273 [2024-10-11 12:06:32.744293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.273 [2024-10-11 12:06:32.744328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.273 qpair failed and we were unable to recover it. 00:29:30.273 [2024-10-11 12:06:32.744687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.273 [2024-10-11 12:06:32.744720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.273 qpair failed and we were unable to recover it. 00:29:30.273 [2024-10-11 12:06:32.745148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.273 [2024-10-11 12:06:32.745182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.273 qpair failed and we were unable to recover it. 00:29:30.273 [2024-10-11 12:06:32.745549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.273 [2024-10-11 12:06:32.745582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.273 qpair failed and we were unable to recover it. 00:29:30.273 [2024-10-11 12:06:32.745925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.273 [2024-10-11 12:06:32.745958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.273 qpair failed and we were unable to recover it. 00:29:30.273 [2024-10-11 12:06:32.746311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.273 [2024-10-11 12:06:32.746345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.273 qpair failed and we were unable to recover it. 00:29:30.273 [2024-10-11 12:06:32.746778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.273 [2024-10-11 12:06:32.746811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.273 qpair failed and we were unable to recover it. 00:29:30.273 [2024-10-11 12:06:32.747247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.273 [2024-10-11 12:06:32.747280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.273 qpair failed and we were unable to recover it. 00:29:30.273 [2024-10-11 12:06:32.747678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.273 [2024-10-11 12:06:32.747711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.273 qpair failed and we were unable to recover it. 00:29:30.273 [2024-10-11 12:06:32.748141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.273 [2024-10-11 12:06:32.748175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.273 qpair failed and we were unable to recover it. 
00:29:30.273 [2024-10-11 12:06:32.748521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.273 [2024-10-11 12:06:32.748555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.273 qpair failed and we were unable to recover it. 00:29:30.273 [2024-10-11 12:06:32.748916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.273 [2024-10-11 12:06:32.748950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.273 qpair failed and we were unable to recover it. 00:29:30.273 [2024-10-11 12:06:32.749317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.273 [2024-10-11 12:06:32.749351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.273 qpair failed and we were unable to recover it. 00:29:30.273 [2024-10-11 12:06:32.749716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.273 [2024-10-11 12:06:32.749749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.273 qpair failed and we were unable to recover it. 00:29:30.273 [2024-10-11 12:06:32.750108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.273 [2024-10-11 12:06:32.750142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.273 qpair failed and we were unable to recover it. 00:29:30.273 [2024-10-11 12:06:32.750498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.273 [2024-10-11 12:06:32.750531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.273 qpair failed and we were unable to recover it. 00:29:30.273 [2024-10-11 12:06:32.750884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.273 [2024-10-11 12:06:32.750917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.273 qpair failed and we were unable to recover it. 00:29:30.273 [2024-10-11 12:06:32.751279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.273 [2024-10-11 12:06:32.751314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.273 qpair failed and we were unable to recover it. 00:29:30.273 [2024-10-11 12:06:32.751695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.273 [2024-10-11 12:06:32.751729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.273 qpair failed and we were unable to recover it. 00:29:30.273 [2024-10-11 12:06:32.752136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.273 [2024-10-11 12:06:32.752170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.273 qpair failed and we were unable to recover it. 
00:29:30.273 [2024-10-11 12:06:32.752524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.273 [2024-10-11 12:06:32.752557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.273 qpair failed and we were unable to recover it. 00:29:30.273 [2024-10-11 12:06:32.752911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.273 [2024-10-11 12:06:32.752949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.273 qpair failed and we were unable to recover it. 00:29:30.273 [2024-10-11 12:06:32.753310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.273 [2024-10-11 12:06:32.753344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.273 qpair failed and we were unable to recover it. 00:29:30.273 [2024-10-11 12:06:32.753704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.274 [2024-10-11 12:06:32.753737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.274 qpair failed and we were unable to recover it. 00:29:30.274 [2024-10-11 12:06:32.754139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.274 [2024-10-11 12:06:32.754173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.274 qpair failed and we were unable to recover it. 00:29:30.274 [2024-10-11 12:06:32.754524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.274 [2024-10-11 12:06:32.754557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.274 qpair failed and we were unable to recover it. 00:29:30.274 [2024-10-11 12:06:32.754908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.274 [2024-10-11 12:06:32.754941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.274 qpair failed and we were unable to recover it. 00:29:30.274 [2024-10-11 12:06:32.755199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.274 [2024-10-11 12:06:32.755233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.274 qpair failed and we were unable to recover it. 00:29:30.274 [2024-10-11 12:06:32.755610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.274 [2024-10-11 12:06:32.755644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.274 qpair failed and we were unable to recover it. 00:29:30.274 [2024-10-11 12:06:32.755994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.274 [2024-10-11 12:06:32.756027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.274 qpair failed and we were unable to recover it. 
00:29:30.274 [2024-10-11 12:06:32.756429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.274 [2024-10-11 12:06:32.756464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.274 qpair failed and we were unable to recover it. 00:29:30.274 [2024-10-11 12:06:32.756824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.274 [2024-10-11 12:06:32.756858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.274 qpair failed and we were unable to recover it. 00:29:30.274 [2024-10-11 12:06:32.757207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.274 [2024-10-11 12:06:32.757242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.274 qpair failed and we were unable to recover it. 00:29:30.274 [2024-10-11 12:06:32.757599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.274 [2024-10-11 12:06:32.757632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.274 qpair failed and we were unable to recover it. 00:29:30.274 [2024-10-11 12:06:32.757988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.274 [2024-10-11 12:06:32.758020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.274 qpair failed and we were unable to recover it. 00:29:30.274 [2024-10-11 12:06:32.758394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.274 [2024-10-11 12:06:32.758429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.274 qpair failed and we were unable to recover it. 00:29:30.274 [2024-10-11 12:06:32.758782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.274 [2024-10-11 12:06:32.758812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.274 qpair failed and we were unable to recover it. 00:29:30.274 [2024-10-11 12:06:32.759191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.274 [2024-10-11 12:06:32.759224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.274 qpair failed and we were unable to recover it. 00:29:30.274 [2024-10-11 12:06:32.759597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.274 [2024-10-11 12:06:32.759630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.274 qpair failed and we were unable to recover it. 00:29:30.274 [2024-10-11 12:06:32.759985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.274 [2024-10-11 12:06:32.760018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.274 qpair failed and we were unable to recover it. 
00:29:30.274 [2024-10-11 12:06:32.760377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.274 [2024-10-11 12:06:32.760409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.274 qpair failed and we were unable to recover it. 00:29:30.274 [2024-10-11 12:06:32.760653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.274 [2024-10-11 12:06:32.760684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.274 qpair failed and we were unable to recover it. 00:29:30.274 [2024-10-11 12:06:32.761032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.274 [2024-10-11 12:06:32.761074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.274 qpair failed and we were unable to recover it. 00:29:30.274 [2024-10-11 12:06:32.761403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.274 [2024-10-11 12:06:32.761433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.274 qpair failed and we were unable to recover it. 00:29:30.274 [2024-10-11 12:06:32.761682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.274 [2024-10-11 12:06:32.761715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.274 qpair failed and we were unable to recover it. 00:29:30.274 [2024-10-11 12:06:32.761958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.274 [2024-10-11 12:06:32.761987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.274 qpair failed and we were unable to recover it. 00:29:30.274 [2024-10-11 12:06:32.762355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.274 [2024-10-11 12:06:32.762389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.274 qpair failed and we were unable to recover it. 00:29:30.274 [2024-10-11 12:06:32.762752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.274 [2024-10-11 12:06:32.762784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.274 qpair failed and we were unable to recover it. 00:29:30.274 [2024-10-11 12:06:32.763143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.274 [2024-10-11 12:06:32.763176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.274 qpair failed and we were unable to recover it. 00:29:30.274 [2024-10-11 12:06:32.763528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.274 [2024-10-11 12:06:32.763561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.274 qpair failed and we were unable to recover it. 
00:29:30.274 [2024-10-11 12:06:32.763922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.274 [2024-10-11 12:06:32.763954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.274 qpair failed and we were unable to recover it. 00:29:30.274 [2024-10-11 12:06:32.764275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.274 [2024-10-11 12:06:32.764306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.274 qpair failed and we were unable to recover it. 00:29:30.274 [2024-10-11 12:06:32.764646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.274 [2024-10-11 12:06:32.764678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.274 qpair failed and we were unable to recover it. 00:29:30.274 [2024-10-11 12:06:32.765019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.274 [2024-10-11 12:06:32.765051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.274 qpair failed and we were unable to recover it. 00:29:30.274 [2024-10-11 12:06:32.765443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.274 [2024-10-11 12:06:32.765475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.274 qpair failed and we were unable to recover it. 00:29:30.274 [2024-10-11 12:06:32.765753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.274 [2024-10-11 12:06:32.765783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.274 qpair failed and we were unable to recover it. 00:29:30.274 [2024-10-11 12:06:32.766152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.274 [2024-10-11 12:06:32.766186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.274 qpair failed and we were unable to recover it. 00:29:30.274 [2024-10-11 12:06:32.766551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.274 [2024-10-11 12:06:32.766583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.274 qpair failed and we were unable to recover it. 00:29:30.274 [2024-10-11 12:06:32.766934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.274 [2024-10-11 12:06:32.766968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.274 qpair failed and we were unable to recover it. 00:29:30.274 [2024-10-11 12:06:32.767324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.274 [2024-10-11 12:06:32.767356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.274 qpair failed and we were unable to recover it. 
00:29:30.274 [2024-10-11 12:06:32.767717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.274 [2024-10-11 12:06:32.767748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.274 qpair failed and we were unable to recover it. 00:29:30.274 [2024-10-11 12:06:32.768117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.274 [2024-10-11 12:06:32.768149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.274 qpair failed and we were unable to recover it. 00:29:30.274 [2024-10-11 12:06:32.768527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.274 [2024-10-11 12:06:32.768565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.274 qpair failed and we were unable to recover it. 00:29:30.274 [2024-10-11 12:06:32.768825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.274 [2024-10-11 12:06:32.768860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.274 qpair failed and we were unable to recover it. 00:29:30.275 [2024-10-11 12:06:32.769214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.275 [2024-10-11 12:06:32.769248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.275 qpair failed and we were unable to recover it. 00:29:30.275 [2024-10-11 12:06:32.769607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.275 [2024-10-11 12:06:32.769640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.275 qpair failed and we were unable to recover it. 00:29:30.275 [2024-10-11 12:06:32.769995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.275 [2024-10-11 12:06:32.770026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.275 qpair failed and we were unable to recover it. 00:29:30.275 [2024-10-11 12:06:32.770407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.275 [2024-10-11 12:06:32.770440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.275 qpair failed and we were unable to recover it. 00:29:30.275 [2024-10-11 12:06:32.770814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.275 [2024-10-11 12:06:32.770845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.275 qpair failed and we were unable to recover it. 00:29:30.275 [2024-10-11 12:06:32.771201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.275 [2024-10-11 12:06:32.771234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.275 qpair failed and we were unable to recover it. 
00:29:30.275 [2024-10-11 12:06:32.771600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.275 [2024-10-11 12:06:32.771633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.275 qpair failed and we were unable to recover it. 00:29:30.275 [2024-10-11 12:06:32.771981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.275 [2024-10-11 12:06:32.772013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.275 qpair failed and we were unable to recover it. 00:29:30.275 [2024-10-11 12:06:32.772416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.275 [2024-10-11 12:06:32.772449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.275 qpair failed and we were unable to recover it. 00:29:30.275 [2024-10-11 12:06:32.772790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.275 [2024-10-11 12:06:32.772821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.275 qpair failed and we were unable to recover it. 00:29:30.275 [2024-10-11 12:06:32.773182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.275 [2024-10-11 12:06:32.773215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.275 qpair failed and we were unable to recover it. 00:29:30.275 [2024-10-11 12:06:32.773569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.275 [2024-10-11 12:06:32.773600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.275 qpair failed and we were unable to recover it. 00:29:30.275 [2024-10-11 12:06:32.773969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.275 [2024-10-11 12:06:32.774002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.275 qpair failed and we were unable to recover it. 00:29:30.275 [2024-10-11 12:06:32.774399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.275 [2024-10-11 12:06:32.774435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.275 qpair failed and we were unable to recover it. 00:29:30.275 [2024-10-11 12:06:32.774781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.275 [2024-10-11 12:06:32.774813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.275 qpair failed and we were unable to recover it. 00:29:30.275 [2024-10-11 12:06:32.775181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.275 [2024-10-11 12:06:32.775214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.275 qpair failed and we were unable to recover it. 
00:29:30.275 [2024-10-11 12:06:32.775480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.275 [2024-10-11 12:06:32.775515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.275 qpair failed and we were unable to recover it. 00:29:30.275 [2024-10-11 12:06:32.775862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.275 [2024-10-11 12:06:32.775895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.275 qpair failed and we were unable to recover it. 00:29:30.275 [2024-10-11 12:06:32.776162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.275 [2024-10-11 12:06:32.776195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.275 qpair failed and we were unable to recover it. 00:29:30.275 [2024-10-11 12:06:32.776563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.275 [2024-10-11 12:06:32.776593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.275 qpair failed and we were unable to recover it. 00:29:30.275 [2024-10-11 12:06:32.776960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.275 [2024-10-11 12:06:32.776993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.275 qpair failed and we were unable to recover it. 00:29:30.275 [2024-10-11 12:06:32.777362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.275 [2024-10-11 12:06:32.777393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.275 qpair failed and we were unable to recover it. 00:29:30.275 [2024-10-11 12:06:32.777746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.275 [2024-10-11 12:06:32.777778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.275 qpair failed and we were unable to recover it. 00:29:30.275 [2024-10-11 12:06:32.778132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.275 [2024-10-11 12:06:32.778164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.275 qpair failed and we were unable to recover it. 00:29:30.275 [2024-10-11 12:06:32.778526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.275 [2024-10-11 12:06:32.778558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.275 qpair failed and we were unable to recover it. 00:29:30.275 [2024-10-11 12:06:32.778917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.275 [2024-10-11 12:06:32.778947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.275 qpair failed and we were unable to recover it. 
00:29:30.275 [2024-10-11 12:06:32.779320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.275 [2024-10-11 12:06:32.779353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.275 qpair failed and we were unable to recover it. 00:29:30.275 [2024-10-11 12:06:32.779600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.275 [2024-10-11 12:06:32.779634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.275 qpair failed and we were unable to recover it. 00:29:30.275 [2024-10-11 12:06:32.779981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.275 [2024-10-11 12:06:32.780012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.275 qpair failed and we were unable to recover it. 00:29:30.275 [2024-10-11 12:06:32.780370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.275 [2024-10-11 12:06:32.780402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.275 qpair failed and we were unable to recover it. 00:29:30.275 [2024-10-11 12:06:32.780760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.275 [2024-10-11 12:06:32.780792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.275 qpair failed and we were unable to recover it. 00:29:30.275 [2024-10-11 12:06:32.781143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.275 [2024-10-11 12:06:32.781174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.275 qpair failed and we were unable to recover it. 00:29:30.275 [2024-10-11 12:06:32.781540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.275 [2024-10-11 12:06:32.781572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.275 qpair failed and we were unable to recover it. 00:29:30.275 [2024-10-11 12:06:32.781934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.275 [2024-10-11 12:06:32.781965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.275 qpair failed and we were unable to recover it. 00:29:30.275 [2024-10-11 12:06:32.782317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.275 [2024-10-11 12:06:32.782351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.275 qpair failed and we were unable to recover it. 00:29:30.275 [2024-10-11 12:06:32.782681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.275 [2024-10-11 12:06:32.782714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.275 qpair failed and we were unable to recover it. 
00:29:30.275 [2024-10-11 12:06:32.783084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.275 [2024-10-11 12:06:32.783120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.275 qpair failed and we were unable to recover it. 00:29:30.275 [2024-10-11 12:06:32.783469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.275 [2024-10-11 12:06:32.783501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.275 qpair failed and we were unable to recover it. 00:29:30.275 [2024-10-11 12:06:32.783856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.275 [2024-10-11 12:06:32.783886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.275 qpair failed and we were unable to recover it. 00:29:30.275 [2024-10-11 12:06:32.784252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.275 [2024-10-11 12:06:32.784286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.275 qpair failed and we were unable to recover it. 00:29:30.275 [2024-10-11 12:06:32.784638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.275 [2024-10-11 12:06:32.784668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.276 qpair failed and we were unable to recover it. 00:29:30.276 [2024-10-11 12:06:32.785030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.276 [2024-10-11 12:06:32.785073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.276 qpair failed and we were unable to recover it. 00:29:30.276 [2024-10-11 12:06:32.785413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.276 [2024-10-11 12:06:32.785444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.276 qpair failed and we were unable to recover it. 00:29:30.276 [2024-10-11 12:06:32.785802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.276 [2024-10-11 12:06:32.785833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.276 qpair failed and we were unable to recover it. 00:29:30.276 [2024-10-11 12:06:32.786177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.276 [2024-10-11 12:06:32.786209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.276 qpair failed and we were unable to recover it. 00:29:30.276 [2024-10-11 12:06:32.786571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.276 [2024-10-11 12:06:32.786602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.276 qpair failed and we were unable to recover it. 
00:29:30.276 [2024-10-11 12:06:32.786954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.276 [2024-10-11 12:06:32.786985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.276 qpair failed and we were unable to recover it. 
00:29:30.281 [... the three error lines above repeat continuously, with only the sub-second timestamps advancing, from 2024-10-11 12:06:32.786954 through 2024-10-11 12:06:32.867553: every connection attempt for tqpair=0xc817c0 (addr=10.0.0.2, port=4420) fails in posix_sock_create with errno = 111 and ends with "qpair failed and we were unable to recover it." ...]
00:29:30.281 [2024-10-11 12:06:32.867912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.281 [2024-10-11 12:06:32.867945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.281 qpair failed and we were unable to recover it. 00:29:30.281 [2024-10-11 12:06:32.868313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.281 [2024-10-11 12:06:32.868347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.281 qpair failed and we were unable to recover it. 00:29:30.281 [2024-10-11 12:06:32.868654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.281 [2024-10-11 12:06:32.868688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.281 qpair failed and we were unable to recover it. 00:29:30.281 [2024-10-11 12:06:32.869035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.281 [2024-10-11 12:06:32.869093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.281 qpair failed and we were unable to recover it. 00:29:30.281 [2024-10-11 12:06:32.869346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.281 [2024-10-11 12:06:32.869380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.281 qpair failed and we were unable to recover it. 00:29:30.281 [2024-10-11 12:06:32.869769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.281 [2024-10-11 12:06:32.869801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.281 qpair failed and we were unable to recover it. 00:29:30.281 [2024-10-11 12:06:32.870147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.281 [2024-10-11 12:06:32.870182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.281 qpair failed and we were unable to recover it. 00:29:30.281 [2024-10-11 12:06:32.870550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.281 [2024-10-11 12:06:32.870582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.281 qpair failed and we were unable to recover it. 00:29:30.281 [2024-10-11 12:06:32.870949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.281 [2024-10-11 12:06:32.870979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.281 qpair failed and we were unable to recover it. 00:29:30.281 [2024-10-11 12:06:32.871350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.281 [2024-10-11 12:06:32.871383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.281 qpair failed and we were unable to recover it. 
00:29:30.281 [2024-10-11 12:06:32.871730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.281 [2024-10-11 12:06:32.871760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.281 qpair failed and we were unable to recover it. 00:29:30.281 [2024-10-11 12:06:32.872120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.281 [2024-10-11 12:06:32.872153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.281 qpair failed and we were unable to recover it. 00:29:30.281 [2024-10-11 12:06:32.872510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.281 [2024-10-11 12:06:32.872542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.281 qpair failed and we were unable to recover it. 00:29:30.281 [2024-10-11 12:06:32.872882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.281 [2024-10-11 12:06:32.872916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.281 qpair failed and we were unable to recover it. 00:29:30.281 [2024-10-11 12:06:32.873273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.281 [2024-10-11 12:06:32.873313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.281 qpair failed and we were unable to recover it. 00:29:30.281 [2024-10-11 12:06:32.873688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.281 [2024-10-11 12:06:32.873719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.281 qpair failed and we were unable to recover it. 00:29:30.281 [2024-10-11 12:06:32.874093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.281 [2024-10-11 12:06:32.874128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.281 qpair failed and we were unable to recover it. 00:29:30.281 [2024-10-11 12:06:32.874534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.281 [2024-10-11 12:06:32.874565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.281 qpair failed and we were unable to recover it. 00:29:30.281 [2024-10-11 12:06:32.874912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.281 [2024-10-11 12:06:32.874942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.281 qpair failed and we were unable to recover it. 00:29:30.281 [2024-10-11 12:06:32.875312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.282 [2024-10-11 12:06:32.875345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.282 qpair failed and we were unable to recover it. 
00:29:30.282 [2024-10-11 12:06:32.875696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.282 [2024-10-11 12:06:32.875726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.282 qpair failed and we were unable to recover it. 00:29:30.282 [2024-10-11 12:06:32.876092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.282 [2024-10-11 12:06:32.876124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.282 qpair failed and we were unable to recover it. 00:29:30.282 [2024-10-11 12:06:32.876489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.282 [2024-10-11 12:06:32.876519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.282 qpair failed and we were unable to recover it. 00:29:30.282 [2024-10-11 12:06:32.876886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.282 [2024-10-11 12:06:32.876918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.282 qpair failed and we were unable to recover it. 00:29:30.282 [2024-10-11 12:06:32.877295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.282 [2024-10-11 12:06:32.877327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.282 qpair failed and we were unable to recover it. 00:29:30.282 [2024-10-11 12:06:32.877696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.282 [2024-10-11 12:06:32.877727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.282 qpair failed and we were unable to recover it. 00:29:30.282 [2024-10-11 12:06:32.878093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.282 [2024-10-11 12:06:32.878128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.282 qpair failed and we were unable to recover it. 00:29:30.282 [2024-10-11 12:06:32.878489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.282 [2024-10-11 12:06:32.878521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.282 qpair failed and we were unable to recover it. 00:29:30.282 [2024-10-11 12:06:32.878878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.282 [2024-10-11 12:06:32.878912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.282 qpair failed and we were unable to recover it. 00:29:30.282 [2024-10-11 12:06:32.879273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.282 [2024-10-11 12:06:32.879306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.282 qpair failed and we were unable to recover it. 
00:29:30.282 [2024-10-11 12:06:32.879658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.282 [2024-10-11 12:06:32.879689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.282 qpair failed and we were unable to recover it. 00:29:30.282 [2024-10-11 12:06:32.880036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.282 [2024-10-11 12:06:32.880079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.282 qpair failed and we were unable to recover it. 00:29:30.282 [2024-10-11 12:06:32.880472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.282 [2024-10-11 12:06:32.880504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.282 qpair failed and we were unable to recover it. 00:29:30.282 [2024-10-11 12:06:32.880842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.282 [2024-10-11 12:06:32.880875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.282 qpair failed and we were unable to recover it. 00:29:30.282 [2024-10-11 12:06:32.881229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.282 [2024-10-11 12:06:32.881263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.282 qpair failed and we were unable to recover it. 00:29:30.282 [2024-10-11 12:06:32.881621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.282 [2024-10-11 12:06:32.881653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.282 qpair failed and we were unable to recover it. 00:29:30.282 [2024-10-11 12:06:32.882012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.282 [2024-10-11 12:06:32.882045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.282 qpair failed and we were unable to recover it. 00:29:30.282 [2024-10-11 12:06:32.882413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.282 [2024-10-11 12:06:32.882448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.282 qpair failed and we were unable to recover it. 00:29:30.282 [2024-10-11 12:06:32.882805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.282 [2024-10-11 12:06:32.882839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.282 qpair failed and we were unable to recover it. 00:29:30.282 [2024-10-11 12:06:32.883194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.282 [2024-10-11 12:06:32.883229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.282 qpair failed and we were unable to recover it. 
00:29:30.282 [2024-10-11 12:06:32.883583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.282 [2024-10-11 12:06:32.883616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.282 qpair failed and we were unable to recover it. 00:29:30.282 [2024-10-11 12:06:32.883962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.282 [2024-10-11 12:06:32.883993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.282 qpair failed and we were unable to recover it. 00:29:30.282 [2024-10-11 12:06:32.884378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.282 [2024-10-11 12:06:32.884410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.282 qpair failed and we were unable to recover it. 00:29:30.282 [2024-10-11 12:06:32.884755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.282 [2024-10-11 12:06:32.884785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.282 qpair failed and we were unable to recover it. 00:29:30.282 [2024-10-11 12:06:32.885137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.282 [2024-10-11 12:06:32.885169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.282 qpair failed and we were unable to recover it. 00:29:30.282 [2024-10-11 12:06:32.885492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.282 [2024-10-11 12:06:32.885522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.282 qpair failed and we were unable to recover it. 00:29:30.282 [2024-10-11 12:06:32.885883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.282 [2024-10-11 12:06:32.885915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.282 qpair failed and we were unable to recover it. 00:29:30.282 [2024-10-11 12:06:32.886186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.282 [2024-10-11 12:06:32.886218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.282 qpair failed and we were unable to recover it. 00:29:30.282 [2024-10-11 12:06:32.886541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.282 [2024-10-11 12:06:32.886572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.282 qpair failed and we were unable to recover it. 00:29:30.282 [2024-10-11 12:06:32.886935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.282 [2024-10-11 12:06:32.886969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.282 qpair failed and we were unable to recover it. 
00:29:30.282 [2024-10-11 12:06:32.887302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.282 [2024-10-11 12:06:32.887336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.282 qpair failed and we were unable to recover it. 00:29:30.282 [2024-10-11 12:06:32.887569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.282 [2024-10-11 12:06:32.887602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.282 qpair failed and we were unable to recover it. 00:29:30.282 [2024-10-11 12:06:32.887959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.282 [2024-10-11 12:06:32.887990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.282 qpair failed and we were unable to recover it. 00:29:30.282 [2024-10-11 12:06:32.888163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.282 [2024-10-11 12:06:32.888197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.282 qpair failed and we were unable to recover it. 00:29:30.282 [2024-10-11 12:06:32.888564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.282 [2024-10-11 12:06:32.888596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.282 qpair failed and we were unable to recover it. 00:29:30.282 [2024-10-11 12:06:32.888957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.282 [2024-10-11 12:06:32.888996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.282 qpair failed and we were unable to recover it. 00:29:30.282 [2024-10-11 12:06:32.889365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.282 [2024-10-11 12:06:32.889400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.282 qpair failed and we were unable to recover it. 00:29:30.282 [2024-10-11 12:06:32.889760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.282 [2024-10-11 12:06:32.889792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.282 qpair failed and we were unable to recover it. 00:29:30.282 [2024-10-11 12:06:32.890220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.282 [2024-10-11 12:06:32.890254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.282 qpair failed and we were unable to recover it. 00:29:30.282 [2024-10-11 12:06:32.890601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.282 [2024-10-11 12:06:32.890634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.282 qpair failed and we were unable to recover it. 
00:29:30.283 [2024-10-11 12:06:32.890992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.283 [2024-10-11 12:06:32.891022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.283 qpair failed and we were unable to recover it. 00:29:30.283 [2024-10-11 12:06:32.891425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.283 [2024-10-11 12:06:32.891458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.283 qpair failed and we were unable to recover it. 00:29:30.283 [2024-10-11 12:06:32.891857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.283 [2024-10-11 12:06:32.891890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.283 qpair failed and we were unable to recover it. 00:29:30.283 [2024-10-11 12:06:32.892240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.283 [2024-10-11 12:06:32.892273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.283 qpair failed and we were unable to recover it. 00:29:30.283 [2024-10-11 12:06:32.892625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.283 [2024-10-11 12:06:32.892655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.283 qpair failed and we were unable to recover it. 00:29:30.283 [2024-10-11 12:06:32.893019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.283 [2024-10-11 12:06:32.893049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.283 qpair failed and we were unable to recover it. 00:29:30.283 [2024-10-11 12:06:32.893419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.283 [2024-10-11 12:06:32.893450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.283 qpair failed and we were unable to recover it. 00:29:30.283 [2024-10-11 12:06:32.893810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.283 [2024-10-11 12:06:32.893842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.283 qpair failed and we were unable to recover it. 00:29:30.283 [2024-10-11 12:06:32.894200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.283 [2024-10-11 12:06:32.894231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.283 qpair failed and we were unable to recover it. 00:29:30.283 [2024-10-11 12:06:32.894592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.283 [2024-10-11 12:06:32.894624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.283 qpair failed and we were unable to recover it. 
00:29:30.283 [2024-10-11 12:06:32.894937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.283 [2024-10-11 12:06:32.894968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.283 qpair failed and we were unable to recover it. 00:29:30.283 [2024-10-11 12:06:32.895328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.283 [2024-10-11 12:06:32.895361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.283 qpair failed and we were unable to recover it. 00:29:30.283 [2024-10-11 12:06:32.895716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.283 [2024-10-11 12:06:32.895746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.283 qpair failed and we were unable to recover it. 00:29:30.283 [2024-10-11 12:06:32.896121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.283 [2024-10-11 12:06:32.896156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.283 qpair failed and we were unable to recover it. 00:29:30.283 [2024-10-11 12:06:32.896511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.283 [2024-10-11 12:06:32.896542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.283 qpair failed and we were unable to recover it. 00:29:30.283 [2024-10-11 12:06:32.896890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.283 [2024-10-11 12:06:32.896923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.283 qpair failed and we were unable to recover it. 00:29:30.283 [2024-10-11 12:06:32.897202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.283 [2024-10-11 12:06:32.897234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.283 qpair failed and we were unable to recover it. 00:29:30.283 [2024-10-11 12:06:32.897609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.283 [2024-10-11 12:06:32.897641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.283 qpair failed and we were unable to recover it. 00:29:30.283 [2024-10-11 12:06:32.898005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.283 [2024-10-11 12:06:32.898038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.283 qpair failed and we were unable to recover it. 00:29:30.283 [2024-10-11 12:06:32.898379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.283 [2024-10-11 12:06:32.898411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.283 qpair failed and we were unable to recover it. 
00:29:30.283 [2024-10-11 12:06:32.898645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.283 [2024-10-11 12:06:32.898678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.283 qpair failed and we were unable to recover it. 00:29:30.283 [2024-10-11 12:06:32.899047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.283 [2024-10-11 12:06:32.899099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.283 qpair failed and we were unable to recover it. 00:29:30.283 [2024-10-11 12:06:32.899368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.283 [2024-10-11 12:06:32.899407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.283 qpair failed and we were unable to recover it. 00:29:30.283 [2024-10-11 12:06:32.899784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.283 [2024-10-11 12:06:32.899817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.283 qpair failed and we were unable to recover it. 00:29:30.283 [2024-10-11 12:06:32.900174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.283 [2024-10-11 12:06:32.900207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.283 qpair failed and we were unable to recover it. 00:29:30.283 [2024-10-11 12:06:32.900555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.283 [2024-10-11 12:06:32.900587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.283 qpair failed and we were unable to recover it. 00:29:30.283 [2024-10-11 12:06:32.900945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.283 [2024-10-11 12:06:32.900977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.283 qpair failed and we were unable to recover it. 00:29:30.283 [2024-10-11 12:06:32.901313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.283 [2024-10-11 12:06:32.901347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.283 qpair failed and we were unable to recover it. 00:29:30.283 [2024-10-11 12:06:32.901702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.283 [2024-10-11 12:06:32.901732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.283 qpair failed and we were unable to recover it. 00:29:30.283 [2024-10-11 12:06:32.902103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.283 [2024-10-11 12:06:32.902135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.283 qpair failed and we were unable to recover it. 
00:29:30.283 [2024-10-11 12:06:32.902489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.283 [2024-10-11 12:06:32.902519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.283 qpair failed and we were unable to recover it. 00:29:30.283 [2024-10-11 12:06:32.902877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.283 [2024-10-11 12:06:32.902908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.283 qpair failed and we were unable to recover it. 00:29:30.283 [2024-10-11 12:06:32.903274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.283 [2024-10-11 12:06:32.903305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.283 qpair failed and we were unable to recover it. 00:29:30.283 [2024-10-11 12:06:32.903749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.283 [2024-10-11 12:06:32.903781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.283 qpair failed and we were unable to recover it. 00:29:30.283 [2024-10-11 12:06:32.904109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.283 [2024-10-11 12:06:32.904142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.283 qpair failed and we were unable to recover it. 00:29:30.283 [2024-10-11 12:06:32.904523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.283 [2024-10-11 12:06:32.904554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.283 qpair failed and we were unable to recover it. 00:29:30.283 [2024-10-11 12:06:32.904915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.283 [2024-10-11 12:06:32.904946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.283 qpair failed and we were unable to recover it. 00:29:30.283 [2024-10-11 12:06:32.905317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.283 [2024-10-11 12:06:32.905350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.283 qpair failed and we were unable to recover it. 00:29:30.283 [2024-10-11 12:06:32.905580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.283 [2024-10-11 12:06:32.905610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.283 qpair failed and we were unable to recover it. 00:29:30.283 [2024-10-11 12:06:32.905985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.283 [2024-10-11 12:06:32.906018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.283 qpair failed and we were unable to recover it. 
00:29:30.283 [2024-10-11 12:06:32.906377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.283 [2024-10-11 12:06:32.906410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.283 qpair failed and we were unable to recover it. 00:29:30.284 [2024-10-11 12:06:32.906767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.284 [2024-10-11 12:06:32.906801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.284 qpair failed and we were unable to recover it. 00:29:30.284 [2024-10-11 12:06:32.907154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.284 [2024-10-11 12:06:32.907187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.284 qpair failed and we were unable to recover it. 00:29:30.284 [2024-10-11 12:06:32.907549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.284 [2024-10-11 12:06:32.907582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.284 qpair failed and we were unable to recover it. 00:29:30.284 [2024-10-11 12:06:32.907943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.284 [2024-10-11 12:06:32.907975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.284 qpair failed and we were unable to recover it. 00:29:30.284 [2024-10-11 12:06:32.908328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.284 [2024-10-11 12:06:32.908362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.284 qpair failed and we were unable to recover it. 00:29:30.284 [2024-10-11 12:06:32.908585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.284 [2024-10-11 12:06:32.908615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.284 qpair failed and we were unable to recover it. 00:29:30.284 [2024-10-11 12:06:32.908991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.284 [2024-10-11 12:06:32.909023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.284 qpair failed and we were unable to recover it. 00:29:30.284 [2024-10-11 12:06:32.909421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.284 [2024-10-11 12:06:32.909453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.284 qpair failed and we were unable to recover it. 00:29:30.284 [2024-10-11 12:06:32.909822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.284 [2024-10-11 12:06:32.909854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.284 qpair failed and we were unable to recover it. 
00:29:30.284 [2024-10-11 12:06:32.910216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.284 [2024-10-11 12:06:32.910248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.284 qpair failed and we were unable to recover it. 00:29:30.284 [2024-10-11 12:06:32.910610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.284 [2024-10-11 12:06:32.910641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.284 qpair failed and we were unable to recover it. 00:29:30.284 [2024-10-11 12:06:32.911011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.284 [2024-10-11 12:06:32.911042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.284 qpair failed and we were unable to recover it. 00:29:30.284 [2024-10-11 12:06:32.911438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.284 [2024-10-11 12:06:32.911471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.284 qpair failed and we were unable to recover it. 00:29:30.284 [2024-10-11 12:06:32.911821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.284 [2024-10-11 12:06:32.911852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.284 qpair failed and we were unable to recover it. 00:29:30.284 [2024-10-11 12:06:32.912201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.284 [2024-10-11 12:06:32.912235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.284 qpair failed and we were unable to recover it. 00:29:30.284 [2024-10-11 12:06:32.912525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.284 [2024-10-11 12:06:32.912556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.284 qpair failed and we were unable to recover it. 00:29:30.284 [2024-10-11 12:06:32.912940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.284 [2024-10-11 12:06:32.912972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.284 qpair failed and we were unable to recover it. 00:29:30.284 [2024-10-11 12:06:32.913326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.284 [2024-10-11 12:06:32.913357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.284 qpair failed and we were unable to recover it. 00:29:30.284 [2024-10-11 12:06:32.913692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.284 [2024-10-11 12:06:32.913724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.284 qpair failed and we were unable to recover it. 
00:29:30.284 [2024-10-11 12:06:32.914082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.284 [2024-10-11 12:06:32.914114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.284 qpair failed and we were unable to recover it. 00:29:30.284 [2024-10-11 12:06:32.914392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.284 [2024-10-11 12:06:32.914425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.284 qpair failed and we were unable to recover it. 00:29:30.284 [2024-10-11 12:06:32.914774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.284 [2024-10-11 12:06:32.914806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.284 qpair failed and we were unable to recover it. 00:29:30.284 [2024-10-11 12:06:32.915159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.284 [2024-10-11 12:06:32.915199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.284 qpair failed and we were unable to recover it. 00:29:30.284 [2024-10-11 12:06:32.915449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.284 [2024-10-11 12:06:32.915481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.284 qpair failed and we were unable to recover it. 00:29:30.284 [2024-10-11 12:06:32.915830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.284 [2024-10-11 12:06:32.915861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.284 qpair failed and we were unable to recover it. 00:29:30.284 [2024-10-11 12:06:32.916227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.284 [2024-10-11 12:06:32.916260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.284 qpair failed and we were unable to recover it. 00:29:30.284 [2024-10-11 12:06:32.916617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.284 [2024-10-11 12:06:32.916647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.284 qpair failed and we were unable to recover it. 00:29:30.284 [2024-10-11 12:06:32.917004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.284 [2024-10-11 12:06:32.917035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.284 qpair failed and we were unable to recover it. 00:29:30.284 [2024-10-11 12:06:32.917314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.284 [2024-10-11 12:06:32.917346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.284 qpair failed and we were unable to recover it. 
00:29:30.284 [2024-10-11 12:06:32.917732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.284 [2024-10-11 12:06:32.917763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420
00:29:30.284 qpair failed and we were unable to recover it.
[... the same three-line sequence repeats for every subsequent connection attempt between 12:06:32.918 and 12:06:32.998 (elapsed-time stamps 00:29:30.284 through 00:29:30.565); only the timestamps differ. Every attempt targets tqpair=0xc817c0 at 10.0.0.2, port 4420, fails in posix_sock_create with errno = 111, and ends with "qpair failed and we were unable to recover it." ...]
00:29:30.565 [2024-10-11 12:06:32.998374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.565 [2024-10-11 12:06:32.998408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.565 qpair failed and we were unable to recover it. 00:29:30.565 [2024-10-11 12:06:32.998693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.565 [2024-10-11 12:06:32.998725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.565 qpair failed and we were unable to recover it. 00:29:30.565 [2024-10-11 12:06:32.998984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.565 [2024-10-11 12:06:32.999015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.565 qpair failed and we were unable to recover it. 00:29:30.565 [2024-10-11 12:06:32.999420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.565 [2024-10-11 12:06:32.999453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.565 qpair failed and we were unable to recover it. 00:29:30.565 [2024-10-11 12:06:32.999809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.565 [2024-10-11 12:06:32.999839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.565 qpair failed and we were unable to recover it. 00:29:30.565 [2024-10-11 12:06:33.000143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.565 [2024-10-11 12:06:33.000177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.565 qpair failed and we were unable to recover it. 00:29:30.565 [2024-10-11 12:06:33.000526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.565 [2024-10-11 12:06:33.000556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.565 qpair failed and we were unable to recover it. 00:29:30.565 [2024-10-11 12:06:33.000923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.565 [2024-10-11 12:06:33.000955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.565 qpair failed and we were unable to recover it. 00:29:30.565 [2024-10-11 12:06:33.001322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.565 [2024-10-11 12:06:33.001353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.565 qpair failed and we were unable to recover it. 00:29:30.565 [2024-10-11 12:06:33.001709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.565 [2024-10-11 12:06:33.001743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.565 qpair failed and we were unable to recover it. 
00:29:30.565 [2024-10-11 12:06:33.002093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.565 [2024-10-11 12:06:33.002126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.565 qpair failed and we were unable to recover it. 00:29:30.565 [2024-10-11 12:06:33.002529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.565 [2024-10-11 12:06:33.002562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.565 qpair failed and we were unable to recover it. 00:29:30.565 [2024-10-11 12:06:33.002910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.565 [2024-10-11 12:06:33.002943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.565 qpair failed and we were unable to recover it. 00:29:30.565 [2024-10-11 12:06:33.003309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.565 [2024-10-11 12:06:33.003340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.565 qpair failed and we were unable to recover it. 00:29:30.565 [2024-10-11 12:06:33.003736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.565 [2024-10-11 12:06:33.003768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.565 qpair failed and we were unable to recover it. 00:29:30.565 [2024-10-11 12:06:33.004165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.565 [2024-10-11 12:06:33.004199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.565 qpair failed and we were unable to recover it. 00:29:30.565 [2024-10-11 12:06:33.004654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.565 [2024-10-11 12:06:33.004687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.565 qpair failed and we were unable to recover it. 00:29:30.565 [2024-10-11 12:06:33.005037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.565 [2024-10-11 12:06:33.005078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.565 qpair failed and we were unable to recover it. 00:29:30.565 [2024-10-11 12:06:33.005432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.565 [2024-10-11 12:06:33.005464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.565 qpair failed and we were unable to recover it. 00:29:30.565 [2024-10-11 12:06:33.005836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.565 [2024-10-11 12:06:33.005867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.565 qpair failed and we were unable to recover it. 
00:29:30.565 [2024-10-11 12:06:33.006230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.565 [2024-10-11 12:06:33.006263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.565 qpair failed and we were unable to recover it. 00:29:30.565 [2024-10-11 12:06:33.006612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.565 [2024-10-11 12:06:33.006642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.565 qpair failed and we were unable to recover it. 00:29:30.565 [2024-10-11 12:06:33.006899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.565 [2024-10-11 12:06:33.006931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.565 qpair failed and we were unable to recover it. 00:29:30.565 [2024-10-11 12:06:33.007299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.565 [2024-10-11 12:06:33.007333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.565 qpair failed and we were unable to recover it. 00:29:30.565 [2024-10-11 12:06:33.007682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.565 [2024-10-11 12:06:33.007714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.565 qpair failed and we were unable to recover it. 00:29:30.565 [2024-10-11 12:06:33.008085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.565 [2024-10-11 12:06:33.008120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.565 qpair failed and we were unable to recover it. 00:29:30.565 [2024-10-11 12:06:33.008469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.565 [2024-10-11 12:06:33.008500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.565 qpair failed and we were unable to recover it. 00:29:30.565 [2024-10-11 12:06:33.008859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.565 [2024-10-11 12:06:33.008890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.565 qpair failed and we were unable to recover it. 00:29:30.565 [2024-10-11 12:06:33.009250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.565 [2024-10-11 12:06:33.009288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.565 qpair failed and we were unable to recover it. 00:29:30.565 [2024-10-11 12:06:33.009643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.565 [2024-10-11 12:06:33.009675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.565 qpair failed and we were unable to recover it. 
00:29:30.565 [2024-10-11 12:06:33.010037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.565 [2024-10-11 12:06:33.010077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.565 qpair failed and we were unable to recover it. 00:29:30.565 [2024-10-11 12:06:33.010429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.565 [2024-10-11 12:06:33.010462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.565 qpair failed and we were unable to recover it. 00:29:30.565 [2024-10-11 12:06:33.010821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.565 [2024-10-11 12:06:33.010855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.565 qpair failed and we were unable to recover it. 00:29:30.565 [2024-10-11 12:06:33.011211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.565 [2024-10-11 12:06:33.011244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.565 qpair failed and we were unable to recover it. 00:29:30.565 [2024-10-11 12:06:33.011598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.565 [2024-10-11 12:06:33.011628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.565 qpair failed and we were unable to recover it. 00:29:30.565 [2024-10-11 12:06:33.011988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.565 [2024-10-11 12:06:33.012019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.565 qpair failed and we were unable to recover it. 00:29:30.565 [2024-10-11 12:06:33.012392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.566 [2024-10-11 12:06:33.012426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.566 qpair failed and we were unable to recover it. 00:29:30.566 [2024-10-11 12:06:33.012782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.566 [2024-10-11 12:06:33.012814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.566 qpair failed and we were unable to recover it. 00:29:30.566 [2024-10-11 12:06:33.013043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.566 [2024-10-11 12:06:33.013086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.566 qpair failed and we were unable to recover it. 00:29:30.566 [2024-10-11 12:06:33.013473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.566 [2024-10-11 12:06:33.013504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.566 qpair failed and we were unable to recover it. 
00:29:30.566 [2024-10-11 12:06:33.013915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.566 [2024-10-11 12:06:33.013946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.566 qpair failed and we were unable to recover it. 00:29:30.566 [2024-10-11 12:06:33.014342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.566 [2024-10-11 12:06:33.014375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.566 qpair failed and we were unable to recover it. 00:29:30.566 [2024-10-11 12:06:33.014775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.566 [2024-10-11 12:06:33.014808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.566 qpair failed and we were unable to recover it. 00:29:30.566 [2024-10-11 12:06:33.015166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.566 [2024-10-11 12:06:33.015201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.566 qpair failed and we were unable to recover it. 00:29:30.566 [2024-10-11 12:06:33.015554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.566 [2024-10-11 12:06:33.015587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.566 qpair failed and we were unable to recover it. 00:29:30.566 [2024-10-11 12:06:33.015937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.566 [2024-10-11 12:06:33.015967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.566 qpair failed and we were unable to recover it. 00:29:30.566 [2024-10-11 12:06:33.016333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.566 [2024-10-11 12:06:33.016366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.566 qpair failed and we were unable to recover it. 00:29:30.566 [2024-10-11 12:06:33.016731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.566 [2024-10-11 12:06:33.016762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.566 qpair failed and we were unable to recover it. 00:29:30.566 [2024-10-11 12:06:33.017121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.566 [2024-10-11 12:06:33.017154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.566 qpair failed and we were unable to recover it. 00:29:30.566 [2024-10-11 12:06:33.017503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.566 [2024-10-11 12:06:33.017537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.566 qpair failed and we were unable to recover it. 
00:29:30.566 [2024-10-11 12:06:33.017896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.566 [2024-10-11 12:06:33.017930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.566 qpair failed and we were unable to recover it. 00:29:30.566 [2024-10-11 12:06:33.018287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.566 [2024-10-11 12:06:33.018322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.566 qpair failed and we were unable to recover it. 00:29:30.566 [2024-10-11 12:06:33.018668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.566 [2024-10-11 12:06:33.018701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.566 qpair failed and we were unable to recover it. 00:29:30.566 [2024-10-11 12:06:33.019056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.566 [2024-10-11 12:06:33.019111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.566 qpair failed and we were unable to recover it. 00:29:30.566 [2024-10-11 12:06:33.019459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.566 [2024-10-11 12:06:33.019493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.566 qpair failed and we were unable to recover it. 00:29:30.566 [2024-10-11 12:06:33.019857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.566 [2024-10-11 12:06:33.019895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.566 qpair failed and we were unable to recover it. 00:29:30.566 [2024-10-11 12:06:33.020260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.566 [2024-10-11 12:06:33.020293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.566 qpair failed and we were unable to recover it. 00:29:30.566 [2024-10-11 12:06:33.020651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.566 [2024-10-11 12:06:33.020684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.566 qpair failed and we were unable to recover it. 00:29:30.566 [2024-10-11 12:06:33.021037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.566 [2024-10-11 12:06:33.021080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.566 qpair failed and we were unable to recover it. 00:29:30.566 [2024-10-11 12:06:33.021471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.566 [2024-10-11 12:06:33.021503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.566 qpair failed and we were unable to recover it. 
00:29:30.566 [2024-10-11 12:06:33.021882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.566 [2024-10-11 12:06:33.021914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.566 qpair failed and we were unable to recover it. 00:29:30.566 [2024-10-11 12:06:33.022292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.566 [2024-10-11 12:06:33.022326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.566 qpair failed and we were unable to recover it. 00:29:30.566 [2024-10-11 12:06:33.022712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.566 [2024-10-11 12:06:33.022744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.566 qpair failed and we were unable to recover it. 00:29:30.566 [2024-10-11 12:06:33.023102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.566 [2024-10-11 12:06:33.023136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.566 qpair failed and we were unable to recover it. 00:29:30.566 [2024-10-11 12:06:33.023490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.566 [2024-10-11 12:06:33.023524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.566 qpair failed and we were unable to recover it. 00:29:30.566 [2024-10-11 12:06:33.023857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.566 [2024-10-11 12:06:33.023890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.566 qpair failed and we were unable to recover it. 00:29:30.566 [2024-10-11 12:06:33.024247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.566 [2024-10-11 12:06:33.024280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.566 qpair failed and we were unable to recover it. 00:29:30.566 [2024-10-11 12:06:33.024627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.566 [2024-10-11 12:06:33.024660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.566 qpair failed and we were unable to recover it. 00:29:30.566 [2024-10-11 12:06:33.025014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.566 [2024-10-11 12:06:33.025047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.566 qpair failed and we were unable to recover it. 00:29:30.566 [2024-10-11 12:06:33.025428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.566 [2024-10-11 12:06:33.025463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.566 qpair failed and we were unable to recover it. 
00:29:30.566 [2024-10-11 12:06:33.025794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.566 [2024-10-11 12:06:33.025827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.566 qpair failed and we were unable to recover it. 00:29:30.566 [2024-10-11 12:06:33.026179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.566 [2024-10-11 12:06:33.026212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.567 qpair failed and we were unable to recover it. 00:29:30.567 [2024-10-11 12:06:33.026564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.567 [2024-10-11 12:06:33.026599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.567 qpair failed and we were unable to recover it. 00:29:30.567 [2024-10-11 12:06:33.026967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.567 [2024-10-11 12:06:33.027001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.567 qpair failed and we were unable to recover it. 00:29:30.567 [2024-10-11 12:06:33.027415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.567 [2024-10-11 12:06:33.027449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.567 qpair failed and we were unable to recover it. 00:29:30.567 [2024-10-11 12:06:33.027803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.567 [2024-10-11 12:06:33.027837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.567 qpair failed and we were unable to recover it. 00:29:30.567 [2024-10-11 12:06:33.028200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.567 [2024-10-11 12:06:33.028234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.567 qpair failed and we were unable to recover it. 00:29:30.567 [2024-10-11 12:06:33.028602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.567 [2024-10-11 12:06:33.028636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.567 qpair failed and we were unable to recover it. 00:29:30.567 [2024-10-11 12:06:33.028997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.567 [2024-10-11 12:06:33.029029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.567 qpair failed and we were unable to recover it. 00:29:30.567 [2024-10-11 12:06:33.029428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.567 [2024-10-11 12:06:33.029463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.567 qpair failed and we were unable to recover it. 
00:29:30.567 [2024-10-11 12:06:33.029817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.567 [2024-10-11 12:06:33.029849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.567 qpair failed and we were unable to recover it. 00:29:30.567 [2024-10-11 12:06:33.030213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.567 [2024-10-11 12:06:33.030247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.567 qpair failed and we were unable to recover it. 00:29:30.567 [2024-10-11 12:06:33.030598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.567 [2024-10-11 12:06:33.030630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.567 qpair failed and we were unable to recover it. 00:29:30.567 [2024-10-11 12:06:33.030967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.567 [2024-10-11 12:06:33.031001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.567 qpair failed and we were unable to recover it. 00:29:30.567 [2024-10-11 12:06:33.031350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.567 [2024-10-11 12:06:33.031383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.567 qpair failed and we were unable to recover it. 00:29:30.567 [2024-10-11 12:06:33.031741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.567 [2024-10-11 12:06:33.031773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.567 qpair failed and we were unable to recover it. 00:29:30.567 [2024-10-11 12:06:33.032135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.567 [2024-10-11 12:06:33.032170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.567 qpair failed and we were unable to recover it. 00:29:30.567 [2024-10-11 12:06:33.032572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.567 [2024-10-11 12:06:33.032604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.567 qpair failed and we were unable to recover it. 00:29:30.567 [2024-10-11 12:06:33.034508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.567 [2024-10-11 12:06:33.034577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.567 qpair failed and we were unable to recover it. 00:29:30.567 [2024-10-11 12:06:33.034975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.567 [2024-10-11 12:06:33.035013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.567 qpair failed and we were unable to recover it. 
00:29:30.567 [2024-10-11 12:06:33.035391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.567 [2024-10-11 12:06:33.035426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.567 qpair failed and we were unable to recover it. 00:29:30.567 [2024-10-11 12:06:33.035775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.567 [2024-10-11 12:06:33.035810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.567 qpair failed and we were unable to recover it. 00:29:30.567 [2024-10-11 12:06:33.036163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.567 [2024-10-11 12:06:33.036198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.567 qpair failed and we were unable to recover it. 00:29:30.567 [2024-10-11 12:06:33.036430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.567 [2024-10-11 12:06:33.036465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.567 qpair failed and we were unable to recover it. 00:29:30.567 [2024-10-11 12:06:33.036658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.567 [2024-10-11 12:06:33.036691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.567 qpair failed and we were unable to recover it. 00:29:30.567 [2024-10-11 12:06:33.037055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.567 [2024-10-11 12:06:33.037108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.567 qpair failed and we were unable to recover it. 00:29:30.567 [2024-10-11 12:06:33.037368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.567 [2024-10-11 12:06:33.037411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.567 qpair failed and we were unable to recover it. 00:29:30.567 [2024-10-11 12:06:33.037702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.567 [2024-10-11 12:06:33.037736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.567 qpair failed and we were unable to recover it. 00:29:30.567 [2024-10-11 12:06:33.038098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.567 [2024-10-11 12:06:33.038133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.567 qpair failed and we were unable to recover it. 00:29:30.567 [2024-10-11 12:06:33.038492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.567 [2024-10-11 12:06:33.038524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.567 qpair failed and we were unable to recover it. 
00:29:30.567 [2024-10-11 12:06:33.038881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.567 [2024-10-11 12:06:33.038915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.567 qpair failed and we were unable to recover it. 00:29:30.567 [2024-10-11 12:06:33.039174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.567 [2024-10-11 12:06:33.039207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.567 qpair failed and we were unable to recover it. 00:29:30.567 [2024-10-11 12:06:33.039567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.567 [2024-10-11 12:06:33.039600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.567 qpair failed and we were unable to recover it. 00:29:30.567 [2024-10-11 12:06:33.039845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.567 [2024-10-11 12:06:33.039886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.567 qpair failed and we were unable to recover it. 00:29:30.567 [2024-10-11 12:06:33.040273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.567 [2024-10-11 12:06:33.040308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.567 qpair failed and we were unable to recover it. 00:29:30.567 [2024-10-11 12:06:33.040655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.567 [2024-10-11 12:06:33.040689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.567 qpair failed and we were unable to recover it. 00:29:30.567 [2024-10-11 12:06:33.041044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.567 [2024-10-11 12:06:33.041087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.567 qpair failed and we were unable to recover it. 00:29:30.567 [2024-10-11 12:06:33.041445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.567 [2024-10-11 12:06:33.041477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.567 qpair failed and we were unable to recover it. 00:29:30.567 [2024-10-11 12:06:33.041840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.567 [2024-10-11 12:06:33.041873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.567 qpair failed and we were unable to recover it. 00:29:30.567 [2024-10-11 12:06:33.042228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.567 [2024-10-11 12:06:33.042263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.567 qpair failed and we were unable to recover it. 
00:29:30.567 [2024-10-11 12:06:33.042651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.567 [2024-10-11 12:06:33.042683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.567 qpair failed and we were unable to recover it. 00:29:30.567 [2024-10-11 12:06:33.043046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.567 [2024-10-11 12:06:33.043090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.567 qpair failed and we were unable to recover it. 00:29:30.567 [2024-10-11 12:06:33.043486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.568 [2024-10-11 12:06:33.043519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.568 qpair failed and we were unable to recover it. 00:29:30.568 [2024-10-11 12:06:33.043882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.568 [2024-10-11 12:06:33.043915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.568 qpair failed and we were unable to recover it. 00:29:30.568 [2024-10-11 12:06:33.044509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.568 [2024-10-11 12:06:33.044553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.568 qpair failed and we were unable to recover it. 00:29:30.568 [2024-10-11 12:06:33.044946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.568 [2024-10-11 12:06:33.044979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.568 qpair failed and we were unable to recover it. 00:29:30.568 [2024-10-11 12:06:33.045353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.568 [2024-10-11 12:06:33.045390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.568 qpair failed and we were unable to recover it. 00:29:30.568 [2024-10-11 12:06:33.045735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.568 [2024-10-11 12:06:33.045769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.568 qpair failed and we were unable to recover it. 00:29:30.568 [2024-10-11 12:06:33.046122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.568 [2024-10-11 12:06:33.046157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.568 qpair failed and we were unable to recover it. 00:29:30.568 [2024-10-11 12:06:33.046528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.568 [2024-10-11 12:06:33.046561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.568 qpair failed and we were unable to recover it. 
00:29:30.568 [2024-10-11 12:06:33.046985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.568 [2024-10-11 12:06:33.047018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.568 qpair failed and we were unable to recover it. 00:29:30.568 [2024-10-11 12:06:33.047384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.568 [2024-10-11 12:06:33.047418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.568 qpair failed and we were unable to recover it. 00:29:30.568 [2024-10-11 12:06:33.047772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.568 [2024-10-11 12:06:33.047806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.568 qpair failed and we were unable to recover it. 00:29:30.568 [2024-10-11 12:06:33.048159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.568 [2024-10-11 12:06:33.048194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.568 qpair failed and we were unable to recover it. 00:29:30.568 [2024-10-11 12:06:33.048590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.568 [2024-10-11 12:06:33.048624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.568 qpair failed and we were unable to recover it. 00:29:30.568 [2024-10-11 12:06:33.048981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.568 [2024-10-11 12:06:33.049014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.568 qpair failed and we were unable to recover it. 00:29:30.568 [2024-10-11 12:06:33.049406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.568 [2024-10-11 12:06:33.049441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.568 qpair failed and we were unable to recover it. 00:29:30.568 [2024-10-11 12:06:33.049806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.568 [2024-10-11 12:06:33.049839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.568 qpair failed and we were unable to recover it. 00:29:30.568 [2024-10-11 12:06:33.050190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.568 [2024-10-11 12:06:33.050225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.568 qpair failed and we were unable to recover it. 00:29:30.568 [2024-10-11 12:06:33.050580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.568 [2024-10-11 12:06:33.050614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.568 qpair failed and we were unable to recover it. 
00:29:30.568 [2024-10-11 12:06:33.050967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.568 [2024-10-11 12:06:33.051002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.568 qpair failed and we were unable to recover it. 00:29:30.568 [2024-10-11 12:06:33.051375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.568 [2024-10-11 12:06:33.051409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.568 qpair failed and we were unable to recover it. 00:29:30.568 [2024-10-11 12:06:33.051758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.568 [2024-10-11 12:06:33.051790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.568 qpair failed and we were unable to recover it. 00:29:30.568 [2024-10-11 12:06:33.052163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.568 [2024-10-11 12:06:33.052199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.568 qpair failed and we were unable to recover it. 00:29:30.568 [2024-10-11 12:06:33.052597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.568 [2024-10-11 12:06:33.052631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.568 qpair failed and we were unable to recover it. 00:29:30.568 [2024-10-11 12:06:33.052872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.568 [2024-10-11 12:06:33.052905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.568 qpair failed and we were unable to recover it. 00:29:30.568 [2024-10-11 12:06:33.053277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.568 [2024-10-11 12:06:33.053311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.568 qpair failed and we were unable to recover it. 00:29:30.568 [2024-10-11 12:06:33.053688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.568 [2024-10-11 12:06:33.053722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.568 qpair failed and we were unable to recover it. 00:29:30.568 [2024-10-11 12:06:33.054087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.568 [2024-10-11 12:06:33.054123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.568 qpair failed and we were unable to recover it. 00:29:30.568 [2024-10-11 12:06:33.054503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.568 [2024-10-11 12:06:33.054537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.568 qpair failed and we were unable to recover it. 
00:29:30.568 [2024-10-11 12:06:33.054894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.568 [2024-10-11 12:06:33.054927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.568 qpair failed and we were unable to recover it. 00:29:30.568 [2024-10-11 12:06:33.055290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.568 [2024-10-11 12:06:33.055326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.568 qpair failed and we were unable to recover it. 00:29:30.568 [2024-10-11 12:06:33.055759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.568 [2024-10-11 12:06:33.055792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.568 qpair failed and we were unable to recover it. 00:29:30.568 [2024-10-11 12:06:33.056146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.568 [2024-10-11 12:06:33.056182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.568 qpair failed and we were unable to recover it. 00:29:30.568 [2024-10-11 12:06:33.056544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.568 [2024-10-11 12:06:33.056577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.568 qpair failed and we were unable to recover it. 00:29:30.568 [2024-10-11 12:06:33.056930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.568 [2024-10-11 12:06:33.056963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.568 qpair failed and we were unable to recover it. 00:29:30.568 [2024-10-11 12:06:33.057328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.568 [2024-10-11 12:06:33.057361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.568 qpair failed and we were unable to recover it. 00:29:30.568 [2024-10-11 12:06:33.057617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.568 [2024-10-11 12:06:33.057650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.568 qpair failed and we were unable to recover it. 00:29:30.568 [2024-10-11 12:06:33.057868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.568 [2024-10-11 12:06:33.057901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.568 qpair failed and we were unable to recover it. 00:29:30.568 [2024-10-11 12:06:33.058142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.568 [2024-10-11 12:06:33.058176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.568 qpair failed and we were unable to recover it. 
00:29:30.568 [2024-10-11 12:06:33.058564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.568 [2024-10-11 12:06:33.058597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.568 qpair failed and we were unable to recover it. 00:29:30.568 [2024-10-11 12:06:33.058977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.568 [2024-10-11 12:06:33.059011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.568 qpair failed and we were unable to recover it. 00:29:30.568 [2024-10-11 12:06:33.059390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.568 [2024-10-11 12:06:33.059430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.568 qpair failed and we were unable to recover it. 00:29:30.569 [2024-10-11 12:06:33.059781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.569 [2024-10-11 12:06:33.059813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.569 qpair failed and we were unable to recover it. 00:29:30.569 [2024-10-11 12:06:33.060183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.569 [2024-10-11 12:06:33.060218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.569 qpair failed and we were unable to recover it. 00:29:30.569 [2024-10-11 12:06:33.060633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.569 [2024-10-11 12:06:33.060666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.569 qpair failed and we were unable to recover it. 00:29:30.569 [2024-10-11 12:06:33.060934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.569 [2024-10-11 12:06:33.060969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.569 qpair failed and we were unable to recover it. 00:29:30.569 [2024-10-11 12:06:33.061338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.569 [2024-10-11 12:06:33.061373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.569 qpair failed and we were unable to recover it. 00:29:30.569 [2024-10-11 12:06:33.061725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.569 [2024-10-11 12:06:33.061759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.569 qpair failed and we were unable to recover it. 00:29:30.569 [2024-10-11 12:06:33.062154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.569 [2024-10-11 12:06:33.062188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.569 qpair failed and we were unable to recover it. 
00:29:30.569 [2024-10-11 12:06:33.062583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.569 [2024-10-11 12:06:33.062616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.569 qpair failed and we were unable to recover it. 00:29:30.569 [2024-10-11 12:06:33.062848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.569 [2024-10-11 12:06:33.062879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.569 qpair failed and we were unable to recover it. 00:29:30.569 [2024-10-11 12:06:33.063240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.569 [2024-10-11 12:06:33.063272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.569 qpair failed and we were unable to recover it. 00:29:30.569 [2024-10-11 12:06:33.063669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.569 [2024-10-11 12:06:33.063702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.569 qpair failed and we were unable to recover it. 00:29:30.569 [2024-10-11 12:06:33.064050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.569 [2024-10-11 12:06:33.064115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.569 qpair failed and we were unable to recover it. 00:29:30.569 [2024-10-11 12:06:33.064493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.569 [2024-10-11 12:06:33.064527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.569 qpair failed and we were unable to recover it. 00:29:30.569 [2024-10-11 12:06:33.064882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.569 [2024-10-11 12:06:33.064915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.569 qpair failed and we were unable to recover it. 00:29:30.569 [2024-10-11 12:06:33.065277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.569 [2024-10-11 12:06:33.065311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.569 qpair failed and we were unable to recover it. 00:29:30.569 [2024-10-11 12:06:33.065704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.569 [2024-10-11 12:06:33.065738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.569 qpair failed and we were unable to recover it. 00:29:30.569 [2024-10-11 12:06:33.066091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.569 [2024-10-11 12:06:33.066126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.569 qpair failed and we were unable to recover it. 
00:29:30.569 [2024-10-11 12:06:33.066520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.569 [2024-10-11 12:06:33.066553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.569 qpair failed and we were unable to recover it. 00:29:30.569 [2024-10-11 12:06:33.066908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.569 [2024-10-11 12:06:33.066941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.569 qpair failed and we were unable to recover it. 00:29:30.569 [2024-10-11 12:06:33.067308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.569 [2024-10-11 12:06:33.067344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.569 qpair failed and we were unable to recover it. 00:29:30.569 [2024-10-11 12:06:33.067714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.569 [2024-10-11 12:06:33.067747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.569 qpair failed and we were unable to recover it. 00:29:30.569 [2024-10-11 12:06:33.068085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.569 [2024-10-11 12:06:33.068121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.569 qpair failed and we were unable to recover it. 00:29:30.569 [2024-10-11 12:06:33.068536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.569 [2024-10-11 12:06:33.068570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.569 qpair failed and we were unable to recover it. 00:29:30.569 [2024-10-11 12:06:33.068931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.569 [2024-10-11 12:06:33.068967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.569 qpair failed and we were unable to recover it. 00:29:30.569 [2024-10-11 12:06:33.069319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.569 [2024-10-11 12:06:33.069354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.569 qpair failed and we were unable to recover it. 00:29:30.569 [2024-10-11 12:06:33.069752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.569 [2024-10-11 12:06:33.069786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.569 qpair failed and we were unable to recover it. 00:29:30.569 [2024-10-11 12:06:33.070140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.569 [2024-10-11 12:06:33.070173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.569 qpair failed and we were unable to recover it. 
00:29:30.569 [2024-10-11 12:06:33.070536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.569 [2024-10-11 12:06:33.070570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.569 qpair failed and we were unable to recover it. 00:29:30.569 [2024-10-11 12:06:33.070931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.569 [2024-10-11 12:06:33.070964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.569 qpair failed and we were unable to recover it. 00:29:30.569 [2024-10-11 12:06:33.071321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.569 [2024-10-11 12:06:33.071354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.569 qpair failed and we were unable to recover it. 00:29:30.569 [2024-10-11 12:06:33.071742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.569 [2024-10-11 12:06:33.071777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.569 qpair failed and we were unable to recover it. 00:29:30.569 [2024-10-11 12:06:33.072125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.569 [2024-10-11 12:06:33.072162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.569 qpair failed and we were unable to recover it. 00:29:30.569 [2024-10-11 12:06:33.072530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.569 [2024-10-11 12:06:33.072562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.569 qpair failed and we were unable to recover it. 00:29:30.569 [2024-10-11 12:06:33.072916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.569 [2024-10-11 12:06:33.072951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.569 qpair failed and we were unable to recover it. 00:29:30.569 [2024-10-11 12:06:33.073335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.569 [2024-10-11 12:06:33.073370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.569 qpair failed and we were unable to recover it. 00:29:30.569 [2024-10-11 12:06:33.073724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.569 [2024-10-11 12:06:33.073757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.569 qpair failed and we were unable to recover it. 00:29:30.569 [2024-10-11 12:06:33.074130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.569 [2024-10-11 12:06:33.074164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.569 qpair failed and we were unable to recover it. 
00:29:30.569 [2024-10-11 12:06:33.074523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.569 [2024-10-11 12:06:33.074557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.569 qpair failed and we were unable to recover it. 00:29:30.569 [2024-10-11 12:06:33.074955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.569 [2024-10-11 12:06:33.074989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.569 qpair failed and we were unable to recover it. 00:29:30.569 [2024-10-11 12:06:33.075379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.569 [2024-10-11 12:06:33.075414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.569 qpair failed and we were unable to recover it. 00:29:30.569 [2024-10-11 12:06:33.075782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.569 [2024-10-11 12:06:33.075815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.569 qpair failed and we were unable to recover it. 00:29:30.570 [2024-10-11 12:06:33.076058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.570 [2024-10-11 12:06:33.076130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.570 qpair failed and we were unable to recover it. 00:29:30.570 [2024-10-11 12:06:33.076524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.570 [2024-10-11 12:06:33.076559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.570 qpair failed and we were unable to recover it. 00:29:30.570 [2024-10-11 12:06:33.076815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.570 [2024-10-11 12:06:33.076845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.570 qpair failed and we were unable to recover it. 00:29:30.570 [2024-10-11 12:06:33.077113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.570 [2024-10-11 12:06:33.077146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.570 qpair failed and we were unable to recover it. 00:29:30.570 [2024-10-11 12:06:33.077510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.570 [2024-10-11 12:06:33.077543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.570 qpair failed and we were unable to recover it. 00:29:30.570 [2024-10-11 12:06:33.077940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.570 [2024-10-11 12:06:33.077973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.570 qpair failed and we were unable to recover it. 
00:29:30.570 [2024-10-11 12:06:33.078361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.570 [2024-10-11 12:06:33.078397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.570 qpair failed and we were unable to recover it. 00:29:30.570 [2024-10-11 12:06:33.078759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.570 [2024-10-11 12:06:33.078792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.570 qpair failed and we were unable to recover it. 00:29:30.570 [2024-10-11 12:06:33.079014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.570 [2024-10-11 12:06:33.079045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.570 qpair failed and we were unable to recover it. 00:29:30.570 [2024-10-11 12:06:33.082114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.570 [2024-10-11 12:06:33.082176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.570 qpair failed and we were unable to recover it. 00:29:30.570 [2024-10-11 12:06:33.082582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.570 [2024-10-11 12:06:33.082618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.570 qpair failed and we were unable to recover it. 00:29:30.570 [2024-10-11 12:06:33.082995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.570 [2024-10-11 12:06:33.083039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.570 qpair failed and we were unable to recover it. 00:29:30.570 [2024-10-11 12:06:33.083425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.570 [2024-10-11 12:06:33.083458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.570 qpair failed and we were unable to recover it. 00:29:30.570 [2024-10-11 12:06:33.083826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.570 [2024-10-11 12:06:33.083860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.570 qpair failed and we were unable to recover it. 00:29:30.570 [2024-10-11 12:06:33.084214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.570 [2024-10-11 12:06:33.084249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.570 qpair failed and we were unable to recover it. 00:29:30.570 [2024-10-11 12:06:33.084618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.570 [2024-10-11 12:06:33.084653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.570 qpair failed and we were unable to recover it. 
00:29:30.570 [2024-10-11 12:06:33.085001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.570 [2024-10-11 12:06:33.085036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.570 qpair failed and we were unable to recover it. 00:29:30.570 [2024-10-11 12:06:33.085327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.570 [2024-10-11 12:06:33.085359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.570 qpair failed and we were unable to recover it. 00:29:30.570 [2024-10-11 12:06:33.085709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.570 [2024-10-11 12:06:33.085743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.570 qpair failed and we were unable to recover it. 00:29:30.570 [2024-10-11 12:06:33.086102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.570 [2024-10-11 12:06:33.086141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.570 qpair failed and we were unable to recover it. 00:29:30.570 [2024-10-11 12:06:33.086502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.570 [2024-10-11 12:06:33.086534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.570 qpair failed and we were unable to recover it. 00:29:30.570 [2024-10-11 12:06:33.086896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.570 [2024-10-11 12:06:33.086932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.570 qpair failed and we were unable to recover it. 00:29:30.570 [2024-10-11 12:06:33.087257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.570 [2024-10-11 12:06:33.087292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.570 qpair failed and we were unable to recover it. 00:29:30.570 [2024-10-11 12:06:33.087653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.570 [2024-10-11 12:06:33.087688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.570 qpair failed and we were unable to recover it. 00:29:30.570 [2024-10-11 12:06:33.088115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.570 [2024-10-11 12:06:33.088151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.570 qpair failed and we were unable to recover it. 00:29:30.570 [2024-10-11 12:06:33.088551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.570 [2024-10-11 12:06:33.088584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.570 qpair failed and we were unable to recover it. 
00:29:30.570 [2024-10-11 12:06:33.088942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.570 [2024-10-11 12:06:33.088975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.570 qpair failed and we were unable to recover it. 00:29:30.570 [2024-10-11 12:06:33.089340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.570 [2024-10-11 12:06:33.089375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.570 qpair failed and we were unable to recover it. 00:29:30.570 [2024-10-11 12:06:33.089731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.570 [2024-10-11 12:06:33.089764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.570 qpair failed and we were unable to recover it. 00:29:30.570 [2024-10-11 12:06:33.090145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.570 [2024-10-11 12:06:33.090178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.570 qpair failed and we were unable to recover it. 00:29:30.570 [2024-10-11 12:06:33.090553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.570 [2024-10-11 12:06:33.090586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.570 qpair failed and we were unable to recover it. 00:29:30.570 [2024-10-11 12:06:33.090866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.570 [2024-10-11 12:06:33.090899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.570 qpair failed and we were unable to recover it. 00:29:30.570 [2024-10-11 12:06:33.091296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.570 [2024-10-11 12:06:33.091329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.570 qpair failed and we were unable to recover it. 00:29:30.570 [2024-10-11 12:06:33.091655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.570 [2024-10-11 12:06:33.091686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.570 qpair failed and we were unable to recover it. 00:29:30.570 [2024-10-11 12:06:33.091931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.570 [2024-10-11 12:06:33.091969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.570 qpair failed and we were unable to recover it. 00:29:30.570 [2024-10-11 12:06:33.092242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.570 [2024-10-11 12:06:33.092278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.570 qpair failed and we were unable to recover it. 
00:29:30.570 [2024-10-11 12:06:33.092518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.570 [2024-10-11 12:06:33.092550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.570 qpair failed and we were unable to recover it. 00:29:30.570 [2024-10-11 12:06:33.092817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.570 [2024-10-11 12:06:33.092854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.570 qpair failed and we were unable to recover it. 00:29:30.570 [2024-10-11 12:06:33.093117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.570 [2024-10-11 12:06:33.093160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.570 qpair failed and we were unable to recover it. 00:29:30.570 [2024-10-11 12:06:33.093539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.570 [2024-10-11 12:06:33.093573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.570 qpair failed and we were unable to recover it. 00:29:30.570 [2024-10-11 12:06:33.093926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.570 [2024-10-11 12:06:33.093959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.571 qpair failed and we were unable to recover it. 00:29:30.571 [2024-10-11 12:06:33.094318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.571 [2024-10-11 12:06:33.094356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.571 qpair failed and we were unable to recover it. 00:29:30.571 [2024-10-11 12:06:33.094580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.571 [2024-10-11 12:06:33.094615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.571 qpair failed and we were unable to recover it. 00:29:30.571 [2024-10-11 12:06:33.094892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.571 [2024-10-11 12:06:33.094925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.571 qpair failed and we were unable to recover it. 00:29:30.571 [2024-10-11 12:06:33.095135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.571 [2024-10-11 12:06:33.095170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.571 qpair failed and we were unable to recover it. 00:29:30.571 [2024-10-11 12:06:33.095400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.571 [2024-10-11 12:06:33.095432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.571 qpair failed and we were unable to recover it. 
00:29:30.571 [2024-10-11 12:06:33.095780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.571 [2024-10-11 12:06:33.095812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.571 qpair failed and we were unable to recover it. 00:29:30.571 [2024-10-11 12:06:33.096173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.571 [2024-10-11 12:06:33.096210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.571 qpair failed and we were unable to recover it. 00:29:30.571 [2024-10-11 12:06:33.096598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.571 [2024-10-11 12:06:33.096632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.571 qpair failed and we were unable to recover it. 00:29:30.571 [2024-10-11 12:06:33.096860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.571 [2024-10-11 12:06:33.096893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.571 qpair failed and we were unable to recover it. 00:29:30.571 [2024-10-11 12:06:33.097171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.571 [2024-10-11 12:06:33.097206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.571 qpair failed and we were unable to recover it. 00:29:30.571 [2024-10-11 12:06:33.097455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.571 [2024-10-11 12:06:33.097491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.571 qpair failed and we were unable to recover it. 00:29:30.571 [2024-10-11 12:06:33.097751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.571 [2024-10-11 12:06:33.097786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.571 qpair failed and we were unable to recover it. 00:29:30.571 [2024-10-11 12:06:33.098004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.571 [2024-10-11 12:06:33.098037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.571 qpair failed and we were unable to recover it. 00:29:30.571 [2024-10-11 12:06:33.098305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.571 [2024-10-11 12:06:33.098336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.571 qpair failed and we were unable to recover it. 00:29:30.571 [2024-10-11 12:06:33.098596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.571 [2024-10-11 12:06:33.098628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.571 qpair failed and we were unable to recover it. 
00:29:30.571 [2024-10-11 12:06:33.098991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.571 [2024-10-11 12:06:33.099024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.571 qpair failed and we were unable to recover it. 00:29:30.571 [2024-10-11 12:06:33.099296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.571 [2024-10-11 12:06:33.099331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.571 qpair failed and we were unable to recover it. 00:29:30.571 [2024-10-11 12:06:33.099697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.571 [2024-10-11 12:06:33.099730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.571 qpair failed and we were unable to recover it. 00:29:30.571 [2024-10-11 12:06:33.100102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.571 [2024-10-11 12:06:33.100136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.571 qpair failed and we were unable to recover it. 00:29:30.571 [2024-10-11 12:06:33.100524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.571 [2024-10-11 12:06:33.100557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.571 qpair failed and we were unable to recover it. 00:29:30.571 [2024-10-11 12:06:33.100915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.571 [2024-10-11 12:06:33.100948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.571 qpair failed and we were unable to recover it. 00:29:30.571 [2024-10-11 12:06:33.101345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.571 [2024-10-11 12:06:33.101377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.571 qpair failed and we were unable to recover it. 00:29:30.571 [2024-10-11 12:06:33.101747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.571 [2024-10-11 12:06:33.101777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.571 qpair failed and we were unable to recover it. 00:29:30.571 [2024-10-11 12:06:33.101990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.571 [2024-10-11 12:06:33.102026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.571 qpair failed and we were unable to recover it. 00:29:30.571 [2024-10-11 12:06:33.102432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.571 [2024-10-11 12:06:33.102464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.571 qpair failed and we were unable to recover it. 
00:29:30.571 [2024-10-11 12:06:33.102862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.571 [2024-10-11 12:06:33.102895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.571 qpair failed and we were unable to recover it. 00:29:30.571 [2024-10-11 12:06:33.103148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.571 [2024-10-11 12:06:33.103182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.571 qpair failed and we were unable to recover it. 00:29:30.571 [2024-10-11 12:06:33.103548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.571 [2024-10-11 12:06:33.103581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.571 qpair failed and we were unable to recover it. 00:29:30.571 [2024-10-11 12:06:33.103948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.571 [2024-10-11 12:06:33.103981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.571 qpair failed and we were unable to recover it. 00:29:30.571 [2024-10-11 12:06:33.104293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.571 [2024-10-11 12:06:33.104328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.571 qpair failed and we were unable to recover it. 00:29:30.571 [2024-10-11 12:06:33.104697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.571 [2024-10-11 12:06:33.104731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.571 qpair failed and we were unable to recover it. 00:29:30.571 [2024-10-11 12:06:33.105088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.571 [2024-10-11 12:06:33.105122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.571 qpair failed and we were unable to recover it. 00:29:30.571 [2024-10-11 12:06:33.105533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.571 [2024-10-11 12:06:33.105565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.571 qpair failed and we were unable to recover it. 00:29:30.571 [2024-10-11 12:06:33.105802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.571 [2024-10-11 12:06:33.105833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.571 qpair failed and we were unable to recover it. 00:29:30.571 [2024-10-11 12:06:33.106194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.571 [2024-10-11 12:06:33.106226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.571 qpair failed and we were unable to recover it. 
00:29:30.571 [2024-10-11 12:06:33.106586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.571 [2024-10-11 12:06:33.106621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.571 qpair failed and we were unable to recover it. 00:29:30.572 [2024-10-11 12:06:33.106982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.572 [2024-10-11 12:06:33.107013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.572 qpair failed and we were unable to recover it. 00:29:30.572 [2024-10-11 12:06:33.107415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.572 [2024-10-11 12:06:33.107448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.572 qpair failed and we were unable to recover it. 00:29:30.572 [2024-10-11 12:06:33.107792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.572 [2024-10-11 12:06:33.107834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.572 qpair failed and we were unable to recover it. 00:29:30.572 [2024-10-11 12:06:33.108186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.572 [2024-10-11 12:06:33.108221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.572 qpair failed and we were unable to recover it. 00:29:30.572 [2024-10-11 12:06:33.108475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.572 [2024-10-11 12:06:33.108508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.572 qpair failed and we were unable to recover it. 00:29:30.572 [2024-10-11 12:06:33.108876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.572 [2024-10-11 12:06:33.108909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.572 qpair failed and we were unable to recover it. 00:29:30.572 [2024-10-11 12:06:33.109252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.572 [2024-10-11 12:06:33.109284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.572 qpair failed and we were unable to recover it. 00:29:30.572 [2024-10-11 12:06:33.109663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.572 [2024-10-11 12:06:33.109696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.572 qpair failed and we were unable to recover it. 00:29:30.572 [2024-10-11 12:06:33.110047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.572 [2024-10-11 12:06:33.110092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.572 qpair failed and we were unable to recover it. 
00:29:30.572 [2024-10-11 12:06:33.110438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.572 [2024-10-11 12:06:33.110472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.572 qpair failed and we were unable to recover it. 00:29:30.572 [2024-10-11 12:06:33.110835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.572 [2024-10-11 12:06:33.110867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.572 qpair failed and we were unable to recover it. 00:29:30.572 [2024-10-11 12:06:33.111241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.572 [2024-10-11 12:06:33.111275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.572 qpair failed and we were unable to recover it. 00:29:30.572 [2024-10-11 12:06:33.111644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.572 [2024-10-11 12:06:33.111675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.572 qpair failed and we were unable to recover it. 00:29:30.572 [2024-10-11 12:06:33.111923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.572 [2024-10-11 12:06:33.111957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.572 qpair failed and we were unable to recover it. 00:29:30.572 [2024-10-11 12:06:33.112292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.572 [2024-10-11 12:06:33.112326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.572 qpair failed and we were unable to recover it. 00:29:30.572 [2024-10-11 12:06:33.112721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.572 [2024-10-11 12:06:33.112755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.572 qpair failed and we were unable to recover it. 00:29:30.572 [2024-10-11 12:06:33.113157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.572 [2024-10-11 12:06:33.113192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.572 qpair failed and we were unable to recover it. 00:29:30.572 [2024-10-11 12:06:33.113552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.572 [2024-10-11 12:06:33.113585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.572 qpair failed and we were unable to recover it. 00:29:30.572 [2024-10-11 12:06:33.113949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.572 [2024-10-11 12:06:33.113981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.572 qpair failed and we were unable to recover it. 
00:29:30.572 [2024-10-11 12:06:33.114358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.572 [2024-10-11 12:06:33.114392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.572 qpair failed and we were unable to recover it. 00:29:30.572 [2024-10-11 12:06:33.114746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.572 [2024-10-11 12:06:33.114776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.572 qpair failed and we were unable to recover it. 00:29:30.572 [2024-10-11 12:06:33.115018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.572 [2024-10-11 12:06:33.115052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.572 qpair failed and we were unable to recover it. 00:29:30.572 [2024-10-11 12:06:33.115205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.572 [2024-10-11 12:06:33.115250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.572 qpair failed and we were unable to recover it. 00:29:30.572 [2024-10-11 12:06:33.115629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.572 [2024-10-11 12:06:33.115663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.572 qpair failed and we were unable to recover it. 00:29:30.572 [2024-10-11 12:06:33.116061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.572 [2024-10-11 12:06:33.116113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.572 qpair failed and we were unable to recover it. 00:29:30.572 [2024-10-11 12:06:33.116514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.572 [2024-10-11 12:06:33.116546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.572 qpair failed and we were unable to recover it. 00:29:30.572 [2024-10-11 12:06:33.116922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.572 [2024-10-11 12:06:33.116955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.572 qpair failed and we were unable to recover it. 00:29:30.572 [2024-10-11 12:06:33.117318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.572 [2024-10-11 12:06:33.117354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.572 qpair failed and we were unable to recover it. 00:29:30.572 [2024-10-11 12:06:33.117536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.572 [2024-10-11 12:06:33.117566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.572 qpair failed and we were unable to recover it. 
00:29:30.572 [2024-10-11 12:06:33.117927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.572 [2024-10-11 12:06:33.117968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.572 qpair failed and we were unable to recover it. 00:29:30.572 [2024-10-11 12:06:33.118313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.572 [2024-10-11 12:06:33.118345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.572 qpair failed and we were unable to recover it. 00:29:30.572 [2024-10-11 12:06:33.118700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.572 [2024-10-11 12:06:33.118733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.572 qpair failed and we were unable to recover it. 00:29:30.572 [2024-10-11 12:06:33.119094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.572 [2024-10-11 12:06:33.119127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.572 qpair failed and we were unable to recover it. 00:29:30.572 [2024-10-11 12:06:33.119457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.572 [2024-10-11 12:06:33.119489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.572 qpair failed and we were unable to recover it. 00:29:30.572 [2024-10-11 12:06:33.119855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.572 [2024-10-11 12:06:33.119886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.572 qpair failed and we were unable to recover it. 00:29:30.572 [2024-10-11 12:06:33.120240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.572 [2024-10-11 12:06:33.120271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.572 qpair failed and we were unable to recover it. 00:29:30.572 [2024-10-11 12:06:33.120510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.572 [2024-10-11 12:06:33.120542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.572 qpair failed and we were unable to recover it. 00:29:30.572 [2024-10-11 12:06:33.120902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.572 [2024-10-11 12:06:33.120933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.572 qpair failed and we were unable to recover it. 00:29:30.572 [2024-10-11 12:06:33.121312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.572 [2024-10-11 12:06:33.121346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.572 qpair failed and we were unable to recover it. 
00:29:30.572 [2024-10-11 12:06:33.121702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.572 [2024-10-11 12:06:33.121734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.572 qpair failed and we were unable to recover it. 00:29:30.572 [2024-10-11 12:06:33.122096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.573 [2024-10-11 12:06:33.122131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.573 qpair failed and we were unable to recover it. 00:29:30.573 [2024-10-11 12:06:33.122394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.573 [2024-10-11 12:06:33.122426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.573 qpair failed and we were unable to recover it. 00:29:30.573 [2024-10-11 12:06:33.122789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.573 [2024-10-11 12:06:33.122821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.573 qpair failed and we were unable to recover it. 00:29:30.573 [2024-10-11 12:06:33.123180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.573 [2024-10-11 12:06:33.123214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.573 qpair failed and we were unable to recover it. 00:29:30.573 [2024-10-11 12:06:33.123575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.573 [2024-10-11 12:06:33.123606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.573 qpair failed and we were unable to recover it. 00:29:30.573 [2024-10-11 12:06:33.123955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.573 [2024-10-11 12:06:33.123985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.573 qpair failed and we were unable to recover it. 00:29:30.573 [2024-10-11 12:06:33.124368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.573 [2024-10-11 12:06:33.124402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.573 qpair failed and we were unable to recover it. 00:29:30.573 [2024-10-11 12:06:33.124658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.573 [2024-10-11 12:06:33.124691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.573 qpair failed and we were unable to recover it. 00:29:30.573 [2024-10-11 12:06:33.125023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.573 [2024-10-11 12:06:33.125055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.573 qpair failed and we were unable to recover it. 
00:29:30.573 [2024-10-11 12:06:33.125453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.573 [2024-10-11 12:06:33.125485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.573 qpair failed and we were unable to recover it. 00:29:30.573 [2024-10-11 12:06:33.125852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.573 [2024-10-11 12:06:33.125884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.573 qpair failed and we were unable to recover it. 00:29:30.573 [2024-10-11 12:06:33.126261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.573 [2024-10-11 12:06:33.126295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.573 qpair failed and we were unable to recover it. 00:29:30.573 [2024-10-11 12:06:33.126669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.573 [2024-10-11 12:06:33.126701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.573 qpair failed and we were unable to recover it. 00:29:30.573 [2024-10-11 12:06:33.127155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.573 [2024-10-11 12:06:33.127190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.573 qpair failed and we were unable to recover it. 00:29:30.573 [2024-10-11 12:06:33.127565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.573 [2024-10-11 12:06:33.127597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.573 qpair failed and we were unable to recover it. 00:29:30.573 [2024-10-11 12:06:33.127957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.573 [2024-10-11 12:06:33.127990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.573 qpair failed and we were unable to recover it. 00:29:30.573 [2024-10-11 12:06:33.128378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.573 [2024-10-11 12:06:33.128411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.573 qpair failed and we were unable to recover it. 00:29:30.573 [2024-10-11 12:06:33.128768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.573 [2024-10-11 12:06:33.128800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.573 qpair failed and we were unable to recover it. 00:29:30.573 [2024-10-11 12:06:33.129169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.573 [2024-10-11 12:06:33.129204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.573 qpair failed and we were unable to recover it. 
00:29:30.573 [2024-10-11 12:06:33.129574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.573 [2024-10-11 12:06:33.129606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.573 qpair failed and we were unable to recover it. 00:29:30.573 [2024-10-11 12:06:33.129965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.573 [2024-10-11 12:06:33.129996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.573 qpair failed and we were unable to recover it. 00:29:30.573 [2024-10-11 12:06:33.130371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.573 [2024-10-11 12:06:33.130403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.573 qpair failed and we were unable to recover it. 00:29:30.573 [2024-10-11 12:06:33.130754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.573 [2024-10-11 12:06:33.130786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.573 qpair failed and we were unable to recover it. 00:29:30.573 [2024-10-11 12:06:33.131142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.573 [2024-10-11 12:06:33.131176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.573 qpair failed and we were unable to recover it. 00:29:30.573 [2024-10-11 12:06:33.131551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.573 [2024-10-11 12:06:33.131583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.573 qpair failed and we were unable to recover it. 00:29:30.573 [2024-10-11 12:06:33.131940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.573 [2024-10-11 12:06:33.131972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.573 qpair failed and we were unable to recover it. 00:29:30.573 [2024-10-11 12:06:33.132331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.573 [2024-10-11 12:06:33.132366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.573 qpair failed and we were unable to recover it. 00:29:30.573 [2024-10-11 12:06:33.132767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.573 [2024-10-11 12:06:33.132800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.573 qpair failed and we were unable to recover it. 00:29:30.573 [2024-10-11 12:06:33.133031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.573 [2024-10-11 12:06:33.133086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.573 qpair failed and we were unable to recover it. 
00:29:30.573 [2024-10-11 12:06:33.133446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.573 [2024-10-11 12:06:33.133478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.573 qpair failed and we were unable to recover it. 00:29:30.573 [2024-10-11 12:06:33.133843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.573 [2024-10-11 12:06:33.133883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.573 qpair failed and we were unable to recover it. 00:29:30.573 [2024-10-11 12:06:33.134239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.573 [2024-10-11 12:06:33.134273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.573 qpair failed and we were unable to recover it. 00:29:30.573 [2024-10-11 12:06:33.134634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.573 [2024-10-11 12:06:33.134666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.573 qpair failed and we were unable to recover it. 00:29:30.573 [2024-10-11 12:06:33.135021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.573 [2024-10-11 12:06:33.135054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.573 qpair failed and we were unable to recover it. 00:29:30.573 [2024-10-11 12:06:33.135445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.573 [2024-10-11 12:06:33.135478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.573 qpair failed and we were unable to recover it. 00:29:30.573 [2024-10-11 12:06:33.135851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.573 [2024-10-11 12:06:33.135883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.573 qpair failed and we were unable to recover it. 00:29:30.573 [2024-10-11 12:06:33.136235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.573 [2024-10-11 12:06:33.136269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.573 qpair failed and we were unable to recover it. 00:29:30.573 [2024-10-11 12:06:33.136622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.573 [2024-10-11 12:06:33.136653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.573 qpair failed and we were unable to recover it. 00:29:30.573 [2024-10-11 12:06:33.136918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.573 [2024-10-11 12:06:33.136949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.573 qpair failed and we were unable to recover it. 
00:29:30.573 [2024-10-11 12:06:33.137319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.573 [2024-10-11 12:06:33.137352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.573 qpair failed and we were unable to recover it. 00:29:30.573 [2024-10-11 12:06:33.137707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.573 [2024-10-11 12:06:33.137739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.573 qpair failed and we were unable to recover it. 00:29:30.574 [2024-10-11 12:06:33.137992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.574 [2024-10-11 12:06:33.138026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.574 qpair failed and we were unable to recover it. 00:29:30.574 [2024-10-11 12:06:33.138408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.574 [2024-10-11 12:06:33.138442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.574 qpair failed and we were unable to recover it. 00:29:30.574 [2024-10-11 12:06:33.138849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.574 [2024-10-11 12:06:33.138881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.574 qpair failed and we were unable to recover it. 00:29:30.574 [2024-10-11 12:06:33.139235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.574 [2024-10-11 12:06:33.139275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.574 qpair failed and we were unable to recover it. 00:29:30.574 [2024-10-11 12:06:33.139633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.574 [2024-10-11 12:06:33.139665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.574 qpair failed and we were unable to recover it. 00:29:30.574 [2024-10-11 12:06:33.139984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.574 [2024-10-11 12:06:33.140017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.574 qpair failed and we were unable to recover it. 00:29:30.574 [2024-10-11 12:06:33.140399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.574 [2024-10-11 12:06:33.140432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.574 qpair failed and we were unable to recover it. 00:29:30.574 [2024-10-11 12:06:33.140791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.574 [2024-10-11 12:06:33.140824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.574 qpair failed and we were unable to recover it. 
00:29:30.574 [2024-10-11 12:06:33.141219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.574 [2024-10-11 12:06:33.141253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.574 qpair failed and we were unable to recover it. 00:29:30.574 [2024-10-11 12:06:33.141605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.574 [2024-10-11 12:06:33.141639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.574 qpair failed and we were unable to recover it. 00:29:30.574 [2024-10-11 12:06:33.142015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.574 [2024-10-11 12:06:33.142046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.574 qpair failed and we were unable to recover it. 00:29:30.574 [2024-10-11 12:06:33.142427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.574 [2024-10-11 12:06:33.142460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.574 qpair failed and we were unable to recover it. 00:29:30.574 [2024-10-11 12:06:33.142814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.574 [2024-10-11 12:06:33.142847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.574 qpair failed and we were unable to recover it. 00:29:30.574 [2024-10-11 12:06:33.143207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.574 [2024-10-11 12:06:33.143240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.574 qpair failed and we were unable to recover it. 00:29:30.574 [2024-10-11 12:06:33.143593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.574 [2024-10-11 12:06:33.143624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.574 qpair failed and we were unable to recover it. 00:29:30.574 [2024-10-11 12:06:33.143986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.574 [2024-10-11 12:06:33.144020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.574 qpair failed and we were unable to recover it. 00:29:30.574 [2024-10-11 12:06:33.144298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.574 [2024-10-11 12:06:33.144338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.574 qpair failed and we were unable to recover it. 00:29:30.574 [2024-10-11 12:06:33.144697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.574 [2024-10-11 12:06:33.144730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.574 qpair failed and we were unable to recover it. 
00:29:30.574 [2024-10-11 12:06:33.145093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.574 [2024-10-11 12:06:33.145128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.574 qpair failed and we were unable to recover it. 00:29:30.574 [2024-10-11 12:06:33.145483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.574 [2024-10-11 12:06:33.145514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.574 qpair failed and we were unable to recover it. 00:29:30.574 [2024-10-11 12:06:33.145875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.574 [2024-10-11 12:06:33.145908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.574 qpair failed and we were unable to recover it. 00:29:30.574 [2024-10-11 12:06:33.146148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.574 [2024-10-11 12:06:33.146184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.574 qpair failed and we were unable to recover it. 00:29:30.574 [2024-10-11 12:06:33.146450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.574 [2024-10-11 12:06:33.146480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.574 qpair failed and we were unable to recover it. 00:29:30.574 [2024-10-11 12:06:33.146827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.574 [2024-10-11 12:06:33.146859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.574 qpair failed and we were unable to recover it. 00:29:30.574 [2024-10-11 12:06:33.147221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.574 [2024-10-11 12:06:33.147256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.574 qpair failed and we were unable to recover it. 00:29:30.574 [2024-10-11 12:06:33.147631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.574 [2024-10-11 12:06:33.147663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.574 qpair failed and we were unable to recover it. 00:29:30.574 [2024-10-11 12:06:33.148038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.574 [2024-10-11 12:06:33.148083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.574 qpair failed and we were unable to recover it. 00:29:30.574 [2024-10-11 12:06:33.148474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.574 [2024-10-11 12:06:33.148506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.574 qpair failed and we were unable to recover it. 
00:29:30.574 [2024-10-11 12:06:33.148863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.574 [2024-10-11 12:06:33.148894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.574 qpair failed and we were unable to recover it. 00:29:30.574 [2024-10-11 12:06:33.149141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.574 [2024-10-11 12:06:33.149173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.574 qpair failed and we were unable to recover it. 00:29:30.574 [2024-10-11 12:06:33.149531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.574 [2024-10-11 12:06:33.149564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.574 qpair failed and we were unable to recover it. 00:29:30.574 [2024-10-11 12:06:33.149788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.574 [2024-10-11 12:06:33.149821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.574 qpair failed and we were unable to recover it. 00:29:30.574 [2024-10-11 12:06:33.150181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.574 [2024-10-11 12:06:33.150214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.574 qpair failed and we were unable to recover it. 00:29:30.574 [2024-10-11 12:06:33.150570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.574 [2024-10-11 12:06:33.150603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.574 qpair failed and we were unable to recover it. 00:29:30.574 [2024-10-11 12:06:33.150959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.574 [2024-10-11 12:06:33.150991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.574 qpair failed and we were unable to recover it. 00:29:30.574 [2024-10-11 12:06:33.151342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.574 [2024-10-11 12:06:33.151377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.574 qpair failed and we were unable to recover it. 00:29:30.574 [2024-10-11 12:06:33.151737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.574 [2024-10-11 12:06:33.151769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.574 qpair failed and we were unable to recover it. 00:29:30.574 [2024-10-11 12:06:33.152138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.574 [2024-10-11 12:06:33.152173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.574 qpair failed and we were unable to recover it. 
00:29:30.574 [2024-10-11 12:06:33.152551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.574 [2024-10-11 12:06:33.152583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.574 qpair failed and we were unable to recover it. 00:29:30.574 [2024-10-11 12:06:33.152942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.574 [2024-10-11 12:06:33.152973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.574 qpair failed and we were unable to recover it. 00:29:30.574 [2024-10-11 12:06:33.153313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.574 [2024-10-11 12:06:33.153348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.574 qpair failed and we were unable to recover it. 00:29:30.575 [2024-10-11 12:06:33.153582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.575 [2024-10-11 12:06:33.153618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.575 qpair failed and we were unable to recover it. 00:29:30.575 [2024-10-11 12:06:33.153846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.575 [2024-10-11 12:06:33.153877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.575 qpair failed and we were unable to recover it. 00:29:30.575 [2024-10-11 12:06:33.153997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.575 [2024-10-11 12:06:33.154029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.575 qpair failed and we were unable to recover it. 00:29:30.575 [2024-10-11 12:06:33.154448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.575 [2024-10-11 12:06:33.154482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.575 qpair failed and we were unable to recover it. 00:29:30.575 [2024-10-11 12:06:33.154837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.575 [2024-10-11 12:06:33.154870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.575 qpair failed and we were unable to recover it. 00:29:30.575 [2024-10-11 12:06:33.155238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.575 [2024-10-11 12:06:33.155272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.575 qpair failed and we were unable to recover it. 00:29:30.575 [2024-10-11 12:06:33.155631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.575 [2024-10-11 12:06:33.155665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.575 qpair failed and we were unable to recover it. 
00:29:30.575 [2024-10-11 12:06:33.155896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.575 [2024-10-11 12:06:33.155929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.575 qpair failed and we were unable to recover it. 00:29:30.575 [2024-10-11 12:06:33.156224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.575 [2024-10-11 12:06:33.156258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.575 qpair failed and we were unable to recover it. 00:29:30.575 [2024-10-11 12:06:33.156627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.575 [2024-10-11 12:06:33.156659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.575 qpair failed and we were unable to recover it. 00:29:30.575 [2024-10-11 12:06:33.157012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.575 [2024-10-11 12:06:33.157046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.575 qpair failed and we were unable to recover it. 00:29:30.575 [2024-10-11 12:06:33.157408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.575 [2024-10-11 12:06:33.157439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.575 qpair failed and we were unable to recover it. 00:29:30.575 [2024-10-11 12:06:33.157779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.575 [2024-10-11 12:06:33.157813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.575 qpair failed and we were unable to recover it. 00:29:30.575 [2024-10-11 12:06:33.158162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.575 [2024-10-11 12:06:33.158196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.575 qpair failed and we were unable to recover it. 00:29:30.575 [2024-10-11 12:06:33.158556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.575 [2024-10-11 12:06:33.158587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.575 qpair failed and we were unable to recover it. 00:29:30.575 [2024-10-11 12:06:33.158807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.575 [2024-10-11 12:06:33.158838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.575 qpair failed and we were unable to recover it. 00:29:30.575 [2024-10-11 12:06:33.159250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.575 [2024-10-11 12:06:33.159290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.575 qpair failed and we were unable to recover it. 
00:29:30.575 [2024-10-11 12:06:33.159636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.575 [2024-10-11 12:06:33.159670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.575 qpair failed and we were unable to recover it. 00:29:30.575 [2024-10-11 12:06:33.160026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.575 [2024-10-11 12:06:33.160058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.575 qpair failed and we were unable to recover it. 00:29:30.575 [2024-10-11 12:06:33.160470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.575 [2024-10-11 12:06:33.160504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.575 qpair failed and we were unable to recover it. 00:29:30.575 [2024-10-11 12:06:33.160871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.575 [2024-10-11 12:06:33.160904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.575 qpair failed and we were unable to recover it. 00:29:30.575 [2024-10-11 12:06:33.161291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.575 [2024-10-11 12:06:33.161325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.575 qpair failed and we were unable to recover it. 00:29:30.575 [2024-10-11 12:06:33.161691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.575 [2024-10-11 12:06:33.161723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.575 qpair failed and we were unable to recover it. 00:29:30.575 [2024-10-11 12:06:33.162173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.575 [2024-10-11 12:06:33.162208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.575 qpair failed and we were unable to recover it. 00:29:30.575 [2024-10-11 12:06:33.162527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.575 [2024-10-11 12:06:33.162558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.575 qpair failed and we were unable to recover it. 00:29:30.575 [2024-10-11 12:06:33.162963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.575 [2024-10-11 12:06:33.162995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.575 qpair failed and we were unable to recover it. 00:29:30.575 [2024-10-11 12:06:33.163335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.575 [2024-10-11 12:06:33.163371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.575 qpair failed and we were unable to recover it. 
00:29:30.575 [2024-10-11 12:06:33.163601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.575 [2024-10-11 12:06:33.163632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.575 qpair failed and we were unable to recover it. 00:29:30.575 [2024-10-11 12:06:33.163919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.575 [2024-10-11 12:06:33.163951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.575 qpair failed and we were unable to recover it. 00:29:30.575 [2024-10-11 12:06:33.164287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.575 [2024-10-11 12:06:33.164322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.575 qpair failed and we were unable to recover it. 00:29:30.575 [2024-10-11 12:06:33.164554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.575 [2024-10-11 12:06:33.164589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.575 qpair failed and we were unable to recover it. 00:29:30.575 [2024-10-11 12:06:33.164992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.575 [2024-10-11 12:06:33.165026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.575 qpair failed and we were unable to recover it. 00:29:30.575 [2024-10-11 12:06:33.165277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.575 [2024-10-11 12:06:33.165311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.575 qpair failed and we were unable to recover it. 00:29:30.575 [2024-10-11 12:06:33.165664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.575 [2024-10-11 12:06:33.165697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.575 qpair failed and we were unable to recover it. 00:29:30.575 [2024-10-11 12:06:33.165932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.575 [2024-10-11 12:06:33.165967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.575 qpair failed and we were unable to recover it. 00:29:30.575 [2024-10-11 12:06:33.166351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.575 [2024-10-11 12:06:33.166386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.575 qpair failed and we were unable to recover it. 00:29:30.575 [2024-10-11 12:06:33.166618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.576 [2024-10-11 12:06:33.166653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.576 qpair failed and we were unable to recover it. 
00:29:30.576 [2024-10-11 12:06:33.167005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.576 [2024-10-11 12:06:33.167038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.576 qpair failed and we were unable to recover it. 00:29:30.576 [2024-10-11 12:06:33.167331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.576 [2024-10-11 12:06:33.167366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.576 qpair failed and we were unable to recover it. 00:29:30.576 [2024-10-11 12:06:33.167605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.576 [2024-10-11 12:06:33.167638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.576 qpair failed and we were unable to recover it. 00:29:30.576 [2024-10-11 12:06:33.167896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.576 [2024-10-11 12:06:33.167929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.576 qpair failed and we were unable to recover it. 00:29:30.576 [2024-10-11 12:06:33.168334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.576 [2024-10-11 12:06:33.168368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.576 qpair failed and we were unable to recover it. 00:29:30.576 [2024-10-11 12:06:33.168714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.576 [2024-10-11 12:06:33.168747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.576 qpair failed and we were unable to recover it. 00:29:30.576 [2024-10-11 12:06:33.168885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.576 [2024-10-11 12:06:33.168916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.576 qpair failed and we were unable to recover it. 00:29:30.576 [2024-10-11 12:06:33.169319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.576 [2024-10-11 12:06:33.169352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.576 qpair failed and we were unable to recover it. 00:29:30.576 [2024-10-11 12:06:33.169701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.576 [2024-10-11 12:06:33.169735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.576 qpair failed and we were unable to recover it. 00:29:30.576 [2024-10-11 12:06:33.170107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.576 [2024-10-11 12:06:33.170143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.576 qpair failed and we were unable to recover it. 
00:29:30.576 [2024-10-11 12:06:33.170503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.576 [2024-10-11 12:06:33.170535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.576 qpair failed and we were unable to recover it. 00:29:30.576 [2024-10-11 12:06:33.170772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.576 [2024-10-11 12:06:33.170803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.576 qpair failed and we were unable to recover it. 00:29:30.576 [2024-10-11 12:06:33.171213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.576 [2024-10-11 12:06:33.171246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.576 qpair failed and we were unable to recover it. 00:29:30.576 [2024-10-11 12:06:33.171593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.576 [2024-10-11 12:06:33.171627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.576 qpair failed and we were unable to recover it. 00:29:30.576 [2024-10-11 12:06:33.171970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.576 [2024-10-11 12:06:33.172001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.576 qpair failed and we were unable to recover it. 00:29:30.576 [2024-10-11 12:06:33.172362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.576 [2024-10-11 12:06:33.172396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.576 qpair failed and we were unable to recover it. 00:29:30.576 [2024-10-11 12:06:33.172643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.576 [2024-10-11 12:06:33.172678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.576 qpair failed and we were unable to recover it. 00:29:30.576 [2024-10-11 12:06:33.173088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.576 [2024-10-11 12:06:33.173121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.576 qpair failed and we were unable to recover it. 00:29:30.576 [2024-10-11 12:06:33.173466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.576 [2024-10-11 12:06:33.173502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.576 qpair failed and we were unable to recover it. 00:29:30.576 [2024-10-11 12:06:33.173868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.576 [2024-10-11 12:06:33.173899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.576 qpair failed and we were unable to recover it. 
00:29:30.576 [2024-10-11 12:06:33.174248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.576 [2024-10-11 12:06:33.174285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.576 qpair failed and we were unable to recover it. 00:29:30.576 [2024-10-11 12:06:33.174643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.576 [2024-10-11 12:06:33.174676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.576 qpair failed and we were unable to recover it. 00:29:30.576 [2024-10-11 12:06:33.175035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.576 [2024-10-11 12:06:33.175092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.576 qpair failed and we were unable to recover it. 00:29:30.576 [2024-10-11 12:06:33.175479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.576 [2024-10-11 12:06:33.175511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.576 qpair failed and we were unable to recover it. 00:29:30.576 [2024-10-11 12:06:33.175873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.576 [2024-10-11 12:06:33.175908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.576 qpair failed and we were unable to recover it. 00:29:30.576 [2024-10-11 12:06:33.176288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.576 [2024-10-11 12:06:33.176321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.576 qpair failed and we were unable to recover it. 00:29:30.576 [2024-10-11 12:06:33.176571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.576 [2024-10-11 12:06:33.176602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.576 qpair failed and we were unable to recover it. 00:29:30.576 [2024-10-11 12:06:33.176951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.576 [2024-10-11 12:06:33.176983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.576 qpair failed and we were unable to recover it. 00:29:30.576 [2024-10-11 12:06:33.177317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.576 [2024-10-11 12:06:33.177351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.576 qpair failed and we were unable to recover it. 00:29:30.576 [2024-10-11 12:06:33.177714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.576 [2024-10-11 12:06:33.177745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.576 qpair failed and we were unable to recover it. 
00:29:30.576 [2024-10-11 12:06:33.178087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.576 [2024-10-11 12:06:33.178122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.576 qpair failed and we were unable to recover it. 00:29:30.576 [2024-10-11 12:06:33.178525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.576 [2024-10-11 12:06:33.178557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.576 qpair failed and we were unable to recover it. 00:29:30.576 [2024-10-11 12:06:33.178908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.576 [2024-10-11 12:06:33.178940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.576 qpair failed and we were unable to recover it. 00:29:30.576 [2024-10-11 12:06:33.179335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.576 [2024-10-11 12:06:33.179370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.576 qpair failed and we were unable to recover it. 00:29:30.576 [2024-10-11 12:06:33.179740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.576 [2024-10-11 12:06:33.179773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.576 qpair failed and we were unable to recover it. 00:29:30.576 [2024-10-11 12:06:33.180154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.576 [2024-10-11 12:06:33.180187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.576 qpair failed and we were unable to recover it. 00:29:30.576 [2024-10-11 12:06:33.180532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.576 [2024-10-11 12:06:33.180565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.576 qpair failed and we were unable to recover it. 00:29:30.576 [2024-10-11 12:06:33.180923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.576 [2024-10-11 12:06:33.180957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.576 qpair failed and we were unable to recover it. 00:29:30.576 [2024-10-11 12:06:33.181324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.576 [2024-10-11 12:06:33.181357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.576 qpair failed and we were unable to recover it. 00:29:30.576 [2024-10-11 12:06:33.181732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.576 [2024-10-11 12:06:33.181766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.576 qpair failed and we were unable to recover it. 
00:29:30.576 [2024-10-11 12:06:33.182110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.577 [2024-10-11 12:06:33.182144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420
00:29:30.577 qpair failed and we were unable to recover it.
[... the same pair of messages, posix_sock_create "connect() failed, errno = 111" followed by nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420", repeats for every retry through 2024-10-11 12:06:33.262 (console time 00:29:30.859), each attempt ending with "qpair failed and we were unable to recover it." ...]
00:29:30.859 [2024-10-11 12:06:33.262470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.859 [2024-10-11 12:06:33.262503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.859 qpair failed and we were unable to recover it. 00:29:30.859 [2024-10-11 12:06:33.262882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.859 [2024-10-11 12:06:33.262921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.859 qpair failed and we were unable to recover it. 00:29:30.859 [2024-10-11 12:06:33.263268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.859 [2024-10-11 12:06:33.263302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.859 qpair failed and we were unable to recover it. 00:29:30.859 [2024-10-11 12:06:33.263650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.859 [2024-10-11 12:06:33.263681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.859 qpair failed and we were unable to recover it. 00:29:30.859 [2024-10-11 12:06:33.264045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.859 [2024-10-11 12:06:33.264087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.859 qpair failed and we were unable to recover it. 00:29:30.859 [2024-10-11 12:06:33.264476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.859 [2024-10-11 12:06:33.264508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.859 qpair failed and we were unable to recover it. 00:29:30.859 [2024-10-11 12:06:33.264932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.859 [2024-10-11 12:06:33.264964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.859 qpair failed and we were unable to recover it. 00:29:30.859 [2024-10-11 12:06:33.265311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.859 [2024-10-11 12:06:33.265344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.859 qpair failed and we were unable to recover it. 00:29:30.859 [2024-10-11 12:06:33.265708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.859 [2024-10-11 12:06:33.265739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.859 qpair failed and we were unable to recover it. 00:29:30.859 [2024-10-11 12:06:33.266088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.859 [2024-10-11 12:06:33.266121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.859 qpair failed and we were unable to recover it. 
00:29:30.859 [2024-10-11 12:06:33.266391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.860 [2024-10-11 12:06:33.266424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.860 qpair failed and we were unable to recover it. 00:29:30.860 [2024-10-11 12:06:33.266683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.860 [2024-10-11 12:06:33.266714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.860 qpair failed and we were unable to recover it. 00:29:30.860 [2024-10-11 12:06:33.267085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.860 [2024-10-11 12:06:33.267118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.860 qpair failed and we were unable to recover it. 00:29:30.860 [2024-10-11 12:06:33.267508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.860 [2024-10-11 12:06:33.267540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.860 qpair failed and we were unable to recover it. 00:29:30.860 [2024-10-11 12:06:33.267908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.860 [2024-10-11 12:06:33.267942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.860 qpair failed and we were unable to recover it. 00:29:30.860 [2024-10-11 12:06:33.268298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.860 [2024-10-11 12:06:33.268332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.860 qpair failed and we were unable to recover it. 00:29:30.860 [2024-10-11 12:06:33.268684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.860 [2024-10-11 12:06:33.268714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.860 qpair failed and we were unable to recover it. 00:29:30.860 [2024-10-11 12:06:33.269091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.860 [2024-10-11 12:06:33.269123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.860 qpair failed and we were unable to recover it. 00:29:30.860 [2024-10-11 12:06:33.269350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.860 [2024-10-11 12:06:33.269381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.860 qpair failed and we were unable to recover it. 00:29:30.860 [2024-10-11 12:06:33.269702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.860 [2024-10-11 12:06:33.269733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.860 qpair failed and we were unable to recover it. 
00:29:30.860 [2024-10-11 12:06:33.270107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.860 [2024-10-11 12:06:33.270138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.860 qpair failed and we were unable to recover it. 00:29:30.860 [2024-10-11 12:06:33.270468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.860 [2024-10-11 12:06:33.270499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.860 qpair failed and we were unable to recover it. 00:29:30.860 [2024-10-11 12:06:33.270752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.860 [2024-10-11 12:06:33.270783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.860 qpair failed and we were unable to recover it. 00:29:30.860 [2024-10-11 12:06:33.271143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.860 [2024-10-11 12:06:33.271176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.860 qpair failed and we were unable to recover it. 00:29:30.860 [2024-10-11 12:06:33.271615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.860 [2024-10-11 12:06:33.271649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.860 qpair failed and we were unable to recover it. 00:29:30.860 [2024-10-11 12:06:33.272011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.860 [2024-10-11 12:06:33.272044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.860 qpair failed and we were unable to recover it. 00:29:30.860 [2024-10-11 12:06:33.272438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.860 [2024-10-11 12:06:33.272471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.860 qpair failed and we were unable to recover it. 00:29:30.860 [2024-10-11 12:06:33.272826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.860 [2024-10-11 12:06:33.272858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.860 qpair failed and we were unable to recover it. 00:29:30.860 [2024-10-11 12:06:33.273219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.860 [2024-10-11 12:06:33.273254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.860 qpair failed and we were unable to recover it. 00:29:30.860 [2024-10-11 12:06:33.273601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.860 [2024-10-11 12:06:33.273635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.860 qpair failed and we were unable to recover it. 
00:29:30.860 [2024-10-11 12:06:33.273988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.860 [2024-10-11 12:06:33.274020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.860 qpair failed and we were unable to recover it. 00:29:30.860 [2024-10-11 12:06:33.274406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.860 [2024-10-11 12:06:33.274439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.860 qpair failed and we were unable to recover it. 00:29:30.860 [2024-10-11 12:06:33.274801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.860 [2024-10-11 12:06:33.274834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.860 qpair failed and we were unable to recover it. 00:29:30.860 [2024-10-11 12:06:33.275202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.860 [2024-10-11 12:06:33.275234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.860 qpair failed and we were unable to recover it. 00:29:30.860 [2024-10-11 12:06:33.275597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.860 [2024-10-11 12:06:33.275630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.860 qpair failed and we were unable to recover it. 00:29:30.860 [2024-10-11 12:06:33.276009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.860 [2024-10-11 12:06:33.276043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.860 qpair failed and we were unable to recover it. 00:29:30.860 [2024-10-11 12:06:33.276436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.860 [2024-10-11 12:06:33.276469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.860 qpair failed and we were unable to recover it. 00:29:30.860 [2024-10-11 12:06:33.276834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.860 [2024-10-11 12:06:33.276864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.860 qpair failed and we were unable to recover it. 00:29:30.860 [2024-10-11 12:06:33.277128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.860 [2024-10-11 12:06:33.277161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.860 qpair failed and we were unable to recover it. 00:29:30.860 [2024-10-11 12:06:33.277401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.860 [2024-10-11 12:06:33.277435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.860 qpair failed and we were unable to recover it. 
00:29:30.860 [2024-10-11 12:06:33.277804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.860 [2024-10-11 12:06:33.277836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.860 qpair failed and we were unable to recover it. 00:29:30.860 [2024-10-11 12:06:33.278200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.860 [2024-10-11 12:06:33.278235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.860 qpair failed and we were unable to recover it. 00:29:30.860 [2024-10-11 12:06:33.278587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.860 [2024-10-11 12:06:33.278627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.860 qpair failed and we were unable to recover it. 00:29:30.860 [2024-10-11 12:06:33.278853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.860 [2024-10-11 12:06:33.278887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.860 qpair failed and we were unable to recover it. 00:29:30.860 [2024-10-11 12:06:33.279234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.860 [2024-10-11 12:06:33.279267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.860 qpair failed and we were unable to recover it. 00:29:30.860 [2024-10-11 12:06:33.279614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.860 [2024-10-11 12:06:33.279647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.860 qpair failed and we were unable to recover it. 00:29:30.860 [2024-10-11 12:06:33.280079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.860 [2024-10-11 12:06:33.280112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.860 qpair failed and we were unable to recover it. 00:29:30.860 [2024-10-11 12:06:33.280521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.860 [2024-10-11 12:06:33.280553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.860 qpair failed and we were unable to recover it. 00:29:30.860 [2024-10-11 12:06:33.280776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.860 [2024-10-11 12:06:33.280807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.860 qpair failed and we were unable to recover it. 00:29:30.860 [2024-10-11 12:06:33.281200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.860 [2024-10-11 12:06:33.281234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.860 qpair failed and we were unable to recover it. 
00:29:30.860 [2024-10-11 12:06:33.281467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.860 [2024-10-11 12:06:33.281498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.860 qpair failed and we were unable to recover it. 00:29:30.861 [2024-10-11 12:06:33.281735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.861 [2024-10-11 12:06:33.281767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.861 qpair failed and we were unable to recover it. 00:29:30.861 [2024-10-11 12:06:33.282138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.861 [2024-10-11 12:06:33.282173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.861 qpair failed and we were unable to recover it. 00:29:30.861 [2024-10-11 12:06:33.282417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.861 [2024-10-11 12:06:33.282449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.861 qpair failed and we were unable to recover it. 00:29:30.861 [2024-10-11 12:06:33.282787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.861 [2024-10-11 12:06:33.282822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.861 qpair failed and we were unable to recover it. 00:29:30.861 [2024-10-11 12:06:33.283218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.861 [2024-10-11 12:06:33.283251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.861 qpair failed and we were unable to recover it. 00:29:30.861 [2024-10-11 12:06:33.283627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.861 [2024-10-11 12:06:33.283658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.861 qpair failed and we were unable to recover it. 00:29:30.861 [2024-10-11 12:06:33.283903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.861 [2024-10-11 12:06:33.283934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.861 qpair failed and we were unable to recover it. 00:29:30.861 [2024-10-11 12:06:33.284198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.861 [2024-10-11 12:06:33.284231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.861 qpair failed and we were unable to recover it. 00:29:30.861 [2024-10-11 12:06:33.284534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.861 [2024-10-11 12:06:33.284565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.861 qpair failed and we were unable to recover it. 
00:29:30.861 [2024-10-11 12:06:33.284915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.861 [2024-10-11 12:06:33.284948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.861 qpair failed and we were unable to recover it. 00:29:30.861 [2024-10-11 12:06:33.285075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.861 [2024-10-11 12:06:33.285108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.861 qpair failed and we were unable to recover it. 00:29:30.861 [2024-10-11 12:06:33.285390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.861 [2024-10-11 12:06:33.285424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.861 qpair failed and we were unable to recover it. 00:29:30.861 [2024-10-11 12:06:33.285653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.861 [2024-10-11 12:06:33.285686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.861 qpair failed and we were unable to recover it. 00:29:30.861 [2024-10-11 12:06:33.286038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.861 [2024-10-11 12:06:33.286090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.861 qpair failed and we were unable to recover it. 00:29:30.861 [2024-10-11 12:06:33.286332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.861 [2024-10-11 12:06:33.286364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.861 qpair failed and we were unable to recover it. 00:29:30.861 [2024-10-11 12:06:33.286727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.861 [2024-10-11 12:06:33.286758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.861 qpair failed and we were unable to recover it. 00:29:30.861 [2024-10-11 12:06:33.287091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.861 [2024-10-11 12:06:33.287123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.861 qpair failed and we were unable to recover it. 00:29:30.861 [2024-10-11 12:06:33.287401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.861 [2024-10-11 12:06:33.287433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.861 qpair failed and we were unable to recover it. 00:29:30.861 [2024-10-11 12:06:33.287659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.861 [2024-10-11 12:06:33.287696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.861 qpair failed and we were unable to recover it. 
00:29:30.861 [2024-10-11 12:06:33.288043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.861 [2024-10-11 12:06:33.288111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.861 qpair failed and we were unable to recover it. 00:29:30.861 [2024-10-11 12:06:33.288443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.861 [2024-10-11 12:06:33.288475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.861 qpair failed and we were unable to recover it. 00:29:30.861 [2024-10-11 12:06:33.288854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.861 [2024-10-11 12:06:33.288886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.861 qpair failed and we were unable to recover it. 00:29:30.861 [2024-10-11 12:06:33.289225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.861 [2024-10-11 12:06:33.289257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.861 qpair failed and we were unable to recover it. 00:29:30.861 [2024-10-11 12:06:33.289624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.861 [2024-10-11 12:06:33.289656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.861 qpair failed and we were unable to recover it. 00:29:30.861 [2024-10-11 12:06:33.290008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.861 [2024-10-11 12:06:33.290040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.861 qpair failed and we were unable to recover it. 00:29:30.861 [2024-10-11 12:06:33.290431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.861 [2024-10-11 12:06:33.290465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.861 qpair failed and we were unable to recover it. 00:29:30.861 [2024-10-11 12:06:33.290803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.861 [2024-10-11 12:06:33.290836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.861 qpair failed and we were unable to recover it. 00:29:30.861 [2024-10-11 12:06:33.291192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.861 [2024-10-11 12:06:33.291225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.861 qpair failed and we were unable to recover it. 00:29:30.861 [2024-10-11 12:06:33.291576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.861 [2024-10-11 12:06:33.291609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.861 qpair failed and we were unable to recover it. 
00:29:30.861 [2024-10-11 12:06:33.291970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.861 [2024-10-11 12:06:33.292003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.861 qpair failed and we were unable to recover it. 00:29:30.861 [2024-10-11 12:06:33.292372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.861 [2024-10-11 12:06:33.292404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.861 qpair failed and we were unable to recover it. 00:29:30.861 [2024-10-11 12:06:33.292765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.861 [2024-10-11 12:06:33.292797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.861 qpair failed and we were unable to recover it. 00:29:30.861 [2024-10-11 12:06:33.293133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.861 [2024-10-11 12:06:33.293167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.861 qpair failed and we were unable to recover it. 00:29:30.861 [2024-10-11 12:06:33.293429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.861 [2024-10-11 12:06:33.293459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.861 qpair failed and we were unable to recover it. 00:29:30.861 [2024-10-11 12:06:33.293833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.861 [2024-10-11 12:06:33.293866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.861 qpair failed and we were unable to recover it. 00:29:30.861 [2024-10-11 12:06:33.294095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.861 [2024-10-11 12:06:33.294128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.861 qpair failed and we were unable to recover it. 00:29:30.861 [2024-10-11 12:06:33.294494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.861 [2024-10-11 12:06:33.294526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.861 qpair failed and we were unable to recover it. 00:29:30.861 [2024-10-11 12:06:33.294891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.861 [2024-10-11 12:06:33.294925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.861 qpair failed and we were unable to recover it. 00:29:30.861 [2024-10-11 12:06:33.295273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.861 [2024-10-11 12:06:33.295307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.861 qpair failed and we were unable to recover it. 
00:29:30.861 [2024-10-11 12:06:33.295684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.861 [2024-10-11 12:06:33.295718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.861 qpair failed and we were unable to recover it. 00:29:30.862 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2116637 Killed "${NVMF_APP[@]}" "$@" 00:29:30.862 [2024-10-11 12:06:33.296087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.862 [2024-10-11 12:06:33.296120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.862 qpair failed and we were unable to recover it. 00:29:30.862 [2024-10-11 12:06:33.296456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.862 [2024-10-11 12:06:33.296488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.862 qpair failed and we were unable to recover it. 00:29:30.862 12:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:29:30.862 [2024-10-11 12:06:33.296835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.862 [2024-10-11 12:06:33.296869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.862 qpair failed and we were unable to recover it. 00:29:30.862 12:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:30.862 [2024-10-11 12:06:33.297221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.862 [2024-10-11 12:06:33.297254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.862 qpair failed and we were unable to recover it. 00:29:30.862 12:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:30.862 [2024-10-11 12:06:33.297598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.862 [2024-10-11 12:06:33.297631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.862 qpair failed and we were unable to recover it. 00:29:30.862 12:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:30.862 12:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:30.862 [2024-10-11 12:06:33.297984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.862 [2024-10-11 12:06:33.298015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.862 qpair failed and we were unable to recover it. 
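Note on the repeated failures above: errno 111 on Linux is ECONNREFUSED, i.e. each TCP connect() from the host to 10.0.0.2 port 4420 is being actively refused because the target application was just killed (the "Killed" line from target_disconnect.sh) and has not yet been brought back up, so the host keeps retrying and logging the same qpair failure. A minimal standalone sketch of that retry-until-accepted pattern, assuming a bash build with /dev/tcp support; this is illustrative only and not part of the test scripts:

  addr=10.0.0.2
  port=4420
  # Each attempt fails with "Connection refused" while nothing listens on
  # addr:port; the loop exits as soon as a listener accepts the connection.
  until (exec 3<>"/dev/tcp/${addr}/${port}") 2>/dev/null; do
      echo "connect() to ${addr}:${port} refused (errno 111), retrying in 1s"
      sleep 1
  done
  echo "target is accepting connections on ${addr}:${port}"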
00:29:30.862 [2024-10-11 12:06:33.298416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.862 [2024-10-11 12:06:33.298449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.862 qpair failed and we were unable to recover it. 00:29:30.862 [2024-10-11 12:06:33.298806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.862 [2024-10-11 12:06:33.298840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.862 qpair failed and we were unable to recover it. 00:29:30.862 [2024-10-11 12:06:33.299171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.862 [2024-10-11 12:06:33.299205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.862 qpair failed and we were unable to recover it. 00:29:30.862 [2024-10-11 12:06:33.299551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.862 [2024-10-11 12:06:33.299583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.862 qpair failed and we were unable to recover it. 00:29:30.862 [2024-10-11 12:06:33.300012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.862 [2024-10-11 12:06:33.300044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.862 qpair failed and we were unable to recover it. 00:29:30.862 [2024-10-11 12:06:33.300411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.862 [2024-10-11 12:06:33.300444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.862 qpair failed and we were unable to recover it. 00:29:30.862 [2024-10-11 12:06:33.300806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.862 [2024-10-11 12:06:33.300840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.862 qpair failed and we were unable to recover it. 00:29:30.862 [2024-10-11 12:06:33.301184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.862 [2024-10-11 12:06:33.301218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.862 qpair failed and we were unable to recover it. 00:29:30.862 [2024-10-11 12:06:33.301636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.862 [2024-10-11 12:06:33.301668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.862 qpair failed and we were unable to recover it. 00:29:30.862 [2024-10-11 12:06:33.301898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.862 [2024-10-11 12:06:33.301932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.862 qpair failed and we were unable to recover it. 
00:29:30.862 [2024-10-11 12:06:33.302165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.862 [2024-10-11 12:06:33.302199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.862 qpair failed and we were unable to recover it. 00:29:30.862 [2024-10-11 12:06:33.302600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.862 [2024-10-11 12:06:33.302633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.862 qpair failed and we were unable to recover it. 00:29:30.862 [2024-10-11 12:06:33.303032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.862 [2024-10-11 12:06:33.303080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.862 qpair failed and we were unable to recover it. 00:29:30.862 [2024-10-11 12:06:33.303357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.862 [2024-10-11 12:06:33.303392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.862 qpair failed and we were unable to recover it. 00:29:30.862 [2024-10-11 12:06:33.303761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.862 [2024-10-11 12:06:33.303795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.862 qpair failed and we were unable to recover it. 00:29:30.862 [2024-10-11 12:06:33.304122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.862 [2024-10-11 12:06:33.304156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.862 qpair failed and we were unable to recover it. 00:29:30.862 [2024-10-11 12:06:33.304496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.862 [2024-10-11 12:06:33.304531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.862 qpair failed and we were unable to recover it. 00:29:30.862 [2024-10-11 12:06:33.304882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.862 [2024-10-11 12:06:33.304914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.862 qpair failed and we were unable to recover it. 00:29:30.862 [2024-10-11 12:06:33.305244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.862 [2024-10-11 12:06:33.305279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.862 qpair failed and we were unable to recover it. 00:29:30.862 [2024-10-11 12:06:33.305532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.862 [2024-10-11 12:06:33.305565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.862 qpair failed and we were unable to recover it. 
00:29:30.862 [2024-10-11 12:06:33.305921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.862 [2024-10-11 12:06:33.305954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.862 qpair failed and we were unable to recover it. 00:29:30.862 12:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=2117482 00:29:30.862 [2024-10-11 12:06:33.306211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.862 [2024-10-11 12:06:33.306244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.862 qpair failed and we were unable to recover it. 00:29:30.862 12:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 2117482 00:29:30.862 [2024-10-11 12:06:33.306630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.862 [2024-10-11 12:06:33.306664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.862 qpair failed and we were unable to recover it. 00:29:30.862 12:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2117482 ']' 00:29:30.862 12:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:30.862 [2024-10-11 12:06:33.307013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.862 [2024-10-11 12:06:33.307049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.862 qpair failed and we were unable to recover it. 00:29:30.862 12:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:30.862 12:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:30.862 [2024-10-11 12:06:33.307471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.862 [2024-10-11 12:06:33.307504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.862 qpair failed and we were unable to recover it. 00:29:30.862 12:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:30.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:30.862 [2024-10-11 12:06:33.307733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.862 [2024-10-11 12:06:33.307767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.862 qpair failed and we were unable to recover it. 
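The shell trace interleaved above shows the recovery path: a fresh nvmf_tgt (PID 2117482) is started inside the cvl_0_0_ns_spdk network namespace with core mask 0xF0 and log flags 0xFFFF, and waitforlisten blocks until that process is up and listening on the UNIX domain RPC socket /var/tmp/spdk.sock, after which the test can reconfigure the target so the connect() retries against 10.0.0.2:4420 stop being refused. A rough sketch of such a wait helper, with the function name, timeout, and poll interval chosen for illustration rather than taken from the autotest common scripts:

  wait_for_rpc_listen() {
      local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
      for i in $(seq 1 120); do
          # Give up early if the target process died instead of starting up.
          kill -0 "$pid" 2>/dev/null || { echo "process $pid exited" >&2; return 1; }
          # The RPC server creates the UNIX domain socket once it is listening.
          [ -S "$sock" ] && return 0
          sleep 0.5
      done
      echo "timed out waiting for $sock" >&2
      return 1
  }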
00:29:30.862 12:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:30.862 12:06:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:30.862 [2024-10-11 12:06:33.308159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.862 [2024-10-11 12:06:33.308195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.862 qpair failed and we were unable to recover it. 00:29:30.862 [2024-10-11 12:06:33.308561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.862 [2024-10-11 12:06:33.308595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.863 qpair failed and we were unable to recover it. 00:29:30.863 [2024-10-11 12:06:33.308987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.863 [2024-10-11 12:06:33.309023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.863 qpair failed and we were unable to recover it. 00:29:30.863 [2024-10-11 12:06:33.309210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.863 [2024-10-11 12:06:33.309245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.863 qpair failed and we were unable to recover it. 00:29:30.863 [2024-10-11 12:06:33.309623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.863 [2024-10-11 12:06:33.309656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.863 qpair failed and we were unable to recover it. 00:29:30.863 [2024-10-11 12:06:33.310019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.863 [2024-10-11 12:06:33.310053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.863 qpair failed and we were unable to recover it. 00:29:30.863 [2024-10-11 12:06:33.310520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.863 [2024-10-11 12:06:33.310559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.863 qpair failed and we were unable to recover it. 00:29:30.863 [2024-10-11 12:06:33.310933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.863 [2024-10-11 12:06:33.310968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.863 qpair failed and we were unable to recover it. 00:29:30.863 [2024-10-11 12:06:33.311328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.863 [2024-10-11 12:06:33.311362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.863 qpair failed and we were unable to recover it. 
00:29:30.867 [2024-10-11 12:06:33.370453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.867 [2024-10-11 12:06:33.370483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420
00:29:30.867 qpair failed and we were unable to recover it.
00:29:30.867 [2024-10-11 12:06:33.370842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.867 [2024-10-11 12:06:33.370881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420
00:29:30.867 qpair failed and we were unable to recover it.
00:29:30.867 [2024-10-11 12:06:33.371242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.867 [2024-10-11 12:06:33.371276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420
00:29:30.867 qpair failed and we were unable to recover it.
00:29:30.867 [2024-10-11 12:06:33.371633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.867 [2024-10-11 12:06:33.371666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420
00:29:30.867 qpair failed and we were unable to recover it.
00:29:30.867 [2024-10-11 12:06:33.371892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.867 [2024-10-11 12:06:33.371924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 [2024-10-11 12:06:33.371920] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization...
00:29:30.867 qpair failed and we were unable to recover it.
00:29:30.867 [2024-10-11 12:06:33.372005] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:30.867 [2024-10-11 12:06:33.372341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.867 [2024-10-11 12:06:33.372375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420
00:29:30.867 qpair failed and we were unable to recover it.
00:29:30.867 [2024-10-11 12:06:33.372615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.867 [2024-10-11 12:06:33.372643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420
00:29:30.867 qpair failed and we were unable to recover it.
00:29:30.867 [2024-10-11 12:06:33.373034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.867 [2024-10-11 12:06:33.373078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420
00:29:30.867 qpair failed and we were unable to recover it.
00:29:30.867 [2024-10-11 12:06:33.373481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.867 [2024-10-11 12:06:33.373513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420
00:29:30.867 qpair failed and we were unable to recover it.
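The two interleaved lines above ("Starting SPDK v25.01-pre ... / DPDK 24.07.0 initialization..." and the "DPDK EAL parameters" list) show the nvmf target application coming back up while the host side keeps retrying the connection. As a hedged illustration of what that parameter list is, the sketch below hands the same EAL flags (copied verbatim from the log) to DPDK's rte_eal_init(); building argv by hand like this is only an assumption for illustration, since SPDK assembles these EAL arguments internally from its own application options.

    /* Hedged sketch: pass the EAL parameters seen in the log above directly to DPDK.
     * This is not SPDK's startup path; it only shows what the logged flags mean.
     * Requires DPDK headers and libraries (rte_eal.h). */
    #include <rte_eal.h>
    #include <stdio.h>

    int main(void)
    {
        char *eal_argv[] = {
            "nvmf",                            /* program name, as it appears in the log */
            "-c", "0xF0",                      /* core mask: cores 4-7 */
            "--no-telemetry",
            "--log-level=lib.eal:6",
            "--log-level=lib.cryptodev:5",
            "--log-level=lib.power:5",
            "--log-level=user1:6",
            "--base-virtaddr=0x200000000000",
            "--match-allocations",
            "--file-prefix=spdk0",
            "--proc-type=auto",
        };
        int eal_argc = (int)(sizeof(eal_argv) / sizeof(eal_argv[0]));

        /* rte_eal_init() parses the EAL arguments and returns the number consumed,
         * or a negative value on failure. */
        if (rte_eal_init(eal_argc, eal_argv) < 0) {
            fprintf(stderr, "EAL initialization failed\n");
            return 1;
        }

        rte_eal_cleanup();
        return 0;
    }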
00:29:30.868 [2024-10-11 12:06:33.385721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.868 [2024-10-11 12:06:33.385751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.868 qpair failed and we were unable to recover it. 00:29:30.868 [2024-10-11 12:06:33.385992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.868 [2024-10-11 12:06:33.386022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.868 qpair failed and we were unable to recover it. 00:29:30.868 [2024-10-11 12:06:33.386331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.868 [2024-10-11 12:06:33.386363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.868 qpair failed and we were unable to recover it. 00:29:30.868 [2024-10-11 12:06:33.386628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.868 [2024-10-11 12:06:33.386657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.868 qpair failed and we were unable to recover it. 00:29:30.868 [2024-10-11 12:06:33.387027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.868 [2024-10-11 12:06:33.387057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.868 qpair failed and we were unable to recover it. 00:29:30.868 [2024-10-11 12:06:33.387308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.868 [2024-10-11 12:06:33.387338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.868 qpair failed and we were unable to recover it. 00:29:30.868 [2024-10-11 12:06:33.387737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.868 [2024-10-11 12:06:33.387768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.868 qpair failed and we were unable to recover it. 00:29:30.868 [2024-10-11 12:06:33.388159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.868 [2024-10-11 12:06:33.388191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.868 qpair failed and we were unable to recover it. 00:29:30.868 [2024-10-11 12:06:33.388487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.868 [2024-10-11 12:06:33.388518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.868 qpair failed and we were unable to recover it. 00:29:30.868 [2024-10-11 12:06:33.388920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.868 [2024-10-11 12:06:33.388950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.868 qpair failed and we were unable to recover it. 
00:29:30.868 [2024-10-11 12:06:33.389294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.868 [2024-10-11 12:06:33.389325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.868 qpair failed and we were unable to recover it. 00:29:30.868 [2024-10-11 12:06:33.389567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.868 [2024-10-11 12:06:33.389601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.868 qpair failed and we were unable to recover it. 00:29:30.868 [2024-10-11 12:06:33.389881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.868 [2024-10-11 12:06:33.389912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.868 qpair failed and we were unable to recover it. 00:29:30.868 [2024-10-11 12:06:33.390291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.868 [2024-10-11 12:06:33.390322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.868 qpair failed and we were unable to recover it. 00:29:30.868 [2024-10-11 12:06:33.390453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.868 [2024-10-11 12:06:33.390484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.868 qpair failed and we were unable to recover it. 00:29:30.868 [2024-10-11 12:06:33.390865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.868 [2024-10-11 12:06:33.390909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.868 qpair failed and we were unable to recover it. 00:29:30.868 [2024-10-11 12:06:33.391272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.868 [2024-10-11 12:06:33.391303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.868 qpair failed and we were unable to recover it. 00:29:30.868 [2024-10-11 12:06:33.391712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.868 [2024-10-11 12:06:33.391742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.868 qpair failed and we were unable to recover it. 00:29:30.868 [2024-10-11 12:06:33.392112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.868 [2024-10-11 12:06:33.392144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.868 qpair failed and we were unable to recover it. 00:29:30.868 [2024-10-11 12:06:33.392529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.868 [2024-10-11 12:06:33.392558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.868 qpair failed and we were unable to recover it. 
00:29:30.868 [2024-10-11 12:06:33.392939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.868 [2024-10-11 12:06:33.392970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.868 qpair failed and we were unable to recover it. 00:29:30.868 [2024-10-11 12:06:33.393325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.868 [2024-10-11 12:06:33.393358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.868 qpair failed and we were unable to recover it. 00:29:30.868 [2024-10-11 12:06:33.393774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.868 [2024-10-11 12:06:33.393805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.868 qpair failed and we were unable to recover it. 00:29:30.868 [2024-10-11 12:06:33.394171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.868 [2024-10-11 12:06:33.394201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.868 qpair failed and we were unable to recover it. 00:29:30.868 [2024-10-11 12:06:33.394560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.868 [2024-10-11 12:06:33.394589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.868 qpair failed and we were unable to recover it. 00:29:30.868 [2024-10-11 12:06:33.394971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.868 [2024-10-11 12:06:33.395000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.869 qpair failed and we were unable to recover it. 00:29:30.869 [2024-10-11 12:06:33.395386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.869 [2024-10-11 12:06:33.395421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.869 qpair failed and we were unable to recover it. 00:29:30.869 [2024-10-11 12:06:33.395769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.869 [2024-10-11 12:06:33.395799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.869 qpair failed and we were unable to recover it. 00:29:30.869 [2024-10-11 12:06:33.396159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.869 [2024-10-11 12:06:33.396191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.869 qpair failed and we were unable to recover it. 00:29:30.869 [2024-10-11 12:06:33.396423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.869 [2024-10-11 12:06:33.396452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.869 qpair failed and we were unable to recover it. 
00:29:30.869 [2024-10-11 12:06:33.396814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.869 [2024-10-11 12:06:33.396844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.869 qpair failed and we were unable to recover it. 00:29:30.869 [2024-10-11 12:06:33.397189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.869 [2024-10-11 12:06:33.397219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.869 qpair failed and we were unable to recover it. 00:29:30.869 [2024-10-11 12:06:33.397603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.869 [2024-10-11 12:06:33.397634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.869 qpair failed and we were unable to recover it. 00:29:30.869 [2024-10-11 12:06:33.398002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.869 [2024-10-11 12:06:33.398031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.869 qpair failed and we were unable to recover it. 00:29:30.869 [2024-10-11 12:06:33.398219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.869 [2024-10-11 12:06:33.398250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.869 qpair failed and we were unable to recover it. 00:29:30.869 [2024-10-11 12:06:33.398486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.869 [2024-10-11 12:06:33.398518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.869 qpair failed and we were unable to recover it. 00:29:30.869 [2024-10-11 12:06:33.398787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.869 [2024-10-11 12:06:33.398818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.869 qpair failed and we were unable to recover it. 00:29:30.869 [2024-10-11 12:06:33.399041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.869 [2024-10-11 12:06:33.399085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.869 qpair failed and we were unable to recover it. 00:29:30.869 [2024-10-11 12:06:33.399328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.869 [2024-10-11 12:06:33.399360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.869 qpair failed and we were unable to recover it. 00:29:30.869 [2024-10-11 12:06:33.399712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.869 [2024-10-11 12:06:33.399742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.869 qpair failed and we were unable to recover it. 
00:29:30.869 [2024-10-11 12:06:33.400058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.869 [2024-10-11 12:06:33.400113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.869 qpair failed and we were unable to recover it. 00:29:30.869 [2024-10-11 12:06:33.400238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.869 [2024-10-11 12:06:33.400266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.869 qpair failed and we were unable to recover it. 00:29:30.869 [2024-10-11 12:06:33.400664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.869 [2024-10-11 12:06:33.400694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.869 qpair failed and we were unable to recover it. 00:29:30.869 [2024-10-11 12:06:33.400947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.869 [2024-10-11 12:06:33.400976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.869 qpair failed and we were unable to recover it. 00:29:30.869 [2024-10-11 12:06:33.401374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.869 [2024-10-11 12:06:33.401405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.869 qpair failed and we were unable to recover it. 00:29:30.869 [2024-10-11 12:06:33.401752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.869 [2024-10-11 12:06:33.401783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.869 qpair failed and we were unable to recover it. 00:29:30.869 [2024-10-11 12:06:33.402186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.869 [2024-10-11 12:06:33.402217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.869 qpair failed and we were unable to recover it. 00:29:30.869 [2024-10-11 12:06:33.402584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.869 [2024-10-11 12:06:33.402613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.869 qpair failed and we were unable to recover it. 00:29:30.869 [2024-10-11 12:06:33.402867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.869 [2024-10-11 12:06:33.402898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.869 qpair failed and we were unable to recover it. 00:29:30.869 [2024-10-11 12:06:33.403269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.869 [2024-10-11 12:06:33.403300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.869 qpair failed and we were unable to recover it. 
00:29:30.869 [2024-10-11 12:06:33.403669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.869 [2024-10-11 12:06:33.403699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.869 qpair failed and we were unable to recover it. 00:29:30.869 [2024-10-11 12:06:33.404092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.869 [2024-10-11 12:06:33.404124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.869 qpair failed and we were unable to recover it. 00:29:30.869 [2024-10-11 12:06:33.404499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.869 [2024-10-11 12:06:33.404530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.869 qpair failed and we were unable to recover it. 00:29:30.869 [2024-10-11 12:06:33.404909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.869 [2024-10-11 12:06:33.404941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.869 qpair failed and we were unable to recover it. 00:29:30.869 [2024-10-11 12:06:33.405311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.869 [2024-10-11 12:06:33.405343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.869 qpair failed and we were unable to recover it. 00:29:30.869 [2024-10-11 12:06:33.405714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.869 [2024-10-11 12:06:33.405743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.869 qpair failed and we were unable to recover it. 00:29:30.869 [2024-10-11 12:06:33.406000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.869 [2024-10-11 12:06:33.406039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.869 qpair failed and we were unable to recover it. 00:29:30.869 [2024-10-11 12:06:33.406329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.869 [2024-10-11 12:06:33.406361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.869 qpair failed and we were unable to recover it. 00:29:30.869 [2024-10-11 12:06:33.406732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.869 [2024-10-11 12:06:33.406765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.869 qpair failed and we were unable to recover it. 00:29:30.869 [2024-10-11 12:06:33.407145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.869 [2024-10-11 12:06:33.407177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.869 qpair failed and we were unable to recover it. 
00:29:30.869 [2024-10-11 12:06:33.407573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.869 [2024-10-11 12:06:33.407602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.869 qpair failed and we were unable to recover it. 00:29:30.869 [2024-10-11 12:06:33.407877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.869 [2024-10-11 12:06:33.407906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.869 qpair failed and we were unable to recover it. 00:29:30.869 [2024-10-11 12:06:33.408287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.869 [2024-10-11 12:06:33.408317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.869 qpair failed and we were unable to recover it. 00:29:30.869 [2024-10-11 12:06:33.408654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.869 [2024-10-11 12:06:33.408684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.869 qpair failed and we were unable to recover it. 00:29:30.869 [2024-10-11 12:06:33.409073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.869 [2024-10-11 12:06:33.409105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.869 qpair failed and we were unable to recover it. 00:29:30.869 [2024-10-11 12:06:33.409495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.869 [2024-10-11 12:06:33.409524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.869 qpair failed and we were unable to recover it. 00:29:30.869 [2024-10-11 12:06:33.409899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.869 [2024-10-11 12:06:33.409930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.870 qpair failed and we were unable to recover it. 00:29:30.870 [2024-10-11 12:06:33.410342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.870 [2024-10-11 12:06:33.410376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.870 qpair failed and we were unable to recover it. 00:29:30.870 [2024-10-11 12:06:33.410743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.870 [2024-10-11 12:06:33.410774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.870 qpair failed and we were unable to recover it. 00:29:30.870 [2024-10-11 12:06:33.411146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.870 [2024-10-11 12:06:33.411178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.870 qpair failed and we were unable to recover it. 
00:29:30.870 [2024-10-11 12:06:33.411574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.870 [2024-10-11 12:06:33.411605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.870 qpair failed and we were unable to recover it. 00:29:30.870 [2024-10-11 12:06:33.411988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.870 [2024-10-11 12:06:33.412018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.870 qpair failed and we were unable to recover it. 00:29:30.870 [2024-10-11 12:06:33.412481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.870 [2024-10-11 12:06:33.412512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.870 qpair failed and we were unable to recover it. 00:29:30.870 [2024-10-11 12:06:33.412882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.870 [2024-10-11 12:06:33.412911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.870 qpair failed and we were unable to recover it. 00:29:30.870 [2024-10-11 12:06:33.413286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.870 [2024-10-11 12:06:33.413318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.870 qpair failed and we were unable to recover it. 00:29:30.870 [2024-10-11 12:06:33.413725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.870 [2024-10-11 12:06:33.413756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.870 qpair failed and we were unable to recover it. 00:29:30.870 [2024-10-11 12:06:33.414130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.870 [2024-10-11 12:06:33.414163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.870 qpair failed and we were unable to recover it. 00:29:30.870 [2024-10-11 12:06:33.414546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.870 [2024-10-11 12:06:33.414576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.870 qpair failed and we were unable to recover it. 00:29:30.870 [2024-10-11 12:06:33.414842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.870 [2024-10-11 12:06:33.414871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.870 qpair failed and we were unable to recover it. 00:29:30.870 [2024-10-11 12:06:33.415288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.870 [2024-10-11 12:06:33.415320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.870 qpair failed and we were unable to recover it. 
00:29:30.870 [2024-10-11 12:06:33.415662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.870 [2024-10-11 12:06:33.415694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.870 qpair failed and we were unable to recover it. 00:29:30.870 [2024-10-11 12:06:33.416060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.870 [2024-10-11 12:06:33.416117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.870 qpair failed and we were unable to recover it. 00:29:30.870 [2024-10-11 12:06:33.416499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.870 [2024-10-11 12:06:33.416528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.870 qpair failed and we were unable to recover it. 00:29:30.870 [2024-10-11 12:06:33.416896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.870 [2024-10-11 12:06:33.416927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.870 qpair failed and we were unable to recover it. 00:29:30.870 [2024-10-11 12:06:33.417340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.870 [2024-10-11 12:06:33.417370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.870 qpair failed and we were unable to recover it. 00:29:30.870 [2024-10-11 12:06:33.417737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.870 [2024-10-11 12:06:33.417768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.870 qpair failed and we were unable to recover it. 00:29:30.870 [2024-10-11 12:06:33.418172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.870 [2024-10-11 12:06:33.418203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.870 qpair failed and we were unable to recover it. 00:29:30.870 [2024-10-11 12:06:33.418552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.870 [2024-10-11 12:06:33.418582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.870 qpair failed and we were unable to recover it. 00:29:30.870 [2024-10-11 12:06:33.418935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.870 [2024-10-11 12:06:33.418966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.870 qpair failed and we were unable to recover it. 00:29:30.870 [2024-10-11 12:06:33.419313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.870 [2024-10-11 12:06:33.419344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.870 qpair failed and we were unable to recover it. 
00:29:30.870 [2024-10-11 12:06:33.419709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.870 [2024-10-11 12:06:33.419739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.870 qpair failed and we were unable to recover it. 00:29:30.870 [2024-10-11 12:06:33.420119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.870 [2024-10-11 12:06:33.420150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.870 qpair failed and we were unable to recover it. 00:29:30.870 [2024-10-11 12:06:33.420406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.870 [2024-10-11 12:06:33.420436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.870 qpair failed and we were unable to recover it. 00:29:30.870 [2024-10-11 12:06:33.420806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.870 [2024-10-11 12:06:33.420836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.870 qpair failed and we were unable to recover it. 00:29:30.870 [2024-10-11 12:06:33.421214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.870 [2024-10-11 12:06:33.421245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.870 qpair failed and we were unable to recover it. 00:29:30.870 [2024-10-11 12:06:33.421423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.870 [2024-10-11 12:06:33.421453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.870 qpair failed and we were unable to recover it. 00:29:30.870 [2024-10-11 12:06:33.421841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.870 [2024-10-11 12:06:33.421870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.870 qpair failed and we were unable to recover it. 00:29:30.870 [2024-10-11 12:06:33.422109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.870 [2024-10-11 12:06:33.422141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.870 qpair failed and we were unable to recover it. 00:29:30.870 [2024-10-11 12:06:33.422507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.870 [2024-10-11 12:06:33.422538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.870 qpair failed and we were unable to recover it. 00:29:30.870 [2024-10-11 12:06:33.422944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.870 [2024-10-11 12:06:33.422974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.870 qpair failed and we were unable to recover it. 
00:29:30.870 [2024-10-11 12:06:33.423319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.870 [2024-10-11 12:06:33.423349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.870 qpair failed and we were unable to recover it. 00:29:30.870 [2024-10-11 12:06:33.423738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.870 [2024-10-11 12:06:33.423768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.870 qpair failed and we were unable to recover it. 00:29:30.870 [2024-10-11 12:06:33.424149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.870 [2024-10-11 12:06:33.424184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.870 qpair failed and we were unable to recover it. 00:29:30.870 [2024-10-11 12:06:33.424588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.870 [2024-10-11 12:06:33.424619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.870 qpair failed and we were unable to recover it. 00:29:30.870 [2024-10-11 12:06:33.424843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.870 [2024-10-11 12:06:33.424872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.870 qpair failed and we were unable to recover it. 00:29:30.870 [2024-10-11 12:06:33.425233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.870 [2024-10-11 12:06:33.425264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.870 qpair failed and we were unable to recover it. 00:29:30.870 [2024-10-11 12:06:33.425626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.870 [2024-10-11 12:06:33.425654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.870 qpair failed and we were unable to recover it. 00:29:30.870 [2024-10-11 12:06:33.425907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.871 [2024-10-11 12:06:33.425940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.871 qpair failed and we were unable to recover it. 00:29:30.871 [2024-10-11 12:06:33.426370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.871 [2024-10-11 12:06:33.426405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.871 qpair failed and we were unable to recover it. 00:29:30.871 [2024-10-11 12:06:33.426635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.871 [2024-10-11 12:06:33.426668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.871 qpair failed and we were unable to recover it. 
00:29:30.871 [2024-10-11 12:06:33.427083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.871 [2024-10-11 12:06:33.427116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.871 qpair failed and we were unable to recover it. 00:29:30.871 [2024-10-11 12:06:33.427378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.871 [2024-10-11 12:06:33.427413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.871 qpair failed and we were unable to recover it. 00:29:30.871 [2024-10-11 12:06:33.427775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.871 [2024-10-11 12:06:33.427807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.871 qpair failed and we were unable to recover it. 00:29:30.871 [2024-10-11 12:06:33.428040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.871 [2024-10-11 12:06:33.428085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.871 qpair failed and we were unable to recover it. 00:29:30.871 [2024-10-11 12:06:33.428421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.871 [2024-10-11 12:06:33.428453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.871 qpair failed and we were unable to recover it. 00:29:30.871 [2024-10-11 12:06:33.428819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.871 [2024-10-11 12:06:33.428852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.871 qpair failed and we were unable to recover it. 00:29:30.871 [2024-10-11 12:06:33.429218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.871 [2024-10-11 12:06:33.429252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.871 qpair failed and we were unable to recover it. 00:29:30.871 [2024-10-11 12:06:33.429604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.871 [2024-10-11 12:06:33.429636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.871 qpair failed and we were unable to recover it. 00:29:30.871 [2024-10-11 12:06:33.429999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.871 [2024-10-11 12:06:33.430031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.871 qpair failed and we were unable to recover it. 00:29:30.871 [2024-10-11 12:06:33.430423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.871 [2024-10-11 12:06:33.430457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.871 qpair failed and we were unable to recover it. 
00:29:30.871 [2024-10-11 12:06:33.430821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.871 [2024-10-11 12:06:33.430852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.871 qpair failed and we were unable to recover it. 00:29:30.871 [2024-10-11 12:06:33.431218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.871 [2024-10-11 12:06:33.431254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.871 qpair failed and we were unable to recover it. 00:29:30.871 [2024-10-11 12:06:33.431700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.871 [2024-10-11 12:06:33.431733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.871 qpair failed and we were unable to recover it. 00:29:30.871 [2024-10-11 12:06:33.432093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.871 [2024-10-11 12:06:33.432126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.871 qpair failed and we were unable to recover it. 00:29:30.871 [2024-10-11 12:06:33.432518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.871 [2024-10-11 12:06:33.432557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.871 qpair failed and we were unable to recover it. 00:29:30.871 [2024-10-11 12:06:33.432915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.871 [2024-10-11 12:06:33.432948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.871 qpair failed and we were unable to recover it. 00:29:30.871 [2024-10-11 12:06:33.433302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.871 [2024-10-11 12:06:33.433335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.871 qpair failed and we were unable to recover it. 00:29:30.871 [2024-10-11 12:06:33.433698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.871 [2024-10-11 12:06:33.433732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.871 qpair failed and we were unable to recover it. 00:29:30.871 [2024-10-11 12:06:33.434096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.871 [2024-10-11 12:06:33.434129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.871 qpair failed and we were unable to recover it. 00:29:30.871 [2024-10-11 12:06:33.434377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.871 [2024-10-11 12:06:33.434410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.871 qpair failed and we were unable to recover it. 
00:29:30.871 [2024-10-11 12:06:33.434770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.871 [2024-10-11 12:06:33.434802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.871 qpair failed and we were unable to recover it. 00:29:30.871 [2024-10-11 12:06:33.435174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.871 [2024-10-11 12:06:33.435208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.871 qpair failed and we were unable to recover it. 00:29:30.871 [2024-10-11 12:06:33.435536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.871 [2024-10-11 12:06:33.435566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.871 qpair failed and we were unable to recover it. 00:29:30.871 [2024-10-11 12:06:33.435816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.871 [2024-10-11 12:06:33.435848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.871 qpair failed and we were unable to recover it. 00:29:30.871 [2024-10-11 12:06:33.436210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.871 [2024-10-11 12:06:33.436243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.871 qpair failed and we were unable to recover it. 00:29:30.871 [2024-10-11 12:06:33.436610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.871 [2024-10-11 12:06:33.436642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.871 qpair failed and we were unable to recover it. 00:29:30.871 [2024-10-11 12:06:33.437010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.871 [2024-10-11 12:06:33.437042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.871 qpair failed and we were unable to recover it. 00:29:30.871 [2024-10-11 12:06:33.437407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.871 [2024-10-11 12:06:33.437440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.871 qpair failed and we were unable to recover it. 00:29:30.871 [2024-10-11 12:06:33.437727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.871 [2024-10-11 12:06:33.437761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.871 qpair failed and we were unable to recover it. 00:29:30.871 [2024-10-11 12:06:33.438117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.871 [2024-10-11 12:06:33.438152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.871 qpair failed and we were unable to recover it. 
00:29:30.871 [2024-10-11 12:06:33.438401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.871 [2024-10-11 12:06:33.438434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.871 qpair failed and we were unable to recover it. 00:29:30.871 [2024-10-11 12:06:33.438793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.871 [2024-10-11 12:06:33.438826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.871 qpair failed and we were unable to recover it. 00:29:30.871 [2024-10-11 12:06:33.439190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.871 [2024-10-11 12:06:33.439222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.871 qpair failed and we were unable to recover it. 00:29:30.871 [2024-10-11 12:06:33.439605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.871 [2024-10-11 12:06:33.439637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.871 qpair failed and we were unable to recover it. 00:29:30.871 [2024-10-11 12:06:33.439895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.871 [2024-10-11 12:06:33.439925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.871 qpair failed and we were unable to recover it. 00:29:30.871 [2024-10-11 12:06:33.440289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.871 [2024-10-11 12:06:33.440322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.871 qpair failed and we were unable to recover it. 00:29:30.871 [2024-10-11 12:06:33.440660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.871 [2024-10-11 12:06:33.440692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.871 qpair failed and we were unable to recover it. 00:29:30.871 [2024-10-11 12:06:33.441053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.871 [2024-10-11 12:06:33.441099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.872 qpair failed and we were unable to recover it. 00:29:30.872 [2024-10-11 12:06:33.441212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.872 [2024-10-11 12:06:33.441244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.872 qpair failed and we were unable to recover it. 00:29:30.872 [2024-10-11 12:06:33.441500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.872 [2024-10-11 12:06:33.441532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.872 qpair failed and we were unable to recover it. 
00:29:30.872 [2024-10-11 12:06:33.441926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.872 [2024-10-11 12:06:33.441959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.872 qpair failed and we were unable to recover it. 00:29:30.872 [2024-10-11 12:06:33.442335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.872 [2024-10-11 12:06:33.442367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.872 qpair failed and we were unable to recover it. 00:29:30.872 [2024-10-11 12:06:33.442729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.872 [2024-10-11 12:06:33.442763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.872 qpair failed and we were unable to recover it. 00:29:30.872 [2024-10-11 12:06:33.443129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.872 [2024-10-11 12:06:33.443163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.872 qpair failed and we were unable to recover it. 00:29:30.872 [2024-10-11 12:06:33.443525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.872 [2024-10-11 12:06:33.443558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.872 qpair failed and we were unable to recover it. 00:29:30.872 [2024-10-11 12:06:33.443918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.872 [2024-10-11 12:06:33.443949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.872 qpair failed and we were unable to recover it. 00:29:30.872 [2024-10-11 12:06:33.444303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.872 [2024-10-11 12:06:33.444336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.872 qpair failed and we were unable to recover it. 00:29:30.872 [2024-10-11 12:06:33.444704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.872 [2024-10-11 12:06:33.444739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.872 qpair failed and we were unable to recover it. 00:29:30.872 [2024-10-11 12:06:33.444993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.872 [2024-10-11 12:06:33.445028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.872 qpair failed and we were unable to recover it. 00:29:30.872 [2024-10-11 12:06:33.445437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.872 [2024-10-11 12:06:33.445470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.872 qpair failed and we were unable to recover it. 
00:29:30.872 [2024-10-11 12:06:33.445841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.872 [2024-10-11 12:06:33.445872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.872 qpair failed and we were unable to recover it. 00:29:30.872 [2024-10-11 12:06:33.446226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.872 [2024-10-11 12:06:33.446260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.872 qpair failed and we were unable to recover it. 00:29:30.872 [2024-10-11 12:06:33.446613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.872 [2024-10-11 12:06:33.446647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.872 qpair failed and we were unable to recover it. 00:29:30.872 [2024-10-11 12:06:33.446994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.872 [2024-10-11 12:06:33.447027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.872 qpair failed and we were unable to recover it. 00:29:30.872 [2024-10-11 12:06:33.447403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.872 [2024-10-11 12:06:33.447437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.872 qpair failed and we were unable to recover it. 00:29:30.872 [2024-10-11 12:06:33.447675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.872 [2024-10-11 12:06:33.447709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.872 qpair failed and we were unable to recover it. 00:29:30.872 [2024-10-11 12:06:33.447856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.872 [2024-10-11 12:06:33.447887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.872 qpair failed and we were unable to recover it. 00:29:30.872 [2024-10-11 12:06:33.448139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.872 [2024-10-11 12:06:33.448174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.872 qpair failed and we were unable to recover it. 00:29:30.872 [2024-10-11 12:06:33.448583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.872 [2024-10-11 12:06:33.448614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.872 qpair failed and we were unable to recover it. 00:29:30.872 [2024-10-11 12:06:33.448981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.872 [2024-10-11 12:06:33.449013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.872 qpair failed and we were unable to recover it. 
00:29:30.872 [2024-10-11 12:06:33.449386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.872 [2024-10-11 12:06:33.449421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.872 qpair failed and we were unable to recover it. 00:29:30.872 [2024-10-11 12:06:33.449702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:30.872 [2024-10-11 12:06:33.449792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.872 [2024-10-11 12:06:33.449822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.872 qpair failed and we were unable to recover it. 00:29:30.872 [2024-10-11 12:06:33.450195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.872 [2024-10-11 12:06:33.450229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.872 qpair failed and we were unable to recover it. 00:29:30.872 [2024-10-11 12:06:33.450610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.872 [2024-10-11 12:06:33.450642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.872 qpair failed and we were unable to recover it. 00:29:30.872 [2024-10-11 12:06:33.450996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.872 [2024-10-11 12:06:33.451030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.872 qpair failed and we were unable to recover it. 00:29:30.872 [2024-10-11 12:06:33.451411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.872 [2024-10-11 12:06:33.451444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.872 qpair failed and we were unable to recover it. 00:29:30.872 [2024-10-11 12:06:33.451715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.872 [2024-10-11 12:06:33.451746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.872 qpair failed and we were unable to recover it. 00:29:30.872 [2024-10-11 12:06:33.452124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.872 [2024-10-11 12:06:33.452157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.872 qpair failed and we were unable to recover it. 00:29:30.872 [2024-10-11 12:06:33.452517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.872 [2024-10-11 12:06:33.452554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.872 qpair failed and we were unable to recover it. 00:29:30.872 [2024-10-11 12:06:33.452924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.872 [2024-10-11 12:06:33.452957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.872 qpair failed and we were unable to recover it. 
00:29:30.872 [2024-10-11 12:06:33.453315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.872 [2024-10-11 12:06:33.453347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.872 qpair failed and we were unable to recover it. 00:29:30.872 [2024-10-11 12:06:33.453573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.872 [2024-10-11 12:06:33.453607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.872 qpair failed and we were unable to recover it. 00:29:30.872 [2024-10-11 12:06:33.453976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.872 [2024-10-11 12:06:33.454011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.872 qpair failed and we were unable to recover it. 00:29:30.872 [2024-10-11 12:06:33.454263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.873 [2024-10-11 12:06:33.454294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.873 qpair failed and we were unable to recover it. 00:29:30.873 [2024-10-11 12:06:33.454675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.873 [2024-10-11 12:06:33.454707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.873 qpair failed and we were unable to recover it. 00:29:30.873 [2024-10-11 12:06:33.455074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.873 [2024-10-11 12:06:33.455108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.873 qpair failed and we were unable to recover it. 00:29:30.873 [2024-10-11 12:06:33.455463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.873 [2024-10-11 12:06:33.455493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.873 qpair failed and we were unable to recover it. 00:29:30.873 [2024-10-11 12:06:33.455846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.873 [2024-10-11 12:06:33.455880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.873 qpair failed and we were unable to recover it. 00:29:30.873 [2024-10-11 12:06:33.456102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.873 [2024-10-11 12:06:33.456135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.873 qpair failed and we were unable to recover it. 00:29:30.873 [2024-10-11 12:06:33.456396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.873 [2024-10-11 12:06:33.456428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.873 qpair failed and we were unable to recover it. 
00:29:30.873 [2024-10-11 12:06:33.456876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.873 [2024-10-11 12:06:33.456908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.873 qpair failed and we were unable to recover it. 00:29:30.873 [2024-10-11 12:06:33.457159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.873 [2024-10-11 12:06:33.457192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.873 qpair failed and we were unable to recover it. 00:29:30.873 [2024-10-11 12:06:33.457586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.873 [2024-10-11 12:06:33.457619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.873 qpair failed and we were unable to recover it. 00:29:30.873 [2024-10-11 12:06:33.457982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.873 [2024-10-11 12:06:33.458014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.873 qpair failed and we were unable to recover it. 00:29:30.873 [2024-10-11 12:06:33.458385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.873 [2024-10-11 12:06:33.458419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.873 qpair failed and we were unable to recover it. 00:29:30.873 [2024-10-11 12:06:33.458770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.873 [2024-10-11 12:06:33.458802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.873 qpair failed and we were unable to recover it. 00:29:30.873 [2024-10-11 12:06:33.459166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.873 [2024-10-11 12:06:33.459200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.873 qpair failed and we were unable to recover it. 00:29:30.873 [2024-10-11 12:06:33.459557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.873 [2024-10-11 12:06:33.459590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.873 qpair failed and we were unable to recover it. 00:29:30.873 [2024-10-11 12:06:33.459966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.873 [2024-10-11 12:06:33.459999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.873 qpair failed and we were unable to recover it. 00:29:30.873 [2024-10-11 12:06:33.460451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.873 [2024-10-11 12:06:33.460484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.873 qpair failed and we were unable to recover it. 
00:29:30.873 [2024-10-11 12:06:33.460723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.873 [2024-10-11 12:06:33.460754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.873 qpair failed and we were unable to recover it. 00:29:30.873 [2024-10-11 12:06:33.461121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.873 [2024-10-11 12:06:33.461155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.873 qpair failed and we were unable to recover it. 00:29:30.873 [2024-10-11 12:06:33.461275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.873 [2024-10-11 12:06:33.461305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.873 qpair failed and we were unable to recover it. 00:29:30.873 [2024-10-11 12:06:33.461645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.873 [2024-10-11 12:06:33.461677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.873 qpair failed and we were unable to recover it. 00:29:30.873 [2024-10-11 12:06:33.462040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.873 [2024-10-11 12:06:33.462084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.873 qpair failed and we were unable to recover it. 00:29:30.873 [2024-10-11 12:06:33.462321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.873 [2024-10-11 12:06:33.462353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.873 qpair failed and we were unable to recover it. 00:29:30.873 [2024-10-11 12:06:33.462608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.873 [2024-10-11 12:06:33.462641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.873 qpair failed and we were unable to recover it. 00:29:30.873 [2024-10-11 12:06:33.463003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.873 [2024-10-11 12:06:33.463035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.873 qpair failed and we were unable to recover it. 00:29:30.873 [2024-10-11 12:06:33.463418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.873 [2024-10-11 12:06:33.463452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.873 qpair failed and we were unable to recover it. 00:29:30.873 [2024-10-11 12:06:33.463818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.873 [2024-10-11 12:06:33.463850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.873 qpair failed and we were unable to recover it. 
00:29:30.873 [2024-10-11 12:06:33.464094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.873 [2024-10-11 12:06:33.464127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.873 qpair failed and we were unable to recover it. 00:29:30.873 [2024-10-11 12:06:33.464402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.873 [2024-10-11 12:06:33.464433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.873 qpair failed and we were unable to recover it. 00:29:30.873 [2024-10-11 12:06:33.464832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.873 [2024-10-11 12:06:33.464865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.873 qpair failed and we were unable to recover it. 00:29:30.873 [2024-10-11 12:06:33.465230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.873 [2024-10-11 12:06:33.465264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.873 qpair failed and we were unable to recover it. 00:29:30.873 [2024-10-11 12:06:33.465643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.873 [2024-10-11 12:06:33.465673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.873 qpair failed and we were unable to recover it. 00:29:30.873 [2024-10-11 12:06:33.466039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.873 [2024-10-11 12:06:33.466085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.873 qpair failed and we were unable to recover it. 00:29:30.873 [2024-10-11 12:06:33.466455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.873 [2024-10-11 12:06:33.466486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.873 qpair failed and we were unable to recover it. 00:29:30.873 [2024-10-11 12:06:33.466890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.873 [2024-10-11 12:06:33.466922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.873 qpair failed and we were unable to recover it. 00:29:30.873 [2024-10-11 12:06:33.467315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.873 [2024-10-11 12:06:33.467349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.873 qpair failed and we were unable to recover it. 00:29:30.873 [2024-10-11 12:06:33.467733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.873 [2024-10-11 12:06:33.467780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.873 qpair failed and we were unable to recover it. 
00:29:30.873 [2024-10-11 12:06:33.468143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.873 [2024-10-11 12:06:33.468177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.873 qpair failed and we were unable to recover it. 00:29:30.873 [2024-10-11 12:06:33.468558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.873 [2024-10-11 12:06:33.468589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.873 qpair failed and we were unable to recover it. 00:29:30.873 [2024-10-11 12:06:33.468970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.873 [2024-10-11 12:06:33.469003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.873 qpair failed and we were unable to recover it. 00:29:30.873 [2024-10-11 12:06:33.469384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.873 [2024-10-11 12:06:33.469416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.873 qpair failed and we were unable to recover it. 00:29:30.873 [2024-10-11 12:06:33.469802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-10-11 12:06:33.469835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.874 qpair failed and we were unable to recover it. 00:29:30.874 [2024-10-11 12:06:33.470236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-10-11 12:06:33.470270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.874 qpair failed and we were unable to recover it. 00:29:30.874 [2024-10-11 12:06:33.470629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-10-11 12:06:33.470660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.874 qpair failed and we were unable to recover it. 00:29:30.874 [2024-10-11 12:06:33.470886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-10-11 12:06:33.470915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.874 qpair failed and we were unable to recover it. 00:29:30.874 [2024-10-11 12:06:33.471301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-10-11 12:06:33.471335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.874 qpair failed and we were unable to recover it. 00:29:30.874 [2024-10-11 12:06:33.471695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-10-11 12:06:33.471727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.874 qpair failed and we were unable to recover it. 
00:29:30.874 [2024-10-11 12:06:33.472094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-10-11 12:06:33.472127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.874 qpair failed and we were unable to recover it. 00:29:30.874 [2024-10-11 12:06:33.472494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-10-11 12:06:33.472527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.874 qpair failed and we were unable to recover it. 00:29:30.874 [2024-10-11 12:06:33.472888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-10-11 12:06:33.472921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.874 qpair failed and we were unable to recover it. 00:29:30.874 [2024-10-11 12:06:33.473309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-10-11 12:06:33.473342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.874 qpair failed and we were unable to recover it. 00:29:30.874 [2024-10-11 12:06:33.473579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-10-11 12:06:33.473611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.874 qpair failed and we were unable to recover it. 00:29:30.874 [2024-10-11 12:06:33.473970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-10-11 12:06:33.474003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.874 qpair failed and we were unable to recover it. 00:29:30.874 [2024-10-11 12:06:33.474367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-10-11 12:06:33.474402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.874 qpair failed and we were unable to recover it. 00:29:30.874 [2024-10-11 12:06:33.474795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-10-11 12:06:33.474827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.874 qpair failed and we were unable to recover it. 00:29:30.874 [2024-10-11 12:06:33.475199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-10-11 12:06:33.475231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.874 qpair failed and we were unable to recover it. 00:29:30.874 [2024-10-11 12:06:33.475611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-10-11 12:06:33.475645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.874 qpair failed and we were unable to recover it. 
00:29:30.874 [2024-10-11 12:06:33.475892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-10-11 12:06:33.475927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.874 qpair failed and we were unable to recover it. 00:29:30.874 [2024-10-11 12:06:33.476293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-10-11 12:06:33.476327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.874 qpair failed and we were unable to recover it. 00:29:30.874 [2024-10-11 12:06:33.476703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-10-11 12:06:33.476735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.874 qpair failed and we were unable to recover it. 00:29:30.874 [2024-10-11 12:06:33.477080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-10-11 12:06:33.477116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.874 qpair failed and we were unable to recover it. 00:29:30.874 [2024-10-11 12:06:33.477459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-10-11 12:06:33.477492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.874 qpair failed and we were unable to recover it. 00:29:30.874 [2024-10-11 12:06:33.477853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-10-11 12:06:33.477886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.874 qpair failed and we were unable to recover it. 00:29:30.874 [2024-10-11 12:06:33.478222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-10-11 12:06:33.478254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.874 qpair failed and we were unable to recover it. 00:29:30.874 [2024-10-11 12:06:33.478506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-10-11 12:06:33.478538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.874 qpair failed and we were unable to recover it. 00:29:30.874 [2024-10-11 12:06:33.478775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-10-11 12:06:33.478809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.874 qpair failed and we were unable to recover it. 00:29:30.874 [2024-10-11 12:06:33.479169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-10-11 12:06:33.479202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.874 qpair failed and we were unable to recover it. 
00:29:30.874 [2024-10-11 12:06:33.479575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-10-11 12:06:33.479606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.874 qpair failed and we were unable to recover it. 00:29:30.874 [2024-10-11 12:06:33.480042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-10-11 12:06:33.480089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.874 qpair failed and we were unable to recover it. 00:29:30.874 [2024-10-11 12:06:33.480456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-10-11 12:06:33.480488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.874 qpair failed and we were unable to recover it. 00:29:30.874 [2024-10-11 12:06:33.480856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-10-11 12:06:33.480889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.874 qpair failed and we were unable to recover it. 00:29:30.874 [2024-10-11 12:06:33.481235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-10-11 12:06:33.481269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.874 qpair failed and we were unable to recover it. 00:29:30.874 [2024-10-11 12:06:33.481498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-10-11 12:06:33.481531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.874 qpair failed and we were unable to recover it. 00:29:30.874 [2024-10-11 12:06:33.481898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-10-11 12:06:33.481931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.874 qpair failed and we were unable to recover it. 00:29:30.874 [2024-10-11 12:06:33.482319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-10-11 12:06:33.482351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.874 qpair failed and we were unable to recover it. 00:29:30.874 [2024-10-11 12:06:33.482764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-10-11 12:06:33.482797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.874 qpair failed and we were unable to recover it. 00:29:30.874 [2024-10-11 12:06:33.483044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-10-11 12:06:33.483088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.874 qpair failed and we were unable to recover it. 
00:29:30.874 [2024-10-11 12:06:33.483469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-10-11 12:06:33.483502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.874 qpair failed and we were unable to recover it. 00:29:30.874 [2024-10-11 12:06:33.483770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-10-11 12:06:33.483801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.874 qpair failed and we were unable to recover it. 00:29:30.874 [2024-10-11 12:06:33.484178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-10-11 12:06:33.484211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.874 qpair failed and we were unable to recover it. 00:29:30.874 [2024-10-11 12:06:33.484592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-10-11 12:06:33.484625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.874 qpair failed and we were unable to recover it. 00:29:30.874 [2024-10-11 12:06:33.484996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-10-11 12:06:33.485028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.874 qpair failed and we were unable to recover it. 00:29:30.875 [2024-10-11 12:06:33.485421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.875 [2024-10-11 12:06:33.485456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.875 qpair failed and we were unable to recover it. 00:29:30.875 [2024-10-11 12:06:33.485815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.875 [2024-10-11 12:06:33.485848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.875 qpair failed and we were unable to recover it. 00:29:30.875 [2024-10-11 12:06:33.486225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.875 [2024-10-11 12:06:33.486260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.875 qpair failed and we were unable to recover it. 00:29:30.875 [2024-10-11 12:06:33.486620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.875 [2024-10-11 12:06:33.486652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.875 qpair failed and we were unable to recover it. 00:29:30.875 [2024-10-11 12:06:33.487009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.875 [2024-10-11 12:06:33.487043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.875 qpair failed and we were unable to recover it. 
00:29:30.875 [2024-10-11 12:06:33.487475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.875 [2024-10-11 12:06:33.487510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.875 qpair failed and we were unable to recover it. 00:29:30.875 [2024-10-11 12:06:33.487868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.875 [2024-10-11 12:06:33.487901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.875 qpair failed and we were unable to recover it. 00:29:30.875 [2024-10-11 12:06:33.488294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.875 [2024-10-11 12:06:33.488328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.875 qpair failed and we were unable to recover it. 00:29:30.875 [2024-10-11 12:06:33.488785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.875 [2024-10-11 12:06:33.488818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.875 qpair failed and we were unable to recover it. 00:29:30.875 [2024-10-11 12:06:33.489076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.875 [2024-10-11 12:06:33.489108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.875 qpair failed and we were unable to recover it. 00:29:30.875 [2024-10-11 12:06:33.489479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.875 [2024-10-11 12:06:33.489513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.875 qpair failed and we were unable to recover it. 00:29:30.875 [2024-10-11 12:06:33.489909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.875 [2024-10-11 12:06:33.489943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.875 qpair failed and we were unable to recover it. 00:29:30.875 [2024-10-11 12:06:33.490315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.875 [2024-10-11 12:06:33.490349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.875 qpair failed and we were unable to recover it. 00:29:30.875 [2024-10-11 12:06:33.490737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.875 [2024-10-11 12:06:33.490771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.875 qpair failed and we were unable to recover it. 00:29:30.875 [2024-10-11 12:06:33.491128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.875 [2024-10-11 12:06:33.491160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.875 qpair failed and we were unable to recover it. 
00:29:30.875 [2024-10-11 12:06:33.491545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.875 [2024-10-11 12:06:33.491577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.875 qpair failed and we were unable to recover it. 00:29:30.875 [2024-10-11 12:06:33.491938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.875 [2024-10-11 12:06:33.491972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.875 qpair failed and we were unable to recover it. 00:29:30.875 [2024-10-11 12:06:33.492455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.875 [2024-10-11 12:06:33.492487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.875 qpair failed and we were unable to recover it. 00:29:30.875 [2024-10-11 12:06:33.492727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.875 [2024-10-11 12:06:33.492758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.875 qpair failed and we were unable to recover it. 00:29:30.875 [2024-10-11 12:06:33.493123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.875 [2024-10-11 12:06:33.493157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.875 qpair failed and we were unable to recover it. 00:29:30.875 [2024-10-11 12:06:33.493389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.875 [2024-10-11 12:06:33.493420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.875 qpair failed and we were unable to recover it. 00:29:30.875 [2024-10-11 12:06:33.493769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.875 [2024-10-11 12:06:33.493800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.875 qpair failed and we were unable to recover it. 00:29:30.875 [2024-10-11 12:06:33.494145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.875 [2024-10-11 12:06:33.494185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.875 qpair failed and we were unable to recover it. 00:29:30.875 [2024-10-11 12:06:33.494561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.875 [2024-10-11 12:06:33.494596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.875 qpair failed and we were unable to recover it. 00:29:30.875 [2024-10-11 12:06:33.494864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.875 [2024-10-11 12:06:33.494898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.875 qpair failed and we were unable to recover it. 
00:29:30.875 [2024-10-11 12:06:33.495164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.875 [2024-10-11 12:06:33.495197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.875 qpair failed and we were unable to recover it. 00:29:30.875 [2024-10-11 12:06:33.495568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.875 [2024-10-11 12:06:33.495600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.875 qpair failed and we were unable to recover it. 00:29:30.875 [2024-10-11 12:06:33.495947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.875 [2024-10-11 12:06:33.495977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.875 qpair failed and we were unable to recover it. 00:29:30.875 [2024-10-11 12:06:33.496209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.875 [2024-10-11 12:06:33.496241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.875 qpair failed and we were unable to recover it. 00:29:30.875 [2024-10-11 12:06:33.496460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.875 [2024-10-11 12:06:33.496490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.875 qpair failed and we were unable to recover it. 00:29:30.875 [2024-10-11 12:06:33.496766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.875 [2024-10-11 12:06:33.496800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.875 qpair failed and we were unable to recover it. 00:29:30.875 [2024-10-11 12:06:33.497153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.875 [2024-10-11 12:06:33.497185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.875 qpair failed and we were unable to recover it. 00:29:30.875 [2024-10-11 12:06:33.497555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.875 [2024-10-11 12:06:33.497589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.875 qpair failed and we were unable to recover it. 00:29:30.875 [2024-10-11 12:06:33.497814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.875 [2024-10-11 12:06:33.497846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.875 qpair failed and we were unable to recover it. 00:29:30.875 [2024-10-11 12:06:33.498089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.875 [2024-10-11 12:06:33.498122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.875 qpair failed and we were unable to recover it. 
00:29:30.875 [2024-10-11 12:06:33.498500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.875 [2024-10-11 12:06:33.498531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.875 qpair failed and we were unable to recover it. 00:29:30.875 [2024-10-11 12:06:33.498901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.875 [2024-10-11 12:06:33.498937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.875 qpair failed and we were unable to recover it. 00:29:30.875 [2024-10-11 12:06:33.499289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.875 [2024-10-11 12:06:33.499322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.875 qpair failed and we were unable to recover it. 00:29:30.875 [2024-10-11 12:06:33.499667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.875 [2024-10-11 12:06:33.499701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.875 qpair failed and we were unable to recover it. 00:29:30.875 [2024-10-11 12:06:33.500051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.875 [2024-10-11 12:06:33.500094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.875 qpair failed and we were unable to recover it. 00:29:30.875 [2024-10-11 12:06:33.500365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.876 [2024-10-11 12:06:33.500396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.876 qpair failed and we were unable to recover it. 00:29:30.876 [2024-10-11 12:06:33.500673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.876 [2024-10-11 12:06:33.500705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.876 qpair failed and we were unable to recover it. 00:29:30.876 [2024-10-11 12:06:33.500967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.876 [2024-10-11 12:06:33.501002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.876 qpair failed and we were unable to recover it. 00:29:30.876 [2024-10-11 12:06:33.501307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.876 [2024-10-11 12:06:33.501340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.876 qpair failed and we were unable to recover it. 00:29:30.876 [2024-10-11 12:06:33.501698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.876 [2024-10-11 12:06:33.501731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.876 qpair failed and we were unable to recover it. 
00:29:30.876 [2024-10-11 12:06:33.502097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.876 [2024-10-11 12:06:33.502130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.876 qpair failed and we were unable to recover it. 00:29:30.876 [2024-10-11 12:06:33.502490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.876 [2024-10-11 12:06:33.502522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.876 qpair failed and we were unable to recover it. 00:29:30.876 [2024-10-11 12:06:33.502856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.876 [2024-10-11 12:06:33.502888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.876 qpair failed and we were unable to recover it. 00:29:30.876 [2024-10-11 12:06:33.503223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.876 [2024-10-11 12:06:33.503256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.876 qpair failed and we were unable to recover it. 00:29:30.876 [2024-10-11 12:06:33.503493] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:30.876 [2024-10-11 12:06:33.503507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.876 [2024-10-11 12:06:33.503539] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:30.876 [2024-10-11 12:06:33.503551] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:30.876 [2024-10-11 12:06:33.503551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.876 [2024-10-11 12:06:33.503558] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:30.876 [2024-10-11 12:06:33.503568] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:30.876 qpair failed and we were unable to recover it. 00:29:30.876 [2024-10-11 12:06:33.503981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.876 [2024-10-11 12:06:33.504015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.876 qpair failed and we were unable to recover it. 00:29:30.876 [2024-10-11 12:06:33.504408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.876 [2024-10-11 12:06:33.504441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.876 qpair failed and we were unable to recover it. 00:29:30.876 [2024-10-11 12:06:33.504794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.876 [2024-10-11 12:06:33.504826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.876 qpair failed and we were unable to recover it. 
00:29:30.876 [2024-10-11 12:06:33.505082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.876 [2024-10-11 12:06:33.505116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.876 qpair failed and we were unable to recover it. 00:29:30.876 [2024-10-11 12:06:33.505499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.876 [2024-10-11 12:06:33.505531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.876 qpair failed and we were unable to recover it. 00:29:30.876 [2024-10-11 12:06:33.505667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:30.876 [2024-10-11 12:06:33.505785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.876 [2024-10-11 12:06:33.505820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.876 qpair failed and we were unable to recover it. 00:29:30.876 [2024-10-11 12:06:33.505826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:30.876 [2024-10-11 12:06:33.506039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:30.876 [2024-10-11 12:06:33.506040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:30.876 [2024-10-11 12:06:33.506298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.876 [2024-10-11 12:06:33.506329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.876 qpair failed and we were unable to recover it. 00:29:30.876 [2024-10-11 12:06:33.506693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.876 [2024-10-11 12:06:33.506724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.876 qpair failed and we were unable to recover it. 00:29:30.876 [2024-10-11 12:06:33.507096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.876 [2024-10-11 12:06:33.507129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.876 qpair failed and we were unable to recover it. 00:29:30.876 [2024-10-11 12:06:33.507493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.876 [2024-10-11 12:06:33.507526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.876 qpair failed and we were unable to recover it. 00:29:30.876 [2024-10-11 12:06:33.507888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.876 [2024-10-11 12:06:33.507921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.876 qpair failed and we were unable to recover it. 00:29:30.876 [2024-10-11 12:06:33.508283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.876 [2024-10-11 12:06:33.508316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.876 qpair failed and we were unable to recover it. 
00:29:30.876 [2024-10-11 12:06:33.508668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.876 [2024-10-11 12:06:33.508702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.876 qpair failed and we were unable to recover it. 00:29:30.876 [2024-10-11 12:06:33.509060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.876 [2024-10-11 12:06:33.509105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.876 qpair failed and we were unable to recover it. 00:29:30.876 [2024-10-11 12:06:33.509468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.876 [2024-10-11 12:06:33.509500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.876 qpair failed and we were unable to recover it. 00:29:30.876 [2024-10-11 12:06:33.509867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.876 [2024-10-11 12:06:33.509899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.876 qpair failed and we were unable to recover it. 00:29:30.876 [2024-10-11 12:06:33.510274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.876 [2024-10-11 12:06:33.510308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.876 qpair failed and we were unable to recover it. 00:29:30.876 [2024-10-11 12:06:33.510666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.876 [2024-10-11 12:06:33.510698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.876 qpair failed and we were unable to recover it. 00:29:30.876 [2024-10-11 12:06:33.511075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.876 [2024-10-11 12:06:33.511109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.876 qpair failed and we were unable to recover it. 00:29:30.876 [2024-10-11 12:06:33.511490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.876 [2024-10-11 12:06:33.511523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.876 qpair failed and we were unable to recover it. 00:29:30.876 [2024-10-11 12:06:33.511888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.876 [2024-10-11 12:06:33.511920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.876 qpair failed and we were unable to recover it. 00:29:30.876 [2024-10-11 12:06:33.512294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.876 [2024-10-11 12:06:33.512327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.876 qpair failed and we were unable to recover it. 
00:29:30.877 [2024-10-11 12:06:33.512615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.877 [2024-10-11 12:06:33.512646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.877 qpair failed and we were unable to recover it. 00:29:30.877 [2024-10-11 12:06:33.513001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.877 [2024-10-11 12:06:33.513039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.877 qpair failed and we were unable to recover it. 00:29:30.877 [2024-10-11 12:06:33.513414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.877 [2024-10-11 12:06:33.513447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.877 qpair failed and we were unable to recover it. 00:29:30.877 [2024-10-11 12:06:33.513807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.877 [2024-10-11 12:06:33.513839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.877 qpair failed and we were unable to recover it. 00:29:30.877 [2024-10-11 12:06:33.514115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.877 [2024-10-11 12:06:33.514148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.877 qpair failed and we were unable to recover it. 00:29:30.877 [2024-10-11 12:06:33.514529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.877 [2024-10-11 12:06:33.514562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.877 qpair failed and we were unable to recover it. 00:29:30.877 [2024-10-11 12:06:33.514937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.877 [2024-10-11 12:06:33.514970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.877 qpair failed and we were unable to recover it. 00:29:30.877 [2024-10-11 12:06:33.515314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.877 [2024-10-11 12:06:33.515347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.877 qpair failed and we were unable to recover it. 00:29:30.877 [2024-10-11 12:06:33.515701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.877 [2024-10-11 12:06:33.515733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.877 qpair failed and we were unable to recover it. 00:29:30.877 [2024-10-11 12:06:33.516087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.877 [2024-10-11 12:06:33.516120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.877 qpair failed and we were unable to recover it. 
00:29:30.877 [2024-10-11 12:06:33.516498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.877 [2024-10-11 12:06:33.516530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.877 qpair failed and we were unable to recover it. 00:29:30.877 [2024-10-11 12:06:33.516899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.877 [2024-10-11 12:06:33.516931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.877 qpair failed and we were unable to recover it. 00:29:30.877 [2024-10-11 12:06:33.517331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.877 [2024-10-11 12:06:33.517364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.877 qpair failed and we were unable to recover it. 00:29:30.877 [2024-10-11 12:06:33.517603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.877 [2024-10-11 12:06:33.517637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.877 qpair failed and we were unable to recover it. 00:29:30.877 [2024-10-11 12:06:33.517897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.877 [2024-10-11 12:06:33.517929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.877 qpair failed and we were unable to recover it. 00:29:30.877 [2024-10-11 12:06:33.518143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.877 [2024-10-11 12:06:33.518176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.877 qpair failed and we were unable to recover it. 00:29:30.877 [2024-10-11 12:06:33.518562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.877 [2024-10-11 12:06:33.518594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.877 qpair failed and we were unable to recover it. 00:29:30.877 [2024-10-11 12:06:33.518959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.877 [2024-10-11 12:06:33.518993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.877 qpair failed and we were unable to recover it. 00:29:30.877 [2024-10-11 12:06:33.519334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.877 [2024-10-11 12:06:33.519366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.877 qpair failed and we were unable to recover it. 00:29:30.877 [2024-10-11 12:06:33.519502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.877 [2024-10-11 12:06:33.519534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.877 qpair failed and we were unable to recover it. 
00:29:30.877 [2024-10-11 12:06:33.519787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.877 [2024-10-11 12:06:33.519819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.877 qpair failed and we were unable to recover it. 00:29:30.877 [2024-10-11 12:06:33.520033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.877 [2024-10-11 12:06:33.520075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.877 qpair failed and we were unable to recover it. 00:29:30.877 [2024-10-11 12:06:33.520458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.877 [2024-10-11 12:06:33.520490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.877 qpair failed and we were unable to recover it. 00:29:30.877 [2024-10-11 12:06:33.520713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.877 [2024-10-11 12:06:33.520744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.877 qpair failed and we were unable to recover it. 00:29:30.877 [2024-10-11 12:06:33.520987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.877 [2024-10-11 12:06:33.521019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.877 qpair failed and we were unable to recover it. 00:29:30.877 [2024-10-11 12:06:33.521143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.877 [2024-10-11 12:06:33.521174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.877 qpair failed and we were unable to recover it. 00:29:30.877 [2024-10-11 12:06:33.521405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.877 [2024-10-11 12:06:33.521438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.877 qpair failed and we were unable to recover it. 00:29:30.877 [2024-10-11 12:06:33.521791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.877 [2024-10-11 12:06:33.521823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.877 qpair failed and we were unable to recover it. 00:29:30.877 [2024-10-11 12:06:33.522189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.877 [2024-10-11 12:06:33.522222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.877 qpair failed and we were unable to recover it. 00:29:30.877 [2024-10-11 12:06:33.522605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.877 [2024-10-11 12:06:33.522638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.877 qpair failed and we were unable to recover it. 
00:29:30.877 [2024-10-11 12:06:33.522849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.877 [2024-10-11 12:06:33.522880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.877 qpair failed and we were unable to recover it. 00:29:30.877 [2024-10-11 12:06:33.523236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.877 [2024-10-11 12:06:33.523269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.877 qpair failed and we were unable to recover it. 00:29:30.877 [2024-10-11 12:06:33.523642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.877 [2024-10-11 12:06:33.523675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.877 qpair failed and we were unable to recover it. 00:29:30.877 [2024-10-11 12:06:33.523915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.877 [2024-10-11 12:06:33.523948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.877 qpair failed and we were unable to recover it. 00:29:30.877 [2024-10-11 12:06:33.524310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.877 [2024-10-11 12:06:33.524344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.877 qpair failed and we were unable to recover it. 00:29:30.877 [2024-10-11 12:06:33.524724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.877 [2024-10-11 12:06:33.524756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.877 qpair failed and we were unable to recover it. 00:29:30.877 [2024-10-11 12:06:33.525137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.877 [2024-10-11 12:06:33.525175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.877 qpair failed and we were unable to recover it. 00:29:30.877 [2024-10-11 12:06:33.525542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.877 [2024-10-11 12:06:33.525574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.877 qpair failed and we were unable to recover it. 00:29:30.877 [2024-10-11 12:06:33.525923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.877 [2024-10-11 12:06:33.525955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.877 qpair failed and we were unable to recover it. 00:29:30.877 [2024-10-11 12:06:33.526332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.877 [2024-10-11 12:06:33.526366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.877 qpair failed and we were unable to recover it. 
00:29:30.877 [2024-10-11 12:06:33.526721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.877 [2024-10-11 12:06:33.526753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.878 qpair failed and we were unable to recover it. 00:29:30.878 [2024-10-11 12:06:33.527110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.878 [2024-10-11 12:06:33.527143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.878 qpair failed and we were unable to recover it. 00:29:30.878 [2024-10-11 12:06:33.527497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.878 [2024-10-11 12:06:33.527535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.878 qpair failed and we were unable to recover it. 00:29:30.878 [2024-10-11 12:06:33.527891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.878 [2024-10-11 12:06:33.527924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.878 qpair failed and we were unable to recover it. 00:29:30.878 [2024-10-11 12:06:33.528299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.878 [2024-10-11 12:06:33.528335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.878 qpair failed and we were unable to recover it. 00:29:30.878 [2024-10-11 12:06:33.528696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.878 [2024-10-11 12:06:33.528728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.878 qpair failed and we were unable to recover it. 00:29:30.878 [2024-10-11 12:06:33.529092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.878 [2024-10-11 12:06:33.529126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.878 qpair failed and we were unable to recover it. 00:29:30.878 [2024-10-11 12:06:33.529482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.878 [2024-10-11 12:06:33.529515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.878 qpair failed and we were unable to recover it. 00:29:30.878 [2024-10-11 12:06:33.529860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.878 [2024-10-11 12:06:33.529893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.878 qpair failed and we were unable to recover it. 00:29:30.878 [2024-10-11 12:06:33.530244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.878 [2024-10-11 12:06:33.530278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.878 qpair failed and we were unable to recover it. 
00:29:30.878 [2024-10-11 12:06:33.530642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.878 [2024-10-11 12:06:33.530676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.878 qpair failed and we were unable to recover it. 00:29:30.878 [2024-10-11 12:06:33.531110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.878 [2024-10-11 12:06:33.531143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.878 qpair failed and we were unable to recover it. 00:29:30.878 [2024-10-11 12:06:33.531503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.878 [2024-10-11 12:06:33.531536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.878 qpair failed and we were unable to recover it. 00:29:30.878 [2024-10-11 12:06:33.532011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.878 [2024-10-11 12:06:33.532045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.878 qpair failed and we were unable to recover it. 00:29:30.878 [2024-10-11 12:06:33.532436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.878 [2024-10-11 12:06:33.532469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.878 qpair failed and we were unable to recover it. 00:29:30.878 [2024-10-11 12:06:33.532672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.878 [2024-10-11 12:06:33.532703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.878 qpair failed and we were unable to recover it. 00:29:30.878 [2024-10-11 12:06:33.533035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.878 [2024-10-11 12:06:33.533084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.878 qpair failed and we were unable to recover it. 00:29:30.878 [2024-10-11 12:06:33.533546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.878 [2024-10-11 12:06:33.533578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.878 qpair failed and we were unable to recover it. 00:29:30.878 [2024-10-11 12:06:33.534006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.878 [2024-10-11 12:06:33.534038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.878 qpair failed and we were unable to recover it. 00:29:30.878 [2024-10-11 12:06:33.534432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.878 [2024-10-11 12:06:33.534465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.878 qpair failed and we were unable to recover it. 
00:29:30.878 [2024-10-11 12:06:33.534870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.878 [2024-10-11 12:06:33.534903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.878 qpair failed and we were unable to recover it. 00:29:30.878 [2024-10-11 12:06:33.535134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.878 [2024-10-11 12:06:33.535167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.878 qpair failed and we were unable to recover it. 00:29:30.878 [2024-10-11 12:06:33.535409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.878 [2024-10-11 12:06:33.535443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.878 qpair failed and we were unable to recover it. 00:29:30.878 [2024-10-11 12:06:33.535706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.878 [2024-10-11 12:06:33.535741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.878 qpair failed and we were unable to recover it. 00:29:30.878 [2024-10-11 12:06:33.535995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.878 [2024-10-11 12:06:33.536027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.878 qpair failed and we were unable to recover it. 00:29:30.878 [2024-10-11 12:06:33.536426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.878 [2024-10-11 12:06:33.536459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.878 qpair failed and we were unable to recover it. 00:29:30.878 [2024-10-11 12:06:33.536639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.878 [2024-10-11 12:06:33.536671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.878 qpair failed and we were unable to recover it. 00:29:30.878 [2024-10-11 12:06:33.537017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.878 [2024-10-11 12:06:33.537049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.878 qpair failed and we were unable to recover it. 00:29:30.878 [2024-10-11 12:06:33.537428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.878 [2024-10-11 12:06:33.537461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.878 qpair failed and we were unable to recover it. 00:29:30.878 [2024-10-11 12:06:33.537716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.878 [2024-10-11 12:06:33.537747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.878 qpair failed and we were unable to recover it. 
00:29:30.878 [2024-10-11 12:06:33.538005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.878 [2024-10-11 12:06:33.538038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.878 qpair failed and we were unable to recover it. 00:29:30.878 [2024-10-11 12:06:33.538384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.878 [2024-10-11 12:06:33.538417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.878 qpair failed and we were unable to recover it. 00:29:30.878 [2024-10-11 12:06:33.538760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.878 [2024-10-11 12:06:33.538794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.878 qpair failed and we were unable to recover it. 00:29:30.878 [2024-10-11 12:06:33.539161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.878 [2024-10-11 12:06:33.539195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.878 qpair failed and we were unable to recover it. 00:29:30.878 [2024-10-11 12:06:33.539432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.878 [2024-10-11 12:06:33.539463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.878 qpair failed and we were unable to recover it. 00:29:30.878 [2024-10-11 12:06:33.539878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.878 [2024-10-11 12:06:33.539910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.878 qpair failed and we were unable to recover it. 00:29:30.878 [2024-10-11 12:06:33.540282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.878 [2024-10-11 12:06:33.540315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.878 qpair failed and we were unable to recover it. 00:29:30.878 [2024-10-11 12:06:33.540713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.878 [2024-10-11 12:06:33.540745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.878 qpair failed and we were unable to recover it. 00:29:30.878 [2024-10-11 12:06:33.541100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.878 [2024-10-11 12:06:33.541133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.878 qpair failed and we were unable to recover it. 00:29:30.878 [2024-10-11 12:06:33.541504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.878 [2024-10-11 12:06:33.541537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.878 qpair failed and we were unable to recover it. 
00:29:30.878 [2024-10-11 12:06:33.541654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.878 [2024-10-11 12:06:33.541684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.878 qpair failed and we were unable to recover it. 00:29:30.878 [2024-10-11 12:06:33.541782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.879 [2024-10-11 12:06:33.541811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.879 qpair failed and we were unable to recover it. 00:29:30.879 [2024-10-11 12:06:33.542027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.879 [2024-10-11 12:06:33.542058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.879 qpair failed and we were unable to recover it. 00:29:30.879 [2024-10-11 12:06:33.542440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.879 [2024-10-11 12:06:33.542473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.879 qpair failed and we were unable to recover it. 00:29:30.879 [2024-10-11 12:06:33.542800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.879 [2024-10-11 12:06:33.542833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.879 qpair failed and we were unable to recover it. 00:29:30.879 [2024-10-11 12:06:33.543195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.879 [2024-10-11 12:06:33.543228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.879 qpair failed and we were unable to recover it. 00:29:30.879 [2024-10-11 12:06:33.543591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.879 [2024-10-11 12:06:33.543623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.879 qpair failed and we were unable to recover it. 00:29:30.879 [2024-10-11 12:06:33.543979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.879 [2024-10-11 12:06:33.544009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.879 qpair failed and we were unable to recover it. 00:29:30.879 [2024-10-11 12:06:33.544372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.879 [2024-10-11 12:06:33.544406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.879 qpair failed and we were unable to recover it. 00:29:30.879 [2024-10-11 12:06:33.544804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.879 [2024-10-11 12:06:33.544836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.879 qpair failed and we were unable to recover it. 
00:29:30.879 [2024-10-11 12:06:33.545269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.879 [2024-10-11 12:06:33.545301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.879 qpair failed and we were unable to recover it. 00:29:30.879 [2024-10-11 12:06:33.545672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.879 [2024-10-11 12:06:33.545705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.879 qpair failed and we were unable to recover it. 00:29:30.879 [2024-10-11 12:06:33.546075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.879 [2024-10-11 12:06:33.546108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.879 qpair failed and we were unable to recover it. 00:29:30.879 [2024-10-11 12:06:33.546457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.879 [2024-10-11 12:06:33.546488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.879 qpair failed and we were unable to recover it. 00:29:30.879 [2024-10-11 12:06:33.546837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.879 [2024-10-11 12:06:33.546869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.879 qpair failed and we were unable to recover it. 00:29:30.879 [2024-10-11 12:06:33.547230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.879 [2024-10-11 12:06:33.547264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.879 qpair failed and we were unable to recover it. 00:29:30.879 [2024-10-11 12:06:33.547617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.879 [2024-10-11 12:06:33.547649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.879 qpair failed and we were unable to recover it. 00:29:30.879 [2024-10-11 12:06:33.547893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.879 [2024-10-11 12:06:33.547924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:30.879 qpair failed and we were unable to recover it. 00:29:31.156 [2024-10-11 12:06:33.548310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.156 [2024-10-11 12:06:33.548345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.156 qpair failed and we were unable to recover it. 00:29:31.156 [2024-10-11 12:06:33.548704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.156 [2024-10-11 12:06:33.548740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.156 qpair failed and we were unable to recover it. 
00:29:31.156 [2024-10-11 12:06:33.549096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.156 [2024-10-11 12:06:33.549128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.156 qpair failed and we were unable to recover it. 00:29:31.156 [2024-10-11 12:06:33.549497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.156 [2024-10-11 12:06:33.549527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.156 qpair failed and we were unable to recover it. 00:29:31.156 [2024-10-11 12:06:33.549771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.156 [2024-10-11 12:06:33.549803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.156 qpair failed and we were unable to recover it. 00:29:31.156 [2024-10-11 12:06:33.550019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.156 [2024-10-11 12:06:33.550050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.156 qpair failed and we were unable to recover it. 00:29:31.156 [2024-10-11 12:06:33.550423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.156 [2024-10-11 12:06:33.550455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.156 qpair failed and we were unable to recover it. 00:29:31.156 [2024-10-11 12:06:33.550828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.156 [2024-10-11 12:06:33.550859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.156 qpair failed and we were unable to recover it. 00:29:31.156 [2024-10-11 12:06:33.551258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.156 [2024-10-11 12:06:33.551290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.156 qpair failed and we were unable to recover it. 00:29:31.156 [2024-10-11 12:06:33.551640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.156 [2024-10-11 12:06:33.551673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.156 qpair failed and we were unable to recover it. 00:29:31.156 [2024-10-11 12:06:33.552034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.156 [2024-10-11 12:06:33.552078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.156 qpair failed and we were unable to recover it. 00:29:31.156 [2024-10-11 12:06:33.552284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.156 [2024-10-11 12:06:33.552315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.156 qpair failed and we were unable to recover it. 
00:29:31.156 [2024-10-11 12:06:33.552669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.156 [2024-10-11 12:06:33.552706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.156 qpair failed and we were unable to recover it. 00:29:31.156 [2024-10-11 12:06:33.553079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.156 [2024-10-11 12:06:33.553112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.156 qpair failed and we were unable to recover it. 00:29:31.156 [2024-10-11 12:06:33.553455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.156 [2024-10-11 12:06:33.553487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.156 qpair failed and we were unable to recover it. 00:29:31.156 [2024-10-11 12:06:33.553740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.156 [2024-10-11 12:06:33.553775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.156 qpair failed and we were unable to recover it. 00:29:31.156 [2024-10-11 12:06:33.553974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.156 [2024-10-11 12:06:33.554006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.156 qpair failed and we were unable to recover it. 00:29:31.156 [2024-10-11 12:06:33.554225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.156 [2024-10-11 12:06:33.554256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.156 qpair failed and we were unable to recover it. 00:29:31.156 [2024-10-11 12:06:33.554365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.156 [2024-10-11 12:06:33.554396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.156 qpair failed and we were unable to recover it. 00:29:31.156 [2024-10-11 12:06:33.554869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.156 [2024-10-11 12:06:33.554901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.156 qpair failed and we were unable to recover it. 00:29:31.156 [2024-10-11 12:06:33.555268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.156 [2024-10-11 12:06:33.555304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.156 qpair failed and we were unable to recover it. 00:29:31.156 [2024-10-11 12:06:33.555512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.156 [2024-10-11 12:06:33.555544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.156 qpair failed and we were unable to recover it. 
00:29:31.156 [2024-10-11 12:06:33.555757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.156 [2024-10-11 12:06:33.555790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.156 qpair failed and we were unable to recover it. 00:29:31.156 [2024-10-11 12:06:33.556177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.156 [2024-10-11 12:06:33.556211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.156 qpair failed and we were unable to recover it. 00:29:31.156 [2024-10-11 12:06:33.556585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.156 [2024-10-11 12:06:33.556616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.156 qpair failed and we were unable to recover it. 00:29:31.156 [2024-10-11 12:06:33.556975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.156 [2024-10-11 12:06:33.557009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.156 qpair failed and we were unable to recover it. 00:29:31.156 [2024-10-11 12:06:33.557421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.156 [2024-10-11 12:06:33.557455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.156 qpair failed and we were unable to recover it. 00:29:31.156 [2024-10-11 12:06:33.557828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.156 [2024-10-11 12:06:33.557863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.156 qpair failed and we were unable to recover it. 00:29:31.156 [2024-10-11 12:06:33.558228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.156 [2024-10-11 12:06:33.558263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.156 qpair failed and we were unable to recover it. 00:29:31.157 [2024-10-11 12:06:33.558643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.157 [2024-10-11 12:06:33.558676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.157 qpair failed and we were unable to recover it. 00:29:31.157 [2024-10-11 12:06:33.559057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.157 [2024-10-11 12:06:33.559120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.157 qpair failed and we were unable to recover it. 00:29:31.157 [2024-10-11 12:06:33.559478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.157 [2024-10-11 12:06:33.559509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.157 qpair failed and we were unable to recover it. 
00:29:31.157 [2024-10-11 12:06:33.559875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.157 [2024-10-11 12:06:33.559910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420
00:29:31.157 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111; sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats without interruption from 2024-10-11 12:06:33.560284 through 12:06:33.634115 ...]
00:29:31.162 [2024-10-11 12:06:33.634485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.162 [2024-10-11 12:06:33.634517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420
00:29:31.162 qpair failed and we were unable to recover it.
00:29:31.162 [2024-10-11 12:06:33.634720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.162 [2024-10-11 12:06:33.634750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.162 qpair failed and we were unable to recover it. 00:29:31.162 [2024-10-11 12:06:33.634995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.162 [2024-10-11 12:06:33.635028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.162 qpair failed and we were unable to recover it. 00:29:31.162 [2024-10-11 12:06:33.635280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.162 [2024-10-11 12:06:33.635315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.162 qpair failed and we were unable to recover it. 00:29:31.162 [2024-10-11 12:06:33.635638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.162 [2024-10-11 12:06:33.635671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.162 qpair failed and we were unable to recover it. 00:29:31.162 [2024-10-11 12:06:33.635881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.162 [2024-10-11 12:06:33.635914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.162 qpair failed and we were unable to recover it. 00:29:31.162 [2024-10-11 12:06:33.636303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.162 [2024-10-11 12:06:33.636336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.162 qpair failed and we were unable to recover it. 00:29:31.162 [2024-10-11 12:06:33.636695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.162 [2024-10-11 12:06:33.636728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.162 qpair failed and we were unable to recover it. 00:29:31.162 [2024-10-11 12:06:33.637104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.162 [2024-10-11 12:06:33.637138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.162 qpair failed and we were unable to recover it. 00:29:31.162 [2024-10-11 12:06:33.637536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.162 [2024-10-11 12:06:33.637569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.162 qpair failed and we were unable to recover it. 00:29:31.162 [2024-10-11 12:06:33.637932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.162 [2024-10-11 12:06:33.637964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.162 qpair failed and we were unable to recover it. 
00:29:31.162 [2024-10-11 12:06:33.638333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.162 [2024-10-11 12:06:33.638365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.162 qpair failed and we were unable to recover it. 00:29:31.162 [2024-10-11 12:06:33.638721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.162 [2024-10-11 12:06:33.638754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.162 qpair failed and we were unable to recover it. 00:29:31.162 [2024-10-11 12:06:33.639106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.162 [2024-10-11 12:06:33.639141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.162 qpair failed and we were unable to recover it. 00:29:31.162 [2024-10-11 12:06:33.639518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.162 [2024-10-11 12:06:33.639551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.162 qpair failed and we were unable to recover it. 00:29:31.162 [2024-10-11 12:06:33.639758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.162 [2024-10-11 12:06:33.639788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.162 qpair failed and we were unable to recover it. 00:29:31.162 [2024-10-11 12:06:33.640141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.162 [2024-10-11 12:06:33.640181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.162 qpair failed and we were unable to recover it. 00:29:31.162 [2024-10-11 12:06:33.640418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.162 [2024-10-11 12:06:33.640450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.162 qpair failed and we were unable to recover it. 00:29:31.162 [2024-10-11 12:06:33.640819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.162 [2024-10-11 12:06:33.640851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.162 qpair failed and we were unable to recover it. 00:29:31.162 [2024-10-11 12:06:33.641220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.162 [2024-10-11 12:06:33.641252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.162 qpair failed and we were unable to recover it. 00:29:31.162 [2024-10-11 12:06:33.641495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.162 [2024-10-11 12:06:33.641526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.162 qpair failed and we were unable to recover it. 
00:29:31.162 [2024-10-11 12:06:33.641961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.162 [2024-10-11 12:06:33.641993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.162 qpair failed and we were unable to recover it. 00:29:31.162 [2024-10-11 12:06:33.642355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.162 [2024-10-11 12:06:33.642388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.162 qpair failed and we were unable to recover it. 00:29:31.162 [2024-10-11 12:06:33.642603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.162 [2024-10-11 12:06:33.642633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.162 qpair failed and we were unable to recover it. 00:29:31.162 [2024-10-11 12:06:33.642862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.162 [2024-10-11 12:06:33.642894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.162 qpair failed and we were unable to recover it. 00:29:31.162 [2024-10-11 12:06:33.643277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-10-11 12:06:33.643311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-10-11 12:06:33.643683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-10-11 12:06:33.643715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-10-11 12:06:33.643957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-10-11 12:06:33.643994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-10-11 12:06:33.644256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-10-11 12:06:33.644293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-10-11 12:06:33.644518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-10-11 12:06:33.644550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-10-11 12:06:33.644893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-10-11 12:06:33.644927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 
00:29:31.163 [2024-10-11 12:06:33.645298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-10-11 12:06:33.645331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-10-11 12:06:33.645700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-10-11 12:06:33.645731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-10-11 12:06:33.646092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-10-11 12:06:33.646126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-10-11 12:06:33.646481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-10-11 12:06:33.646513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-10-11 12:06:33.646913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-10-11 12:06:33.646945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-10-11 12:06:33.647312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-10-11 12:06:33.647345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-10-11 12:06:33.647694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-10-11 12:06:33.647727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-10-11 12:06:33.648097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-10-11 12:06:33.648132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-10-11 12:06:33.648494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-10-11 12:06:33.648528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-10-11 12:06:33.648903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-10-11 12:06:33.648937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 
00:29:31.163 [2024-10-11 12:06:33.649305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-10-11 12:06:33.649338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-10-11 12:06:33.649547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-10-11 12:06:33.649578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-10-11 12:06:33.649948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-10-11 12:06:33.649980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-10-11 12:06:33.650342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-10-11 12:06:33.650375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-10-11 12:06:33.650727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-10-11 12:06:33.650760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-10-11 12:06:33.651118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-10-11 12:06:33.651152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-10-11 12:06:33.651490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-10-11 12:06:33.651523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-10-11 12:06:33.651878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-10-11 12:06:33.651910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-10-11 12:06:33.652301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-10-11 12:06:33.652336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-10-11 12:06:33.652710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-10-11 12:06:33.652742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 
00:29:31.163 [2024-10-11 12:06:33.653157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-10-11 12:06:33.653191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-10-11 12:06:33.653554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-10-11 12:06:33.653587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-10-11 12:06:33.653841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-10-11 12:06:33.653875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-10-11 12:06:33.654241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-10-11 12:06:33.654275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-10-11 12:06:33.654615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-10-11 12:06:33.654649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-10-11 12:06:33.655018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-10-11 12:06:33.655050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-10-11 12:06:33.655452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-10-11 12:06:33.655486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-10-11 12:06:33.655853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-10-11 12:06:33.655886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-10-11 12:06:33.656120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-10-11 12:06:33.656153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-10-11 12:06:33.656513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-10-11 12:06:33.656546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 
00:29:31.163 [2024-10-11 12:06:33.656885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-10-11 12:06:33.656918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-10-11 12:06:33.657294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-10-11 12:06:33.657327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-10-11 12:06:33.657684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-10-11 12:06:33.657716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-10-11 12:06:33.658096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-10-11 12:06:33.658129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.164 [2024-10-11 12:06:33.658497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-10-11 12:06:33.658529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.164 [2024-10-11 12:06:33.658867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-10-11 12:06:33.658900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.164 [2024-10-11 12:06:33.659271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-10-11 12:06:33.659303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.164 [2024-10-11 12:06:33.659621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-10-11 12:06:33.659655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.164 [2024-10-11 12:06:33.660010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-10-11 12:06:33.660042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.164 [2024-10-11 12:06:33.660441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-10-11 12:06:33.660475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 
00:29:31.164 [2024-10-11 12:06:33.660700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-10-11 12:06:33.660732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.164 [2024-10-11 12:06:33.660889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-10-11 12:06:33.660923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.164 [2024-10-11 12:06:33.661259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-10-11 12:06:33.661293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.164 [2024-10-11 12:06:33.661671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-10-11 12:06:33.661704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.164 [2024-10-11 12:06:33.661959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-10-11 12:06:33.661991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.164 [2024-10-11 12:06:33.662242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-10-11 12:06:33.662276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.164 [2024-10-11 12:06:33.662636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-10-11 12:06:33.662666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.164 [2024-10-11 12:06:33.663016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-10-11 12:06:33.663048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.164 [2024-10-11 12:06:33.663440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-10-11 12:06:33.663472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.164 [2024-10-11 12:06:33.663834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-10-11 12:06:33.663868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 
00:29:31.164 [2024-10-11 12:06:33.664229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-10-11 12:06:33.664263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.164 [2024-10-11 12:06:33.664635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-10-11 12:06:33.664667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.164 [2024-10-11 12:06:33.665036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-10-11 12:06:33.665084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.164 [2024-10-11 12:06:33.665326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-10-11 12:06:33.665364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.164 [2024-10-11 12:06:33.665722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-10-11 12:06:33.665755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.164 [2024-10-11 12:06:33.665999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-10-11 12:06:33.666033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.164 [2024-10-11 12:06:33.666352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-10-11 12:06:33.666388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.164 [2024-10-11 12:06:33.666717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-10-11 12:06:33.666751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.164 [2024-10-11 12:06:33.667009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-10-11 12:06:33.667042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.164 [2024-10-11 12:06:33.667293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-10-11 12:06:33.667328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 
00:29:31.164 [2024-10-11 12:06:33.667578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-10-11 12:06:33.667611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.164 [2024-10-11 12:06:33.667987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-10-11 12:06:33.668025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.164 [2024-10-11 12:06:33.668409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-10-11 12:06:33.668446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.164 [2024-10-11 12:06:33.668788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-10-11 12:06:33.668822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.164 [2024-10-11 12:06:33.669185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-10-11 12:06:33.669220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.164 [2024-10-11 12:06:33.669583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-10-11 12:06:33.669616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.164 [2024-10-11 12:06:33.669976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-10-11 12:06:33.670007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.164 [2024-10-11 12:06:33.670366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-10-11 12:06:33.670400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.164 [2024-10-11 12:06:33.670759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-10-11 12:06:33.670794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.164 [2024-10-11 12:06:33.671041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-10-11 12:06:33.671086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 
00:29:31.164 [2024-10-11 12:06:33.671468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-10-11 12:06:33.671502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.165 [2024-10-11 12:06:33.671864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-10-11 12:06:33.671897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-10-11 12:06:33.672277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-10-11 12:06:33.672310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-10-11 12:06:33.672664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-10-11 12:06:33.672697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-10-11 12:06:33.673060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-10-11 12:06:33.673108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-10-11 12:06:33.673462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-10-11 12:06:33.673493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-10-11 12:06:33.673866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-10-11 12:06:33.673898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-10-11 12:06:33.674140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-10-11 12:06:33.674171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-10-11 12:06:33.674572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-10-11 12:06:33.674604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-10-11 12:06:33.674970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-10-11 12:06:33.675003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 
00:29:31.165 [2024-10-11 12:06:33.675216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-10-11 12:06:33.675248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-10-11 12:06:33.675488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-10-11 12:06:33.675520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-10-11 12:06:33.675893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-10-11 12:06:33.675927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-10-11 12:06:33.676309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-10-11 12:06:33.676343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-10-11 12:06:33.676696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-10-11 12:06:33.676729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-10-11 12:06:33.676930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-10-11 12:06:33.676960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-10-11 12:06:33.677172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-10-11 12:06:33.677205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-10-11 12:06:33.677608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-10-11 12:06:33.677642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-10-11 12:06:33.677884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-10-11 12:06:33.677918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-10-11 12:06:33.678302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-10-11 12:06:33.678337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 
00:29:31.165 [2024-10-11 12:06:33.678701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-10-11 12:06:33.678734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-10-11 12:06:33.679085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-10-11 12:06:33.679118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-10-11 12:06:33.679361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-10-11 12:06:33.679395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-10-11 12:06:33.679765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-10-11 12:06:33.679797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-10-11 12:06:33.680266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-10-11 12:06:33.680306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-10-11 12:06:33.680661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-10-11 12:06:33.680693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-10-11 12:06:33.680898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-10-11 12:06:33.680928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-10-11 12:06:33.681183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-10-11 12:06:33.681216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-10-11 12:06:33.681583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-10-11 12:06:33.681617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-10-11 12:06:33.681849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-10-11 12:06:33.681884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 
00:29:31.165 [2024-10-11 12:06:33.682283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-10-11 12:06:33.682318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-10-11 12:06:33.682666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-10-11 12:06:33.682698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-10-11 12:06:33.682911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-10-11 12:06:33.682941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-10-11 12:06:33.683322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-10-11 12:06:33.683356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-10-11 12:06:33.683713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-10-11 12:06:33.683746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-10-11 12:06:33.683968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-10-11 12:06:33.684002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-10-11 12:06:33.684370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-10-11 12:06:33.684404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-10-11 12:06:33.684617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-10-11 12:06:33.684649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-10-11 12:06:33.684892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-10-11 12:06:33.684925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-10-11 12:06:33.685162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-10-11 12:06:33.685196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 
00:29:31.166 [2024-10-11 12:06:33.685445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-10-11 12:06:33.685475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-10-11 12:06:33.685572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-10-11 12:06:33.685601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-10-11 12:06:33.685832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-10-11 12:06:33.685866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-10-11 12:06:33.686095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-10-11 12:06:33.686131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-10-11 12:06:33.686361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-10-11 12:06:33.686394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-10-11 12:06:33.686764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-10-11 12:06:33.686797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-10-11 12:06:33.687172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-10-11 12:06:33.687206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-10-11 12:06:33.687571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-10-11 12:06:33.687603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-10-11 12:06:33.687970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-10-11 12:06:33.688001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-10-11 12:06:33.688437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-10-11 12:06:33.688470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 
00:29:31.166 [2024-10-11 12:06:33.688836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-10-11 12:06:33.688869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-10-11 12:06:33.689235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-10-11 12:06:33.689281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-10-11 12:06:33.689621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-10-11 12:06:33.689653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-10-11 12:06:33.690023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-10-11 12:06:33.690055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-10-11 12:06:33.690491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-10-11 12:06:33.690523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-10-11 12:06:33.690886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-10-11 12:06:33.690920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-10-11 12:06:33.691295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-10-11 12:06:33.691330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-10-11 12:06:33.691759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-10-11 12:06:33.691792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-10-11 12:06:33.691995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-10-11 12:06:33.692028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-10-11 12:06:33.692428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-10-11 12:06:33.692462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 
00:29:31.166 [2024-10-11 12:06:33.692908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-10-11 12:06:33.692942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-10-11 12:06:33.693308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-10-11 12:06:33.693340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-10-11 12:06:33.693551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-10-11 12:06:33.693582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-10-11 12:06:33.693943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-10-11 12:06:33.693974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-10-11 12:06:33.694343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-10-11 12:06:33.694376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-10-11 12:06:33.694617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-10-11 12:06:33.694651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-10-11 12:06:33.694852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-10-11 12:06:33.694882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-10-11 12:06:33.695284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-10-11 12:06:33.695317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-10-11 12:06:33.695677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-10-11 12:06:33.695710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-10-11 12:06:33.695914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-10-11 12:06:33.695945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 
00:29:31.166 [2024-10-11 12:06:33.696318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-10-11 12:06:33.696353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-10-11 12:06:33.696727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-10-11 12:06:33.696761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-10-11 12:06:33.697119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-10-11 12:06:33.697153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-10-11 12:06:33.697533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-10-11 12:06:33.697567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-10-11 12:06:33.697929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-10-11 12:06:33.697964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-10-11 12:06:33.698328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-10-11 12:06:33.698361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-10-11 12:06:33.698719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-10-11 12:06:33.698752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-10-11 12:06:33.699117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-10-11 12:06:33.699150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-10-11 12:06:33.699511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-10-11 12:06:33.699544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-10-11 12:06:33.699919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-10-11 12:06:33.699953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 
00:29:31.166 [2024-10-11 12:06:33.700302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-10-11 12:06:33.700337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.167 [2024-10-11 12:06:33.700694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.167 [2024-10-11 12:06:33.700727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.167 qpair failed and we were unable to recover it. 00:29:31.167 [2024-10-11 12:06:33.701093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.167 [2024-10-11 12:06:33.701127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.167 qpair failed and we were unable to recover it. 00:29:31.167 [2024-10-11 12:06:33.701373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.167 [2024-10-11 12:06:33.701406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.167 qpair failed and we were unable to recover it. 00:29:31.167 [2024-10-11 12:06:33.701633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.167 [2024-10-11 12:06:33.701666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.167 qpair failed and we were unable to recover it. 00:29:31.167 [2024-10-11 12:06:33.701915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.167 [2024-10-11 12:06:33.701952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.167 qpair failed and we were unable to recover it. 00:29:31.167 [2024-10-11 12:06:33.702197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.167 [2024-10-11 12:06:33.702230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.167 qpair failed and we were unable to recover it. 00:29:31.167 [2024-10-11 12:06:33.702615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.167 [2024-10-11 12:06:33.702649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.167 qpair failed and we were unable to recover it. 00:29:31.167 [2024-10-11 12:06:33.702857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.167 [2024-10-11 12:06:33.702889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.167 qpair failed and we were unable to recover it. 00:29:31.167 [2024-10-11 12:06:33.703244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.167 [2024-10-11 12:06:33.703281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.167 qpair failed and we were unable to recover it. 
00:29:31.167 [2024-10-11 12:06:33.703511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.167 [2024-10-11 12:06:33.703543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.167 qpair failed and we were unable to recover it. 00:29:31.167 [2024-10-11 12:06:33.703781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.167 [2024-10-11 12:06:33.703813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.167 qpair failed and we were unable to recover it. 00:29:31.167 [2024-10-11 12:06:33.704083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.167 [2024-10-11 12:06:33.704122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.167 qpair failed and we were unable to recover it. 00:29:31.167 [2024-10-11 12:06:33.704487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.167 [2024-10-11 12:06:33.704521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.167 qpair failed and we were unable to recover it. 00:29:31.167 [2024-10-11 12:06:33.704744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.167 [2024-10-11 12:06:33.704774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.167 qpair failed and we were unable to recover it. 00:29:31.167 [2024-10-11 12:06:33.705041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.167 [2024-10-11 12:06:33.705085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.167 qpair failed and we were unable to recover it. 00:29:31.167 [2024-10-11 12:06:33.705308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.167 [2024-10-11 12:06:33.705340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.167 qpair failed and we were unable to recover it. 00:29:31.167 [2024-10-11 12:06:33.705695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.167 [2024-10-11 12:06:33.705726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.167 qpair failed and we were unable to recover it. 00:29:31.167 [2024-10-11 12:06:33.706089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.167 [2024-10-11 12:06:33.706122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.167 qpair failed and we were unable to recover it. 00:29:31.167 [2024-10-11 12:06:33.706484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.167 [2024-10-11 12:06:33.706518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.167 qpair failed and we were unable to recover it. 
00:29:31.167 [2024-10-11 12:06:33.706877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.167 [2024-10-11 12:06:33.706910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.167 qpair failed and we were unable to recover it. 00:29:31.167 [2024-10-11 12:06:33.707242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.167 [2024-10-11 12:06:33.707277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.167 qpair failed and we were unable to recover it. 00:29:31.167 [2024-10-11 12:06:33.707651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.167 [2024-10-11 12:06:33.707684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.167 qpair failed and we were unable to recover it. 00:29:31.167 [2024-10-11 12:06:33.708048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.167 [2024-10-11 12:06:33.708127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.167 qpair failed and we were unable to recover it. 00:29:31.167 [2024-10-11 12:06:33.708520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.167 [2024-10-11 12:06:33.708552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.167 qpair failed and we were unable to recover it. 00:29:31.167 [2024-10-11 12:06:33.708917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.167 [2024-10-11 12:06:33.708951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.167 qpair failed and we were unable to recover it. 00:29:31.167 [2024-10-11 12:06:33.709328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.167 [2024-10-11 12:06:33.709362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.167 qpair failed and we were unable to recover it. 00:29:31.167 [2024-10-11 12:06:33.709722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.167 [2024-10-11 12:06:33.709754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.167 qpair failed and we were unable to recover it. 00:29:31.167 [2024-10-11 12:06:33.710116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.167 [2024-10-11 12:06:33.710149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.167 qpair failed and we were unable to recover it. 00:29:31.167 [2024-10-11 12:06:33.710525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.167 [2024-10-11 12:06:33.710556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.167 qpair failed and we were unable to recover it. 
00:29:31.167 [2024-10-11 12:06:33.710919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.167 [2024-10-11 12:06:33.710951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.167 qpair failed and we were unable to recover it. 00:29:31.167 [2024-10-11 12:06:33.711324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.167 [2024-10-11 12:06:33.711358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.167 qpair failed and we were unable to recover it. 00:29:31.167 [2024-10-11 12:06:33.711724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.167 [2024-10-11 12:06:33.711758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.167 qpair failed and we were unable to recover it. 00:29:31.167 [2024-10-11 12:06:33.711997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.167 [2024-10-11 12:06:33.712029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.167 qpair failed and we were unable to recover it. 00:29:31.167 [2024-10-11 12:06:33.712406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.167 [2024-10-11 12:06:33.712440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.167 qpair failed and we were unable to recover it. 00:29:31.167 [2024-10-11 12:06:33.712814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.167 [2024-10-11 12:06:33.712848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.167 qpair failed and we were unable to recover it. 00:29:31.167 [2024-10-11 12:06:33.713218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.167 [2024-10-11 12:06:33.713249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.167 qpair failed and we were unable to recover it. 00:29:31.167 [2024-10-11 12:06:33.713684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.167 [2024-10-11 12:06:33.713717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.167 qpair failed and we were unable to recover it. 00:29:31.167 [2024-10-11 12:06:33.714086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.167 [2024-10-11 12:06:33.714119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.167 qpair failed and we were unable to recover it. 00:29:31.167 [2024-10-11 12:06:33.714349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.167 [2024-10-11 12:06:33.714386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.167 qpair failed and we were unable to recover it. 
00:29:31.167 [2024-10-11 12:06:33.714767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.167 [2024-10-11 12:06:33.714799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.167 qpair failed and we were unable to recover it. 00:29:31.167 [2024-10-11 12:06:33.715008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.168 [2024-10-11 12:06:33.715039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.168 qpair failed and we were unable to recover it. 00:29:31.168 [2024-10-11 12:06:33.715449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.168 [2024-10-11 12:06:33.715480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.168 qpair failed and we were unable to recover it. 00:29:31.168 [2024-10-11 12:06:33.715831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.168 [2024-10-11 12:06:33.715864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.168 qpair failed and we were unable to recover it. 00:29:31.168 [2024-10-11 12:06:33.716238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.168 [2024-10-11 12:06:33.716272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.168 qpair failed and we were unable to recover it. 00:29:31.168 [2024-10-11 12:06:33.716643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.168 [2024-10-11 12:06:33.716675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.168 qpair failed and we were unable to recover it. 00:29:31.168 [2024-10-11 12:06:33.717049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.168 [2024-10-11 12:06:33.717095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.168 qpair failed and we were unable to recover it. 00:29:31.168 [2024-10-11 12:06:33.717448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.168 [2024-10-11 12:06:33.717480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.168 qpair failed and we were unable to recover it. 00:29:31.168 [2024-10-11 12:06:33.717839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.168 [2024-10-11 12:06:33.717872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.168 qpair failed and we were unable to recover it. 00:29:31.168 [2024-10-11 12:06:33.718240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.168 [2024-10-11 12:06:33.718273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.168 qpair failed and we were unable to recover it. 
00:29:31.168 [2024-10-11 12:06:33.718451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.168 [2024-10-11 12:06:33.718482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.168 qpair failed and we were unable to recover it. 00:29:31.168 [2024-10-11 12:06:33.718703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.168 [2024-10-11 12:06:33.718734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.168 qpair failed and we were unable to recover it. 00:29:31.168 [2024-10-11 12:06:33.718996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.168 [2024-10-11 12:06:33.719031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.168 qpair failed and we were unable to recover it. 00:29:31.168 [2024-10-11 12:06:33.719268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.168 [2024-10-11 12:06:33.719300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.168 qpair failed and we were unable to recover it. 00:29:31.168 [2024-10-11 12:06:33.719512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.168 [2024-10-11 12:06:33.719544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.168 qpair failed and we were unable to recover it. 00:29:31.168 [2024-10-11 12:06:33.719798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.168 [2024-10-11 12:06:33.719828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.168 qpair failed and we were unable to recover it. 00:29:31.168 [2024-10-11 12:06:33.720088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.168 [2024-10-11 12:06:33.720122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.168 qpair failed and we were unable to recover it. 00:29:31.168 [2024-10-11 12:06:33.720525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.168 [2024-10-11 12:06:33.720558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.168 qpair failed and we were unable to recover it. 00:29:31.168 [2024-10-11 12:06:33.720801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.168 [2024-10-11 12:06:33.720832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.168 qpair failed and we were unable to recover it. 00:29:31.168 [2024-10-11 12:06:33.721196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.168 [2024-10-11 12:06:33.721230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.168 qpair failed and we were unable to recover it. 
00:29:31.168 [2024-10-11 12:06:33.721597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.168 [2024-10-11 12:06:33.721630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.168 qpair failed and we were unable to recover it. 00:29:31.168 [2024-10-11 12:06:33.721999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.168 [2024-10-11 12:06:33.722031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.168 qpair failed and we were unable to recover it. 00:29:31.168 [2024-10-11 12:06:33.722277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.168 [2024-10-11 12:06:33.722310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.168 qpair failed and we were unable to recover it. 00:29:31.168 [2024-10-11 12:06:33.722622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.168 [2024-10-11 12:06:33.722656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.168 qpair failed and we were unable to recover it. 00:29:31.168 [2024-10-11 12:06:33.723026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.168 [2024-10-11 12:06:33.723058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.168 qpair failed and we were unable to recover it. 00:29:31.168 [2024-10-11 12:06:33.723395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.168 [2024-10-11 12:06:33.723428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.168 qpair failed and we were unable to recover it. 00:29:31.168 [2024-10-11 12:06:33.723771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.168 [2024-10-11 12:06:33.723803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.168 qpair failed and we were unable to recover it. 00:29:31.168 [2024-10-11 12:06:33.724165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.168 [2024-10-11 12:06:33.724199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.168 qpair failed and we were unable to recover it. 00:29:31.168 [2024-10-11 12:06:33.724565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.168 [2024-10-11 12:06:33.724595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.168 qpair failed and we were unable to recover it. 00:29:31.168 [2024-10-11 12:06:33.724959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.168 [2024-10-11 12:06:33.724993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.168 qpair failed and we were unable to recover it. 
00:29:31.168 [2024-10-11 12:06:33.725361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.168 [2024-10-11 12:06:33.725395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.168 qpair failed and we were unable to recover it. 00:29:31.168 [2024-10-11 12:06:33.725777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.168 [2024-10-11 12:06:33.725810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.168 qpair failed and we were unable to recover it. 00:29:31.168 [2024-10-11 12:06:33.726058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.168 [2024-10-11 12:06:33.726102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.168 qpair failed and we were unable to recover it. 00:29:31.168 [2024-10-11 12:06:33.726345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.168 [2024-10-11 12:06:33.726378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.168 qpair failed and we were unable to recover it. 00:29:31.168 [2024-10-11 12:06:33.726735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.168 [2024-10-11 12:06:33.726767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.168 qpair failed and we were unable to recover it. 00:29:31.168 [2024-10-11 12:06:33.726936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.168 [2024-10-11 12:06:33.726966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.168 qpair failed and we were unable to recover it. 00:29:31.168 [2024-10-11 12:06:33.727327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.168 [2024-10-11 12:06:33.727361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.168 qpair failed and we were unable to recover it. 00:29:31.168 [2024-10-11 12:06:33.727781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.168 [2024-10-11 12:06:33.727813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.168 qpair failed and we were unable to recover it. 00:29:31.168 [2024-10-11 12:06:33.728046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.168 [2024-10-11 12:06:33.728093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.168 qpair failed and we were unable to recover it. 00:29:31.168 [2024-10-11 12:06:33.728352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.168 [2024-10-11 12:06:33.728384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.168 qpair failed and we were unable to recover it. 
00:29:31.168 [2024-10-11 12:06:33.728531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.168 [2024-10-11 12:06:33.728570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.168 qpair failed and we were unable to recover it. 00:29:31.168 [2024-10-11 12:06:33.728831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.168 [2024-10-11 12:06:33.728866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.168 qpair failed and we were unable to recover it. 00:29:31.168 [2024-10-11 12:06:33.729279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.169 [2024-10-11 12:06:33.729313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.169 qpair failed and we were unable to recover it. 00:29:31.169 [2024-10-11 12:06:33.729555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.169 [2024-10-11 12:06:33.729586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.169 qpair failed and we were unable to recover it. 00:29:31.169 [2024-10-11 12:06:33.729813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.169 [2024-10-11 12:06:33.729844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.169 qpair failed and we were unable to recover it. 00:29:31.169 [2024-10-11 12:06:33.730196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.169 [2024-10-11 12:06:33.730229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.169 qpair failed and we were unable to recover it. 00:29:31.169 [2024-10-11 12:06:33.730597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.169 [2024-10-11 12:06:33.730629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.169 qpair failed and we were unable to recover it. 00:29:31.169 [2024-10-11 12:06:33.730855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.169 [2024-10-11 12:06:33.730885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.169 qpair failed and we were unable to recover it. 00:29:31.169 [2024-10-11 12:06:33.731130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.169 [2024-10-11 12:06:33.731166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.169 qpair failed and we were unable to recover it. 00:29:31.169 [2024-10-11 12:06:33.731532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.169 [2024-10-11 12:06:33.731565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.169 qpair failed and we were unable to recover it. 
00:29:31.169 [2024-10-11 12:06:33.731773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.169 [2024-10-11 12:06:33.731805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.169 qpair failed and we were unable to recover it. 00:29:31.169 [2024-10-11 12:06:33.732028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.169 [2024-10-11 12:06:33.732058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.169 qpair failed and we were unable to recover it. 00:29:31.169 [2024-10-11 12:06:33.732235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.169 [2024-10-11 12:06:33.732267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.169 qpair failed and we were unable to recover it. 00:29:31.169 [2024-10-11 12:06:33.732582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.169 [2024-10-11 12:06:33.732613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.169 qpair failed and we were unable to recover it. 00:29:31.169 [2024-10-11 12:06:33.732966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.169 [2024-10-11 12:06:33.733000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.169 qpair failed and we were unable to recover it. 00:29:31.169 [2024-10-11 12:06:33.733367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.169 [2024-10-11 12:06:33.733401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.169 qpair failed and we were unable to recover it. 00:29:31.169 [2024-10-11 12:06:33.733621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.169 [2024-10-11 12:06:33.733653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.169 qpair failed and we were unable to recover it. 00:29:31.169 [2024-10-11 12:06:33.734034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.169 [2024-10-11 12:06:33.734078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.169 qpair failed and we were unable to recover it. 00:29:31.169 [2024-10-11 12:06:33.734312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.169 [2024-10-11 12:06:33.734343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.169 qpair failed and we were unable to recover it. 00:29:31.169 [2024-10-11 12:06:33.734705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.169 [2024-10-11 12:06:33.734736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.169 qpair failed and we were unable to recover it. 
00:29:31.169 [2024-10-11 12:06:33.735124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.169 [2024-10-11 12:06:33.735157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.169 qpair failed and we were unable to recover it. 00:29:31.169 [2024-10-11 12:06:33.735539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.169 [2024-10-11 12:06:33.735571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.169 qpair failed and we were unable to recover it. 00:29:31.169 [2024-10-11 12:06:33.735938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.169 [2024-10-11 12:06:33.735972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.169 qpair failed and we were unable to recover it. 00:29:31.169 [2024-10-11 12:06:33.736330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.169 [2024-10-11 12:06:33.736361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.169 qpair failed and we were unable to recover it. 00:29:31.169 [2024-10-11 12:06:33.736720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.169 [2024-10-11 12:06:33.736752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.169 qpair failed and we were unable to recover it. 00:29:31.169 [2024-10-11 12:06:33.737132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.169 [2024-10-11 12:06:33.737166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.169 qpair failed and we were unable to recover it. 00:29:31.169 [2024-10-11 12:06:33.737545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.169 [2024-10-11 12:06:33.737578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.169 qpair failed and we were unable to recover it. 00:29:31.169 [2024-10-11 12:06:33.737939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.169 [2024-10-11 12:06:33.737978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.169 qpair failed and we were unable to recover it. 00:29:31.169 [2024-10-11 12:06:33.738337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.169 [2024-10-11 12:06:33.738370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.169 qpair failed and we were unable to recover it. 00:29:31.169 [2024-10-11 12:06:33.738723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.169 [2024-10-11 12:06:33.738755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.169 qpair failed and we were unable to recover it. 
00:29:31.169 [2024-10-11 12:06:33.739133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.169 [2024-10-11 12:06:33.739166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.169 qpair failed and we were unable to recover it. 00:29:31.169 [2024-10-11 12:06:33.739517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.169 [2024-10-11 12:06:33.739548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.169 qpair failed and we were unable to recover it. 00:29:31.169 [2024-10-11 12:06:33.739907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.169 [2024-10-11 12:06:33.739938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.169 qpair failed and we were unable to recover it. 00:29:31.169 [2024-10-11 12:06:33.740309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.169 [2024-10-11 12:06:33.740343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.169 qpair failed and we were unable to recover it. 00:29:31.169 [2024-10-11 12:06:33.740692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.169 [2024-10-11 12:06:33.740726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.169 qpair failed and we were unable to recover it. 00:29:31.169 [2024-10-11 12:06:33.741097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.169 [2024-10-11 12:06:33.741132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.169 qpair failed and we were unable to recover it. 00:29:31.169 [2024-10-11 12:06:33.741520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.169 [2024-10-11 12:06:33.741553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.169 qpair failed and we were unable to recover it. 00:29:31.169 [2024-10-11 12:06:33.741912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.169 [2024-10-11 12:06:33.741943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.169 qpair failed and we were unable to recover it. 00:29:31.169 [2024-10-11 12:06:33.742323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.170 [2024-10-11 12:06:33.742357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.170 qpair failed and we were unable to recover it. 00:29:31.170 [2024-10-11 12:06:33.742716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.170 [2024-10-11 12:06:33.742748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.170 qpair failed and we were unable to recover it. 
00:29:31.170 [2024-10-11 12:06:33.743113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.170 [2024-10-11 12:06:33.743148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420
00:29:31.170 qpair failed and we were unable to recover it.
[... the same three-record failure (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 12:06:33.743 through 12:06:33.753 ...]
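Editor's note: errno = 111 in the records above is Linux ECONNREFUSED, i.e. the target at 10.0.0.2 is reachable but nothing is accepting connections on TCP port 4420 (the NVMe/TCP default), so each reconnect attempt from SPDK's posix socket layer is actively refused. The sketch below is illustrative only, not SPDK's posix_sock_create(); it reproduces the same failure mode with a plain blocking connect(), and it points at 127.0.0.1 under the assumption that nothing is listening on port 4420 locally.

/* Minimal sketch: connect() to a TCP port with no listener fails with
 * errno 111 (ECONNREFUSED), matching the posix.c:1055 records in this log.
 * Not SPDK code; SPDK's socket layer is non-blocking and more involved. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                    /* NVMe/TCP default port, as in the log */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr); /* assumption: no local listener on 4420 */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With no listener this prints: connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

Built with any C compiler and run on a host with no listener on that port, it prints "connect() failed, errno = 111 (Connection refused)".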
[... five more identical tqpair=0xc817c0 connection failures between 12:06:33.753813 and 12:06:33.754801, each ending "qpair failed and we were unable to recover it." ...]
00:29:31.170 Read completed with error (sct=0, sc=8)
00:29:31.170 starting I/O failed
[... every remaining outstanding Read/Write on the qpair completes the same way, "Read completed with error (sct=0, sc=8)" or "Write completed with error (sct=0, sc=8)", each followed by "starting I/O failed" ...]
00:29:31.171 [2024-10-11 12:06:33.755637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:31.171 [2024-10-11 12:06:33.755761] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc7f5e0 (9): Bad file descriptor
00:29:31.171 [2024-10-11 12:06:33.756507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.171 [2024-10-11 12:06:33.756614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420
00:29:31.171 qpair failed and we were unable to recover it.
[... the reconnect attempts continue against tqpair=0x7fb02c000b90, failing the same way through 12:06:33.758543 ...]
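Editor's note: the burst just above is the controller-side view of the same outage. Once the socket is gone, spdk_nvme_qpair_process_completions reports CQ transport error -6, which corresponds to -ENXIO ("No such device or address", as the log itself spells out), and every outstanding Read/Write is completed with (sct=0, sc=8), each followed by the test application's "starting I/O failed" message. Assuming sct/sc are the NVMe completion-entry Status Code Type and Status Code fields, sct=0 selects the Generic Command Status set, where 0x08 is "Command Aborted due to SQ Deletion". The decoder below is a stand-alone sketch with locally defined strings; SPDK carries its own SPDK_NVME_SC_* constants in nvme_spec.h.

/* Stand-alone decoder for the (sct, sc) pairs seen above, assuming they are
 * the NVMe CQE Status Code Type and Status Code. Only the codes relevant to
 * this window are spelled out; names/strings are local to this sketch. */
#include <stdio.h>

static const char *decode_status(int sct, int sc)
{
    if (sct != 0) {
        return "non-generic status code type (not seen in this log)";
    }
    switch (sc) {
    case 0x00: return "Successful Completion";
    case 0x08: return "Command Aborted due to SQ Deletion";
    default:   return "other generic command status";
    }
}

int main(void)
{
    /* (sct=0, sc=8) from the Read/Write completions above */
    printf("sct=0 sc=8 -> %s\n", decode_status(0, 0x8));
    return 0;
}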
00:29:31.171 [2024-10-11 12:06:33.758824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.171 [2024-10-11 12:06:33.758865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420
00:29:31.171 qpair failed and we were unable to recover it.
[... the identical connect() failed, errno = 111 / sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. sequence keeps repeating for every subsequent reconnect attempt through 12:06:33.816 (elapsed 00:29:31.171 to 00:29:31.175) ...]
00:29:31.175 [2024-10-11 12:06:33.816780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.175 [2024-10-11 12:06:33.816814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.175 qpair failed and we were unable to recover it. 00:29:31.175 [2024-10-11 12:06:33.817173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.175 [2024-10-11 12:06:33.817209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.175 qpair failed and we were unable to recover it. 00:29:31.175 [2024-10-11 12:06:33.817574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.175 [2024-10-11 12:06:33.817609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.175 qpair failed and we were unable to recover it. 00:29:31.175 [2024-10-11 12:06:33.817964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.175 [2024-10-11 12:06:33.817998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.175 qpair failed and we were unable to recover it. 00:29:31.175 [2024-10-11 12:06:33.818364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.175 [2024-10-11 12:06:33.818398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.175 qpair failed and we were unable to recover it. 00:29:31.175 [2024-10-11 12:06:33.818620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.175 [2024-10-11 12:06:33.818651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.175 qpair failed and we were unable to recover it. 00:29:31.175 [2024-10-11 12:06:33.819020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.175 [2024-10-11 12:06:33.819054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.175 qpair failed and we were unable to recover it. 00:29:31.175 [2024-10-11 12:06:33.819447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.175 [2024-10-11 12:06:33.819482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.175 qpair failed and we were unable to recover it. 00:29:31.175 [2024-10-11 12:06:33.819846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.175 [2024-10-11 12:06:33.819879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.175 qpair failed and we were unable to recover it. 00:29:31.175 [2024-10-11 12:06:33.820119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.175 [2024-10-11 12:06:33.820158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.175 qpair failed and we were unable to recover it. 
00:29:31.175 [2024-10-11 12:06:33.820506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.175 [2024-10-11 12:06:33.820540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.175 qpair failed and we were unable to recover it. 00:29:31.175 [2024-10-11 12:06:33.820964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.175 [2024-10-11 12:06:33.820996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.175 qpair failed and we were unable to recover it. 00:29:31.175 [2024-10-11 12:06:33.821250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.175 [2024-10-11 12:06:33.821285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.175 qpair failed and we were unable to recover it. 00:29:31.175 [2024-10-11 12:06:33.821532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.175 [2024-10-11 12:06:33.821568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.175 qpair failed and we were unable to recover it. 00:29:31.175 [2024-10-11 12:06:33.821781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.175 [2024-10-11 12:06:33.821816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.175 qpair failed and we were unable to recover it. 00:29:31.175 [2024-10-11 12:06:33.822090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.175 [2024-10-11 12:06:33.822128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.175 qpair failed and we were unable to recover it. 00:29:31.175 [2024-10-11 12:06:33.822366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.175 [2024-10-11 12:06:33.822400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.175 qpair failed and we were unable to recover it. 00:29:31.175 [2024-10-11 12:06:33.822625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.175 [2024-10-11 12:06:33.822658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.175 qpair failed and we were unable to recover it. 00:29:31.175 [2024-10-11 12:06:33.822900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.175 [2024-10-11 12:06:33.822933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.175 qpair failed and we were unable to recover it. 00:29:31.175 [2024-10-11 12:06:33.823326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.175 [2024-10-11 12:06:33.823359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.175 qpair failed and we were unable to recover it. 
00:29:31.175 [2024-10-11 12:06:33.823712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.175 [2024-10-11 12:06:33.823752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.175 qpair failed and we were unable to recover it. 00:29:31.175 [2024-10-11 12:06:33.824124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.175 [2024-10-11 12:06:33.824161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.175 qpair failed and we were unable to recover it. 00:29:31.175 [2024-10-11 12:06:33.824526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.175 [2024-10-11 12:06:33.824558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.175 qpair failed and we were unable to recover it. 00:29:31.176 [2024-10-11 12:06:33.824779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-10-11 12:06:33.824812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-10-11 12:06:33.825174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-10-11 12:06:33.825209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-10-11 12:06:33.825567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-10-11 12:06:33.825601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-10-11 12:06:33.825999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-10-11 12:06:33.826034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-10-11 12:06:33.826291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-10-11 12:06:33.826325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-10-11 12:06:33.826536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-10-11 12:06:33.826566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-10-11 12:06:33.826813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-10-11 12:06:33.826846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 
00:29:31.176 [2024-10-11 12:06:33.827276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-10-11 12:06:33.827310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-10-11 12:06:33.827478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-10-11 12:06:33.827509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-10-11 12:06:33.827930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-10-11 12:06:33.827964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-10-11 12:06:33.828083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-10-11 12:06:33.828116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-10-11 12:06:33.828501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-10-11 12:06:33.828536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-10-11 12:06:33.828887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-10-11 12:06:33.828919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-10-11 12:06:33.829196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-10-11 12:06:33.829231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-10-11 12:06:33.829357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-10-11 12:06:33.829389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-10-11 12:06:33.829489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-10-11 12:06:33.829520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-10-11 12:06:33.829733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-10-11 12:06:33.829764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 
00:29:31.176 [2024-10-11 12:06:33.830124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-10-11 12:06:33.830159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-10-11 12:06:33.830555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-10-11 12:06:33.830589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-10-11 12:06:33.830957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-10-11 12:06:33.830991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-10-11 12:06:33.831353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-10-11 12:06:33.831388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-10-11 12:06:33.831633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-10-11 12:06:33.831666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-10-11 12:06:33.832026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-10-11 12:06:33.832059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-10-11 12:06:33.832269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-10-11 12:06:33.832302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-10-11 12:06:33.832554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-10-11 12:06:33.832590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-10-11 12:06:33.832856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-10-11 12:06:33.832892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-10-11 12:06:33.833281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-10-11 12:06:33.833316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 
00:29:31.176 [2024-10-11 12:06:33.833665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-10-11 12:06:33.833699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-10-11 12:06:33.833904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-10-11 12:06:33.833937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-10-11 12:06:33.834313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-10-11 12:06:33.834347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-10-11 12:06:33.834697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-10-11 12:06:33.834731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-10-11 12:06:33.835093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-10-11 12:06:33.835128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-10-11 12:06:33.835519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-10-11 12:06:33.835553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-10-11 12:06:33.835922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-10-11 12:06:33.835956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-10-11 12:06:33.836321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-10-11 12:06:33.836356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-10-11 12:06:33.836721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-10-11 12:06:33.836755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-10-11 12:06:33.837118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-10-11 12:06:33.837153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 
00:29:31.176 [2024-10-11 12:06:33.837507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-10-11 12:06:33.837547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-10-11 12:06:33.837784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-10-11 12:06:33.837818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-10-11 12:06:33.838178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-10-11 12:06:33.838213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.177 [2024-10-11 12:06:33.838575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.177 [2024-10-11 12:06:33.838608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.177 qpair failed and we were unable to recover it. 00:29:31.177 [2024-10-11 12:06:33.838857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.177 [2024-10-11 12:06:33.838891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.177 qpair failed and we were unable to recover it. 00:29:31.177 [2024-10-11 12:06:33.839243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.177 [2024-10-11 12:06:33.839276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.177 qpair failed and we were unable to recover it. 00:29:31.177 [2024-10-11 12:06:33.839486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.177 [2024-10-11 12:06:33.839520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.177 qpair failed and we were unable to recover it. 00:29:31.177 [2024-10-11 12:06:33.839742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.177 [2024-10-11 12:06:33.839775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.177 qpair failed and we were unable to recover it. 00:29:31.177 [2024-10-11 12:06:33.840022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.177 [2024-10-11 12:06:33.840055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.177 qpair failed and we were unable to recover it. 00:29:31.177 [2024-10-11 12:06:33.840412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.177 [2024-10-11 12:06:33.840445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.177 qpair failed and we were unable to recover it. 
00:29:31.177 [2024-10-11 12:06:33.840672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.177 [2024-10-11 12:06:33.840707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.177 qpair failed and we were unable to recover it. 00:29:31.177 [2024-10-11 12:06:33.840958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.177 [2024-10-11 12:06:33.840990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.177 qpair failed and we were unable to recover it. 00:29:31.177 [2024-10-11 12:06:33.841197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.177 [2024-10-11 12:06:33.841231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.177 qpair failed and we were unable to recover it. 00:29:31.177 [2024-10-11 12:06:33.841588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.177 [2024-10-11 12:06:33.841622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.177 qpair failed and we were unable to recover it. 00:29:31.177 [2024-10-11 12:06:33.841978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.177 [2024-10-11 12:06:33.842013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.177 qpair failed and we were unable to recover it. 00:29:31.177 [2024-10-11 12:06:33.842381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.177 [2024-10-11 12:06:33.842416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.177 qpair failed and we were unable to recover it. 00:29:31.177 [2024-10-11 12:06:33.842775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.177 [2024-10-11 12:06:33.842808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.177 qpair failed and we were unable to recover it. 00:29:31.177 [2024-10-11 12:06:33.843060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.177 [2024-10-11 12:06:33.843116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.177 qpair failed and we were unable to recover it. 00:29:31.177 [2024-10-11 12:06:33.843473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.177 [2024-10-11 12:06:33.843506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.177 qpair failed and we were unable to recover it. 00:29:31.177 [2024-10-11 12:06:33.843859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.177 [2024-10-11 12:06:33.843893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.177 qpair failed and we were unable to recover it. 
00:29:31.177 [2024-10-11 12:06:33.844242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.177 [2024-10-11 12:06:33.844277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.177 qpair failed and we were unable to recover it. 00:29:31.177 [2024-10-11 12:06:33.844615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.177 [2024-10-11 12:06:33.844649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.177 qpair failed and we were unable to recover it. 00:29:31.177 [2024-10-11 12:06:33.845013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.177 [2024-10-11 12:06:33.845046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.177 qpair failed and we were unable to recover it. 00:29:31.450 [2024-10-11 12:06:33.845415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.450 [2024-10-11 12:06:33.845450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.450 qpair failed and we were unable to recover it. 00:29:31.450 [2024-10-11 12:06:33.845695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.450 [2024-10-11 12:06:33.845732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.450 qpair failed and we were unable to recover it. 00:29:31.450 [2024-10-11 12:06:33.845946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.450 [2024-10-11 12:06:33.845980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.450 qpair failed and we were unable to recover it. 00:29:31.450 [2024-10-11 12:06:33.846302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.450 [2024-10-11 12:06:33.846336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.450 qpair failed and we were unable to recover it. 00:29:31.450 [2024-10-11 12:06:33.846587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.450 [2024-10-11 12:06:33.846621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.450 qpair failed and we were unable to recover it. 00:29:31.450 [2024-10-11 12:06:33.846976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.450 [2024-10-11 12:06:33.847011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.450 qpair failed and we were unable to recover it. 00:29:31.450 [2024-10-11 12:06:33.847231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.450 [2024-10-11 12:06:33.847270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.450 qpair failed and we were unable to recover it. 
00:29:31.450 [2024-10-11 12:06:33.847501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.450 [2024-10-11 12:06:33.847534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.450 qpair failed and we were unable to recover it. 00:29:31.450 [2024-10-11 12:06:33.847924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.450 [2024-10-11 12:06:33.847957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.450 qpair failed and we were unable to recover it. 00:29:31.450 [2024-10-11 12:06:33.848330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.450 [2024-10-11 12:06:33.848363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.450 qpair failed and we were unable to recover it. 00:29:31.450 [2024-10-11 12:06:33.848734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.450 [2024-10-11 12:06:33.848767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.450 qpair failed and we were unable to recover it. 00:29:31.450 [2024-10-11 12:06:33.849128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.450 [2024-10-11 12:06:33.849163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.450 qpair failed and we were unable to recover it. 00:29:31.450 [2024-10-11 12:06:33.849533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.450 [2024-10-11 12:06:33.849565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.450 qpair failed and we were unable to recover it. 00:29:31.450 [2024-10-11 12:06:33.849921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.450 [2024-10-11 12:06:33.849954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.450 qpair failed and we were unable to recover it. 00:29:31.450 [2024-10-11 12:06:33.850342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.450 [2024-10-11 12:06:33.850378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.450 qpair failed and we were unable to recover it. 00:29:31.450 [2024-10-11 12:06:33.850736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.450 [2024-10-11 12:06:33.850772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.450 qpair failed and we were unable to recover it. 00:29:31.450 [2024-10-11 12:06:33.851130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.450 [2024-10-11 12:06:33.851167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.450 qpair failed and we were unable to recover it. 
00:29:31.450 [2024-10-11 12:06:33.851577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.450 [2024-10-11 12:06:33.851616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.450 qpair failed and we were unable to recover it. 00:29:31.450 [2024-10-11 12:06:33.851972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.450 [2024-10-11 12:06:33.852005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.450 qpair failed and we were unable to recover it. 00:29:31.450 [2024-10-11 12:06:33.852377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.450 [2024-10-11 12:06:33.852413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.450 qpair failed and we were unable to recover it. 00:29:31.450 [2024-10-11 12:06:33.852774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.450 [2024-10-11 12:06:33.852807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.450 qpair failed and we were unable to recover it. 00:29:31.450 [2024-10-11 12:06:33.853180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.450 [2024-10-11 12:06:33.853214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.450 qpair failed and we were unable to recover it. 00:29:31.450 [2024-10-11 12:06:33.853624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.450 [2024-10-11 12:06:33.853658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.450 qpair failed and we were unable to recover it. 00:29:31.450 [2024-10-11 12:06:33.854016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.450 [2024-10-11 12:06:33.854050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.450 qpair failed and we were unable to recover it. 00:29:31.450 [2024-10-11 12:06:33.854429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.450 [2024-10-11 12:06:33.854461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.450 qpair failed and we were unable to recover it. 00:29:31.450 [2024-10-11 12:06:33.854680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.450 [2024-10-11 12:06:33.854713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.450 qpair failed and we were unable to recover it. 00:29:31.450 [2024-10-11 12:06:33.855117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.450 [2024-10-11 12:06:33.855153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.450 qpair failed and we were unable to recover it. 
00:29:31.450 [2024-10-11 12:06:33.855572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.451 [2024-10-11 12:06:33.855605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.451 qpair failed and we were unable to recover it. 00:29:31.451 [2024-10-11 12:06:33.855829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.451 [2024-10-11 12:06:33.855868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.451 qpair failed and we were unable to recover it. 00:29:31.451 [2024-10-11 12:06:33.856228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.451 [2024-10-11 12:06:33.856263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.451 qpair failed and we were unable to recover it. 00:29:31.451 [2024-10-11 12:06:33.856630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.451 [2024-10-11 12:06:33.856663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.451 qpair failed and we were unable to recover it. 00:29:31.451 [2024-10-11 12:06:33.856911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.451 [2024-10-11 12:06:33.856944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.451 qpair failed and we were unable to recover it. 00:29:31.451 [2024-10-11 12:06:33.857196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.451 [2024-10-11 12:06:33.857231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.451 qpair failed and we were unable to recover it. 00:29:31.451 [2024-10-11 12:06:33.857595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.451 [2024-10-11 12:06:33.857628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.451 qpair failed and we were unable to recover it. 00:29:31.451 [2024-10-11 12:06:33.858005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.451 [2024-10-11 12:06:33.858039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.451 qpair failed and we were unable to recover it. 00:29:31.451 [2024-10-11 12:06:33.858471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.451 [2024-10-11 12:06:33.858506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.451 qpair failed and we were unable to recover it. 00:29:31.451 [2024-10-11 12:06:33.858857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.451 [2024-10-11 12:06:33.858890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.451 qpair failed and we were unable to recover it. 
00:29:31.451 [2024-10-11 12:06:33.859114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.451 [2024-10-11 12:06:33.859154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.451 qpair failed and we were unable to recover it. 00:29:31.451 [2024-10-11 12:06:33.859440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.451 [2024-10-11 12:06:33.859473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.451 qpair failed and we were unable to recover it. 00:29:31.451 [2024-10-11 12:06:33.859715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.451 [2024-10-11 12:06:33.859751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.451 qpair failed and we were unable to recover it. 00:29:31.451 [2024-10-11 12:06:33.859991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.451 [2024-10-11 12:06:33.860026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.451 qpair failed and we were unable to recover it. 00:29:31.451 [2024-10-11 12:06:33.860397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.451 [2024-10-11 12:06:33.860431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.451 qpair failed and we were unable to recover it. 00:29:31.451 [2024-10-11 12:06:33.860789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.451 [2024-10-11 12:06:33.860825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.451 qpair failed and we were unable to recover it. 00:29:31.451 [2024-10-11 12:06:33.861034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.451 [2024-10-11 12:06:33.861072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.451 qpair failed and we were unable to recover it. 00:29:31.451 [2024-10-11 12:06:33.861258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.451 [2024-10-11 12:06:33.861296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.451 qpair failed and we were unable to recover it. 00:29:31.451 [2024-10-11 12:06:33.861535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.451 [2024-10-11 12:06:33.861570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.451 qpair failed and we were unable to recover it. 00:29:31.451 [2024-10-11 12:06:33.861673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.451 [2024-10-11 12:06:33.861705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.451 qpair failed and we were unable to recover it. 
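Every record above fails the same way: errno 111 is ECONNREFUSED on Linux, meaning nothing is accepting TCP connections on 10.0.0.2:4420 while the initiator keeps retrying the qpair. Below is a minimal illustrative sketch in plain POSIX C (not SPDK code) of a blocking connect() that reports the same errno when no listener is present; only the address and port are taken from the log, everything else is a made-up example.

/* Illustrative sketch only (not SPDK code): a blocking TCP connect() to the
 * address/port seen in the log. With no listener on 10.0.0.2:4420 this
 * typically fails with errno 111, ECONNREFUSED ("Connection refused"). */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
    if (inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr) != 1) {
        fprintf(stderr, "bad address\n");
        close(fd);
        return 1;
    }

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener this prints something like:
         * connect() failed, errno = 111 (Connection refused) */
        fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                errno, strerror(errno));
    }

    close(fd);
    return 0;
}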
00:29:31.451 [2024-10-11 12:06:33.861890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.451 [2024-10-11 12:06:33.861989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.451 qpair failed and we were unable to recover it. 00:29:31.451 [2024-10-11 12:06:33.862571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.451 [2024-10-11 12:06:33.862681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.451 qpair failed and we were unable to recover it. 00:29:31.451 [2024-10-11 12:06:33.862859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.451 [2024-10-11 12:06:33.862899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.451 qpair failed and we were unable to recover it. 00:29:31.451 [2024-10-11 12:06:33.863137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.451 [2024-10-11 12:06:33.863174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.451 qpair failed and we were unable to recover it. 00:29:31.451 [2024-10-11 12:06:33.863464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.451 [2024-10-11 12:06:33.863501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.451 qpair failed and we were unable to recover it. 00:29:31.451 [2024-10-11 12:06:33.863763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.451 [2024-10-11 12:06:33.863796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.451 qpair failed and we were unable to recover it. 00:29:31.451 [2024-10-11 12:06:33.864155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.451 [2024-10-11 12:06:33.864191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.451 qpair failed and we were unable to recover it. 00:29:31.451 [2024-10-11 12:06:33.864450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.451 [2024-10-11 12:06:33.864485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.451 qpair failed and we were unable to recover it. 00:29:31.451 [2024-10-11 12:06:33.864731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.451 [2024-10-11 12:06:33.864764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.451 qpair failed and we were unable to recover it. 00:29:31.451 [2024-10-11 12:06:33.865105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.451 [2024-10-11 12:06:33.865140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.451 qpair failed and we were unable to recover it. 
00:29:31.451 [2024-10-11 12:06:33.865501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.451 [2024-10-11 12:06:33.865534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.451 qpair failed and we were unable to recover it. 00:29:31.451 [2024-10-11 12:06:33.865952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.451 [2024-10-11 12:06:33.865986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.451 qpair failed and we were unable to recover it. 00:29:31.451 [2024-10-11 12:06:33.866368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.451 [2024-10-11 12:06:33.866403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.451 qpair failed and we were unable to recover it. 00:29:31.451 [2024-10-11 12:06:33.866622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.451 [2024-10-11 12:06:33.866656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.451 qpair failed and we were unable to recover it. 00:29:31.451 [2024-10-11 12:06:33.867079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.451 [2024-10-11 12:06:33.867114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.451 qpair failed and we were unable to recover it. 00:29:31.451 [2024-10-11 12:06:33.867334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.451 [2024-10-11 12:06:33.867367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.451 qpair failed and we were unable to recover it. 00:29:31.451 [2024-10-11 12:06:33.867587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.451 [2024-10-11 12:06:33.867620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.451 qpair failed and we were unable to recover it. 00:29:31.451 [2024-10-11 12:06:33.867850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.451 [2024-10-11 12:06:33.867887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.451 qpair failed and we were unable to recover it. 00:29:31.451 [2024-10-11 12:06:33.868141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.451 [2024-10-11 12:06:33.868177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.451 qpair failed and we were unable to recover it. 00:29:31.451 [2024-10-11 12:06:33.868556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.451 [2024-10-11 12:06:33.868589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.451 qpair failed and we were unable to recover it. 
00:29:31.451 [2024-10-11 12:06:33.868827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.452 [2024-10-11 12:06:33.868862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.452 qpair failed and we were unable to recover it. 00:29:31.452 [2024-10-11 12:06:33.869221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.452 [2024-10-11 12:06:33.869255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.452 qpair failed and we were unable to recover it. 00:29:31.452 [2024-10-11 12:06:33.869507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.452 [2024-10-11 12:06:33.869540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.452 qpair failed and we were unable to recover it. 00:29:31.452 [2024-10-11 12:06:33.869943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.452 [2024-10-11 12:06:33.869977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.452 qpair failed and we were unable to recover it. 00:29:31.452 [2024-10-11 12:06:33.870334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.452 [2024-10-11 12:06:33.870385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.452 qpair failed and we were unable to recover it. 00:29:31.452 [2024-10-11 12:06:33.870623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.452 [2024-10-11 12:06:33.870656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.452 qpair failed and we were unable to recover it. 00:29:31.452 [2024-10-11 12:06:33.871053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.452 [2024-10-11 12:06:33.871103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.452 qpair failed and we were unable to recover it. 00:29:31.452 [2024-10-11 12:06:33.871479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.452 [2024-10-11 12:06:33.871512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.452 qpair failed and we were unable to recover it. 00:29:31.452 [2024-10-11 12:06:33.871872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.452 [2024-10-11 12:06:33.871904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.452 qpair failed and we were unable to recover it. 00:29:31.452 [2024-10-11 12:06:33.872289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.452 [2024-10-11 12:06:33.872324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.452 qpair failed and we were unable to recover it. 
00:29:31.452 [2024-10-11 12:06:33.872683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.452 [2024-10-11 12:06:33.872716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.452 qpair failed and we were unable to recover it. 00:29:31.452 [2024-10-11 12:06:33.872930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.452 [2024-10-11 12:06:33.872965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.452 qpair failed and we were unable to recover it. 00:29:31.452 [2024-10-11 12:06:33.873185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.452 [2024-10-11 12:06:33.873219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.452 qpair failed and we were unable to recover it. 00:29:31.452 [2024-10-11 12:06:33.873603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.452 [2024-10-11 12:06:33.873635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.452 qpair failed and we were unable to recover it. 00:29:31.452 [2024-10-11 12:06:33.873996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.452 [2024-10-11 12:06:33.874028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.452 qpair failed and we were unable to recover it. 00:29:31.452 [2024-10-11 12:06:33.874442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.452 [2024-10-11 12:06:33.874477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.452 qpair failed and we were unable to recover it. 00:29:31.452 [2024-10-11 12:06:33.874828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.452 [2024-10-11 12:06:33.874860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.452 qpair failed and we were unable to recover it. 00:29:31.452 [2024-10-11 12:06:33.875238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.452 [2024-10-11 12:06:33.875272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.452 qpair failed and we were unable to recover it. 00:29:31.452 [2024-10-11 12:06:33.875643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.452 [2024-10-11 12:06:33.875677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.452 qpair failed and we were unable to recover it. 00:29:31.452 [2024-10-11 12:06:33.876035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.452 [2024-10-11 12:06:33.876081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.452 qpair failed and we were unable to recover it. 
00:29:31.452 [2024-10-11 12:06:33.876471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.452 [2024-10-11 12:06:33.876504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.452 qpair failed and we were unable to recover it. 00:29:31.452 [2024-10-11 12:06:33.876872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.452 [2024-10-11 12:06:33.876907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.452 qpair failed and we were unable to recover it. 00:29:31.452 [2024-10-11 12:06:33.877287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.452 [2024-10-11 12:06:33.877322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.452 qpair failed and we were unable to recover it. 00:29:31.452 [2024-10-11 12:06:33.877656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.452 [2024-10-11 12:06:33.877688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.452 qpair failed and we were unable to recover it. 00:29:31.452 [2024-10-11 12:06:33.878092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.452 [2024-10-11 12:06:33.878126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.452 qpair failed and we were unable to recover it. 00:29:31.452 [2024-10-11 12:06:33.878477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.452 [2024-10-11 12:06:33.878509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.452 qpair failed and we were unable to recover it. 00:29:31.452 [2024-10-11 12:06:33.878915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.452 [2024-10-11 12:06:33.878947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.452 qpair failed and we were unable to recover it. 00:29:31.452 [2024-10-11 12:06:33.879320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.452 [2024-10-11 12:06:33.879352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.452 qpair failed and we were unable to recover it. 00:29:31.452 [2024-10-11 12:06:33.879575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.452 [2024-10-11 12:06:33.879606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.452 qpair failed and we were unable to recover it. 00:29:31.452 [2024-10-11 12:06:33.880004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.452 [2024-10-11 12:06:33.880037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.452 qpair failed and we were unable to recover it. 
00:29:31.452 [2024-10-11 12:06:33.880410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.452 [2024-10-11 12:06:33.880442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.452 qpair failed and we were unable to recover it. 00:29:31.452 [2024-10-11 12:06:33.880703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.452 [2024-10-11 12:06:33.880741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.452 qpair failed and we were unable to recover it. 00:29:31.452 [2024-10-11 12:06:33.881095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.452 [2024-10-11 12:06:33.881128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.452 qpair failed and we were unable to recover it. 00:29:31.452 [2024-10-11 12:06:33.881404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.452 [2024-10-11 12:06:33.881436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.452 qpair failed and we were unable to recover it. 00:29:31.452 [2024-10-11 12:06:33.881797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.452 [2024-10-11 12:06:33.881829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.452 qpair failed and we were unable to recover it. 00:29:31.452 [2024-10-11 12:06:33.882196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.452 [2024-10-11 12:06:33.882230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.452 qpair failed and we were unable to recover it. 00:29:31.452 [2024-10-11 12:06:33.882437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.452 [2024-10-11 12:06:33.882468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.452 qpair failed and we were unable to recover it. 00:29:31.452 [2024-10-11 12:06:33.882698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.452 [2024-10-11 12:06:33.882729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.452 qpair failed and we were unable to recover it. 00:29:31.452 [2024-10-11 12:06:33.883097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.452 [2024-10-11 12:06:33.883131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.452 qpair failed and we were unable to recover it. 00:29:31.452 [2024-10-11 12:06:33.883501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.452 [2024-10-11 12:06:33.883534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.452 qpair failed and we were unable to recover it. 
00:29:31.452 [2024-10-11 12:06:33.883740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.452 [2024-10-11 12:06:33.883773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.453 qpair failed and we were unable to recover it. 00:29:31.453 [2024-10-11 12:06:33.884021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.453 [2024-10-11 12:06:33.884053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.453 qpair failed and we were unable to recover it. 00:29:31.453 [2024-10-11 12:06:33.884386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.453 [2024-10-11 12:06:33.884419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.453 qpair failed and we were unable to recover it. 00:29:31.453 [2024-10-11 12:06:33.884791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.453 [2024-10-11 12:06:33.884825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.453 qpair failed and we were unable to recover it. 00:29:31.453 [2024-10-11 12:06:33.885200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.453 [2024-10-11 12:06:33.885234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.453 qpair failed and we were unable to recover it. 00:29:31.453 [2024-10-11 12:06:33.885469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.453 [2024-10-11 12:06:33.885502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.453 qpair failed and we were unable to recover it. 00:29:31.453 [2024-10-11 12:06:33.885875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.453 [2024-10-11 12:06:33.885908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.453 qpair failed and we were unable to recover it. 00:29:31.453 [2024-10-11 12:06:33.886288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.453 [2024-10-11 12:06:33.886322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.453 qpair failed and we were unable to recover it. 00:29:31.453 [2024-10-11 12:06:33.886571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.453 [2024-10-11 12:06:33.886603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.453 qpair failed and we were unable to recover it. 00:29:31.453 [2024-10-11 12:06:33.886989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.453 [2024-10-11 12:06:33.887022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.453 qpair failed and we were unable to recover it. 
00:29:31.453 [2024-10-11 12:06:33.887429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.453 [2024-10-11 12:06:33.887463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.453 qpair failed and we were unable to recover it. 00:29:31.453 [2024-10-11 12:06:33.887823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.453 [2024-10-11 12:06:33.887854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.453 qpair failed and we were unable to recover it. 00:29:31.453 [2024-10-11 12:06:33.888212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.453 [2024-10-11 12:06:33.888245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.453 qpair failed and we were unable to recover it. 00:29:31.453 [2024-10-11 12:06:33.888578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.453 [2024-10-11 12:06:33.888610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.453 qpair failed and we were unable to recover it. 00:29:31.453 [2024-10-11 12:06:33.888972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.453 [2024-10-11 12:06:33.889004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.453 qpair failed and we were unable to recover it. 00:29:31.453 [2024-10-11 12:06:33.889366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.453 [2024-10-11 12:06:33.889400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.453 qpair failed and we were unable to recover it. 00:29:31.453 [2024-10-11 12:06:33.889760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.453 [2024-10-11 12:06:33.889791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.453 qpair failed and we were unable to recover it. 00:29:31.453 [2024-10-11 12:06:33.890153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.453 [2024-10-11 12:06:33.890187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.453 qpair failed and we were unable to recover it. 00:29:31.453 [2024-10-11 12:06:33.890583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.453 [2024-10-11 12:06:33.890615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.453 qpair failed and we were unable to recover it. 00:29:31.453 [2024-10-11 12:06:33.890975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.453 [2024-10-11 12:06:33.891007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.453 qpair failed and we were unable to recover it. 
00:29:31.453 [2024-10-11 12:06:33.891372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.453 [2024-10-11 12:06:33.891404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.453 qpair failed and we were unable to recover it. 00:29:31.453 [2024-10-11 12:06:33.891778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.453 [2024-10-11 12:06:33.891811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.453 qpair failed and we were unable to recover it. 00:29:31.453 [2024-10-11 12:06:33.892183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.453 [2024-10-11 12:06:33.892217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.453 qpair failed and we were unable to recover it. 00:29:31.453 [2024-10-11 12:06:33.892617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.453 [2024-10-11 12:06:33.892649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.453 qpair failed and we were unable to recover it. 00:29:31.453 [2024-10-11 12:06:33.893005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.453 [2024-10-11 12:06:33.893036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.453 qpair failed and we were unable to recover it. 00:29:31.453 [2024-10-11 12:06:33.893403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.453 [2024-10-11 12:06:33.893436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.453 qpair failed and we were unable to recover it. 00:29:31.453 [2024-10-11 12:06:33.893806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.453 [2024-10-11 12:06:33.893839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.453 qpair failed and we were unable to recover it. 00:29:31.453 [2024-10-11 12:06:33.894084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.453 [2024-10-11 12:06:33.894117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.453 qpair failed and we were unable to recover it. 00:29:31.453 [2024-10-11 12:06:33.894475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.453 [2024-10-11 12:06:33.894508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.453 qpair failed and we were unable to recover it. 00:29:31.453 [2024-10-11 12:06:33.894714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.453 [2024-10-11 12:06:33.894745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.453 qpair failed and we were unable to recover it. 
00:29:31.453 [2024-10-11 12:06:33.895110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.453 [2024-10-11 12:06:33.895143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.453 qpair failed and we were unable to recover it. 00:29:31.453 [2024-10-11 12:06:33.895500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.453 [2024-10-11 12:06:33.895533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.453 qpair failed and we were unable to recover it. 00:29:31.453 [2024-10-11 12:06:33.895759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.453 [2024-10-11 12:06:33.895798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.453 qpair failed and we were unable to recover it. 00:29:31.453 [2024-10-11 12:06:33.896006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.453 [2024-10-11 12:06:33.896039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.453 qpair failed and we were unable to recover it. 00:29:31.453 [2024-10-11 12:06:33.896262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.453 [2024-10-11 12:06:33.896295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.453 qpair failed and we were unable to recover it. 00:29:31.453 [2024-10-11 12:06:33.896529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.453 [2024-10-11 12:06:33.896562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.453 qpair failed and we were unable to recover it. 00:29:31.453 [2024-10-11 12:06:33.896933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.453 [2024-10-11 12:06:33.896965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.453 qpair failed and we were unable to recover it. 00:29:31.453 [2024-10-11 12:06:33.897388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.453 [2024-10-11 12:06:33.897420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.453 qpair failed and we were unable to recover it. 00:29:31.453 [2024-10-11 12:06:33.897577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.453 [2024-10-11 12:06:33.897607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.453 qpair failed and we were unable to recover it. 00:29:31.453 [2024-10-11 12:06:33.897979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.453 [2024-10-11 12:06:33.898011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.453 qpair failed and we were unable to recover it. 
00:29:31.453 [2024-10-11 12:06:33.898415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.453 [2024-10-11 12:06:33.898447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.453 qpair failed and we were unable to recover it. 00:29:31.453 [2024-10-11 12:06:33.898805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.454 [2024-10-11 12:06:33.898837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.454 qpair failed and we were unable to recover it. 00:29:31.454 [2024-10-11 12:06:33.899201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.454 [2024-10-11 12:06:33.899235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.454 qpair failed and we were unable to recover it. 00:29:31.454 [2024-10-11 12:06:33.899604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.454 [2024-10-11 12:06:33.899635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.454 qpair failed and we were unable to recover it. 00:29:31.454 [2024-10-11 12:06:33.899739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.454 [2024-10-11 12:06:33.899768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.454 qpair failed and we were unable to recover it. 00:29:31.454 [2024-10-11 12:06:33.900155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.454 [2024-10-11 12:06:33.900188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.454 qpair failed and we were unable to recover it. 00:29:31.454 [2024-10-11 12:06:33.900599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.454 [2024-10-11 12:06:33.900631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.454 qpair failed and we were unable to recover it. 00:29:31.454 [2024-10-11 12:06:33.900849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.454 [2024-10-11 12:06:33.900879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.454 qpair failed and we were unable to recover it. 00:29:31.454 [2024-10-11 12:06:33.901286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.454 [2024-10-11 12:06:33.901319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.454 qpair failed and we were unable to recover it. 00:29:31.454 [2024-10-11 12:06:33.901583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.454 [2024-10-11 12:06:33.901619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.454 qpair failed and we were unable to recover it. 
00:29:31.454 [2024-10-11 12:06:33.901719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.454 [2024-10-11 12:06:33.901750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.454 qpair failed and we were unable to recover it. 00:29:31.454 [2024-10-11 12:06:33.902093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.454 [2024-10-11 12:06:33.902128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.454 qpair failed and we were unable to recover it. 00:29:31.454 [2024-10-11 12:06:33.902490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.454 [2024-10-11 12:06:33.902522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.454 qpair failed and we were unable to recover it. 00:29:31.454 [2024-10-11 12:06:33.902874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.454 [2024-10-11 12:06:33.902906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.454 qpair failed and we were unable to recover it. 00:29:31.454 [2024-10-11 12:06:33.903331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.454 [2024-10-11 12:06:33.903365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.454 qpair failed and we were unable to recover it. 00:29:31.454 [2024-10-11 12:06:33.903730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.454 [2024-10-11 12:06:33.903762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.454 qpair failed and we were unable to recover it. 00:29:31.454 [2024-10-11 12:06:33.904123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.454 [2024-10-11 12:06:33.904157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.454 qpair failed and we were unable to recover it. 00:29:31.454 [2024-10-11 12:06:33.904405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.454 [2024-10-11 12:06:33.904440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.454 qpair failed and we were unable to recover it. 00:29:31.454 [2024-10-11 12:06:33.904791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.454 [2024-10-11 12:06:33.904824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.454 qpair failed and we were unable to recover it. 00:29:31.454 [2024-10-11 12:06:33.905187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.454 [2024-10-11 12:06:33.905226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.454 qpair failed and we were unable to recover it. 
00:29:31.454 [2024-10-11 12:06:33.905604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.454 [2024-10-11 12:06:33.905636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.454 qpair failed and we were unable to recover it. 00:29:31.454 [2024-10-11 12:06:33.905987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.454 [2024-10-11 12:06:33.906019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.454 qpair failed and we were unable to recover it. 00:29:31.454 [2024-10-11 12:06:33.906381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.454 [2024-10-11 12:06:33.906414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.454 qpair failed and we were unable to recover it. 00:29:31.454 [2024-10-11 12:06:33.906630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.454 [2024-10-11 12:06:33.906663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.454 qpair failed and we were unable to recover it. 00:29:31.454 [2024-10-11 12:06:33.906900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.454 [2024-10-11 12:06:33.906933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.454 qpair failed and we were unable to recover it. 00:29:31.454 [2024-10-11 12:06:33.907303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.454 [2024-10-11 12:06:33.907336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.454 qpair failed and we were unable to recover it. 00:29:31.454 [2024-10-11 12:06:33.907577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.454 [2024-10-11 12:06:33.907608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.454 qpair failed and we were unable to recover it. 00:29:31.454 [2024-10-11 12:06:33.907967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.454 [2024-10-11 12:06:33.907998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.454 qpair failed and we were unable to recover it. 00:29:31.454 [2024-10-11 12:06:33.908203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.454 [2024-10-11 12:06:33.908235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.454 qpair failed and we were unable to recover it. 00:29:31.454 [2024-10-11 12:06:33.908598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.454 [2024-10-11 12:06:33.908630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.454 qpair failed and we were unable to recover it. 
00:29:31.454 [2024-10-11 12:06:33.908991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.454 [2024-10-11 12:06:33.909022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.454 qpair failed and we were unable to recover it. 00:29:31.454 [2024-10-11 12:06:33.909410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.454 [2024-10-11 12:06:33.909444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.454 qpair failed and we were unable to recover it. 00:29:31.454 [2024-10-11 12:06:33.909690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.454 [2024-10-11 12:06:33.909721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.454 qpair failed and we were unable to recover it. 00:29:31.454 [2024-10-11 12:06:33.909956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.454 [2024-10-11 12:06:33.909989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.454 qpair failed and we were unable to recover it. 00:29:31.454 [2024-10-11 12:06:33.910350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.454 [2024-10-11 12:06:33.910381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.454 qpair failed and we were unable to recover it. 00:29:31.454 [2024-10-11 12:06:33.910803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-10-11 12:06:33.910835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-10-11 12:06:33.911211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-10-11 12:06:33.911245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-10-11 12:06:33.911625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-10-11 12:06:33.911657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-10-11 12:06:33.912042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-10-11 12:06:33.912080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-10-11 12:06:33.912432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-10-11 12:06:33.912464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 
00:29:31.455 [2024-10-11 12:06:33.912676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-10-11 12:06:33.912708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-10-11 12:06:33.912942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-10-11 12:06:33.912972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-10-11 12:06:33.913173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-10-11 12:06:33.913207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-10-11 12:06:33.913631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-10-11 12:06:33.913662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-10-11 12:06:33.914018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-10-11 12:06:33.914051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-10-11 12:06:33.914408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-10-11 12:06:33.914440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-10-11 12:06:33.914551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-10-11 12:06:33.914583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-10-11 12:06:33.914941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-10-11 12:06:33.914974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-10-11 12:06:33.915201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-10-11 12:06:33.915233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-10-11 12:06:33.915494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-10-11 12:06:33.915527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 
00:29:31.455 [2024-10-11 12:06:33.915747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-10-11 12:06:33.915781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-10-11 12:06:33.916007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-10-11 12:06:33.916040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-10-11 12:06:33.916423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-10-11 12:06:33.916458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-10-11 12:06:33.916824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-10-11 12:06:33.916857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-10-11 12:06:33.917075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-10-11 12:06:33.917108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-10-11 12:06:33.917464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-10-11 12:06:33.917496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-10-11 12:06:33.917878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-10-11 12:06:33.917913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-10-11 12:06:33.918275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-10-11 12:06:33.918307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-10-11 12:06:33.918678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-10-11 12:06:33.918710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-10-11 12:06:33.918962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-10-11 12:06:33.918996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 
00:29:31.455 [2024-10-11 12:06:33.919369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-10-11 12:06:33.919408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-10-11 12:06:33.919764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-10-11 12:06:33.919796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-10-11 12:06:33.920171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-10-11 12:06:33.920204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-10-11 12:06:33.920553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-10-11 12:06:33.920585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-10-11 12:06:33.920963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-10-11 12:06:33.920996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-10-11 12:06:33.921337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-10-11 12:06:33.921370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-10-11 12:06:33.921578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-10-11 12:06:33.921610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-10-11 12:06:33.921863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-10-11 12:06:33.921895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-10-11 12:06:33.922246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-10-11 12:06:33.922280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-10-11 12:06:33.922648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-10-11 12:06:33.922682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 
00:29:31.455 [2024-10-11 12:06:33.923054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-10-11 12:06:33.923099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-10-11 12:06:33.923480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-10-11 12:06:33.923513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-10-11 12:06:33.923877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-10-11 12:06:33.923910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-10-11 12:06:33.924156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-10-11 12:06:33.924189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-10-11 12:06:33.924547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-10-11 12:06:33.924579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-10-11 12:06:33.924931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-10-11 12:06:33.924964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.456 [2024-10-11 12:06:33.925343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-10-11 12:06:33.925376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-10-11 12:06:33.925584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-10-11 12:06:33.925614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-10-11 12:06:33.925970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-10-11 12:06:33.926003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-10-11 12:06:33.926363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-10-11 12:06:33.926395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 
00:29:31.456 [2024-10-11 12:06:33.926764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-10-11 12:06:33.926797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-10-11 12:06:33.927151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-10-11 12:06:33.927183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-10-11 12:06:33.927396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-10-11 12:06:33.927428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-10-11 12:06:33.927786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-10-11 12:06:33.927819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-10-11 12:06:33.928172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-10-11 12:06:33.928206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-10-11 12:06:33.928583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-10-11 12:06:33.928614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-10-11 12:06:33.928977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-10-11 12:06:33.929008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-10-11 12:06:33.929376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-10-11 12:06:33.929414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-10-11 12:06:33.929769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-10-11 12:06:33.929802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-10-11 12:06:33.930188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-10-11 12:06:33.930221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 
00:29:31.456 [2024-10-11 12:06:33.930438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-10-11 12:06:33.930468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-10-11 12:06:33.930825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-10-11 12:06:33.930857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-10-11 12:06:33.931220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-10-11 12:06:33.931253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-10-11 12:06:33.931625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-10-11 12:06:33.931658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-10-11 12:06:33.932029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-10-11 12:06:33.932061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-10-11 12:06:33.932374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-10-11 12:06:33.932407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-10-11 12:06:33.932775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-10-11 12:06:33.932807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-10-11 12:06:33.933022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-10-11 12:06:33.933052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-10-11 12:06:33.933176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-10-11 12:06:33.933209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-10-11 12:06:33.933733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-10-11 12:06:33.933839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 
00:29:31.456 [2024-10-11 12:06:33.934380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-10-11 12:06:33.934482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-10-11 12:06:33.934921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-10-11 12:06:33.934962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-10-11 12:06:33.935406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-10-11 12:06:33.935512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-10-11 12:06:33.935799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-10-11 12:06:33.935840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-10-11 12:06:33.936005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-10-11 12:06:33.936038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-10-11 12:06:33.936291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-10-11 12:06:33.936323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-10-11 12:06:33.936536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-10-11 12:06:33.936568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-10-11 12:06:33.936854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-10-11 12:06:33.936893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-10-11 12:06:33.937144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-10-11 12:06:33.937179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-10-11 12:06:33.937429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-10-11 12:06:33.937460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 
00:29:31.456 [2024-10-11 12:06:33.937709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-10-11 12:06:33.937744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-10-11 12:06:33.937986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-10-11 12:06:33.938019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-10-11 12:06:33.938416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-10-11 12:06:33.938450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-10-11 12:06:33.938668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-10-11 12:06:33.938699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-10-11 12:06:33.939093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-10-11 12:06:33.939140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-10-11 12:06:33.939518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-10-11 12:06:33.939552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-10-11 12:06:33.939923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-10-11 12:06:33.939955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-10-11 12:06:33.940330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-10-11 12:06:33.940365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-10-11 12:06:33.940723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-10-11 12:06:33.940756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-10-11 12:06:33.941117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-10-11 12:06:33.941151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 
00:29:31.457 [2024-10-11 12:06:33.941523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-10-11 12:06:33.941556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-10-11 12:06:33.941910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-10-11 12:06:33.941942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-10-11 12:06:33.942175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-10-11 12:06:33.942208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-10-11 12:06:33.942327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-10-11 12:06:33.942361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-10-11 12:06:33.942707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-10-11 12:06:33.942739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-10-11 12:06:33.943093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-10-11 12:06:33.943126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-10-11 12:06:33.943491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-10-11 12:06:33.943524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-10-11 12:06:33.943883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-10-11 12:06:33.943916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-10-11 12:06:33.944012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-10-11 12:06:33.944041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-10-11 12:06:33.944417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-10-11 12:06:33.944451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 
00:29:31.457 [2024-10-11 12:06:33.944808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-10-11 12:06:33.944841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-10-11 12:06:33.945091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-10-11 12:06:33.945124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-10-11 12:06:33.945481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-10-11 12:06:33.945518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-10-11 12:06:33.945874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-10-11 12:06:33.945907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-10-11 12:06:33.946282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-10-11 12:06:33.946315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-10-11 12:06:33.946571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-10-11 12:06:33.946604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-10-11 12:06:33.946973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-10-11 12:06:33.947005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-10-11 12:06:33.947398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-10-11 12:06:33.947431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-10-11 12:06:33.947797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-10-11 12:06:33.947830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-10-11 12:06:33.948192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-10-11 12:06:33.948230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 
00:29:31.457 [2024-10-11 12:06:33.948596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-10-11 12:06:33.948628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-10-11 12:06:33.948994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-10-11 12:06:33.949028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-10-11 12:06:33.949421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-10-11 12:06:33.949455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-10-11 12:06:33.949815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-10-11 12:06:33.949848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-10-11 12:06:33.950220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-10-11 12:06:33.950253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-10-11 12:06:33.950610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-10-11 12:06:33.950642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-10-11 12:06:33.950888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-10-11 12:06:33.950920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-10-11 12:06:33.951292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-10-11 12:06:33.951325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-10-11 12:06:33.951675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-10-11 12:06:33.951708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-10-11 12:06:33.952078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-10-11 12:06:33.952113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 
00:29:31.457 [2024-10-11 12:06:33.952477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-10-11 12:06:33.952516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-10-11 12:06:33.952878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-10-11 12:06:33.952910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-10-11 12:06:33.953286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-10-11 12:06:33.953320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-10-11 12:06:33.953673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-10-11 12:06:33.953707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-10-11 12:06:33.954079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-10-11 12:06:33.954119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-10-11 12:06:33.954492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-10-11 12:06:33.954525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.458 [2024-10-11 12:06:33.954775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-10-11 12:06:33.954808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-10-11 12:06:33.955184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-10-11 12:06:33.955217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-10-11 12:06:33.955592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-10-11 12:06:33.955625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-10-11 12:06:33.955984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-10-11 12:06:33.956017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 
00:29:31.458 [2024-10-11 12:06:33.956391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-10-11 12:06:33.956425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-10-11 12:06:33.956634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-10-11 12:06:33.956666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-10-11 12:06:33.956905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-10-11 12:06:33.956936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-10-11 12:06:33.957243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-10-11 12:06:33.957277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-10-11 12:06:33.957652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-10-11 12:06:33.957685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-10-11 12:06:33.958057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-10-11 12:06:33.958099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-10-11 12:06:33.958427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-10-11 12:06:33.958460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-10-11 12:06:33.958817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-10-11 12:06:33.958849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-10-11 12:06:33.959217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-10-11 12:06:33.959251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-10-11 12:06:33.959614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-10-11 12:06:33.959645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 
00:29:31.458 [2024-10-11 12:06:33.960014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-10-11 12:06:33.960046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-10-11 12:06:33.960412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-10-11 12:06:33.960445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-10-11 12:06:33.960806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-10-11 12:06:33.960839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-10-11 12:06:33.961074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-10-11 12:06:33.961107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-10-11 12:06:33.961467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-10-11 12:06:33.961500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-10-11 12:06:33.961889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-10-11 12:06:33.961921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-10-11 12:06:33.962292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-10-11 12:06:33.962328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-10-11 12:06:33.962689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-10-11 12:06:33.962721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-10-11 12:06:33.962942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-10-11 12:06:33.962974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-10-11 12:06:33.963326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-10-11 12:06:33.963360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 
00:29:31.458 [2024-10-11 12:06:33.963725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-10-11 12:06:33.963757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-10-11 12:06:33.964116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-10-11 12:06:33.964150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-10-11 12:06:33.964376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-10-11 12:06:33.964410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-10-11 12:06:33.964777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-10-11 12:06:33.964811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-10-11 12:06:33.965159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-10-11 12:06:33.965192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-10-11 12:06:33.965554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-10-11 12:06:33.965587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-10-11 12:06:33.965955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-10-11 12:06:33.965987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-10-11 12:06:33.966204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-10-11 12:06:33.966236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-10-11 12:06:33.966606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-10-11 12:06:33.966639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-10-11 12:06:33.966850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-10-11 12:06:33.966881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 
00:29:31.458 [2024-10-11 12:06:33.967244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-10-11 12:06:33.967277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-10-11 12:06:33.967644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-10-11 12:06:33.967679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-10-11 12:06:33.968043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-10-11 12:06:33.968096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-10-11 12:06:33.968446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-10-11 12:06:33.968478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.459 [2024-10-11 12:06:33.968686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.459 [2024-10-11 12:06:33.968729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.459 qpair failed and we were unable to recover it. 00:29:31.459 [2024-10-11 12:06:33.969091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.459 [2024-10-11 12:06:33.969125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.459 qpair failed and we were unable to recover it. 00:29:31.459 [2024-10-11 12:06:33.969488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.459 [2024-10-11 12:06:33.969520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.459 qpair failed and we were unable to recover it. 00:29:31.459 [2024-10-11 12:06:33.969760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.459 [2024-10-11 12:06:33.969793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.459 qpair failed and we were unable to recover it. 00:29:31.459 [2024-10-11 12:06:33.970044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.459 [2024-10-11 12:06:33.970089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.459 qpair failed and we were unable to recover it. 00:29:31.459 [2024-10-11 12:06:33.970342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.459 [2024-10-11 12:06:33.970373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.459 qpair failed and we were unable to recover it. 
00:29:31.459 [2024-10-11 12:06:33.970589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.459 [2024-10-11 12:06:33.970622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.459 qpair failed and we were unable to recover it. 00:29:31.459 [2024-10-11 12:06:33.970984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.459 [2024-10-11 12:06:33.971016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.459 qpair failed and we were unable to recover it. 00:29:31.459 [2024-10-11 12:06:33.971452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.459 [2024-10-11 12:06:33.971486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.459 qpair failed and we were unable to recover it. 00:29:31.459 [2024-10-11 12:06:33.971887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.459 [2024-10-11 12:06:33.971920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.459 qpair failed and we were unable to recover it. 00:29:31.459 [2024-10-11 12:06:33.972133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.459 [2024-10-11 12:06:33.972167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.459 qpair failed and we were unable to recover it. 00:29:31.459 [2024-10-11 12:06:33.972382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.459 [2024-10-11 12:06:33.972415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.459 qpair failed and we were unable to recover it. 00:29:31.459 [2024-10-11 12:06:33.972667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.459 [2024-10-11 12:06:33.972699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.459 qpair failed and we were unable to recover it. 00:29:31.459 [2024-10-11 12:06:33.972934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.459 [2024-10-11 12:06:33.972965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.459 qpair failed and we were unable to recover it. 00:29:31.459 [2024-10-11 12:06:33.973210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.459 [2024-10-11 12:06:33.973244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.459 qpair failed and we were unable to recover it. 00:29:31.459 [2024-10-11 12:06:33.973489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.459 [2024-10-11 12:06:33.973520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.459 qpair failed and we were unable to recover it. 
00:29:31.459 [2024-10-11 12:06:33.973904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.459 [2024-10-11 12:06:33.973936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.459 qpair failed and we were unable to recover it. 00:29:31.459 [2024-10-11 12:06:33.974157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.459 [2024-10-11 12:06:33.974189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.459 qpair failed and we were unable to recover it. 00:29:31.459 [2024-10-11 12:06:33.974602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.459 [2024-10-11 12:06:33.974634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.459 qpair failed and we were unable to recover it. 00:29:31.459 [2024-10-11 12:06:33.975017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.459 [2024-10-11 12:06:33.975049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.459 qpair failed and we were unable to recover it. 00:29:31.459 [2024-10-11 12:06:33.975420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.459 [2024-10-11 12:06:33.975452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.459 qpair failed and we were unable to recover it. 00:29:31.459 [2024-10-11 12:06:33.975823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.459 [2024-10-11 12:06:33.975857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.459 qpair failed and we were unable to recover it. 00:29:31.459 [2024-10-11 12:06:33.976226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.459 [2024-10-11 12:06:33.976261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.459 qpair failed and we were unable to recover it. 00:29:31.459 [2024-10-11 12:06:33.976609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.459 [2024-10-11 12:06:33.976641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.459 qpair failed and we were unable to recover it. 00:29:31.459 [2024-10-11 12:06:33.976996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.459 [2024-10-11 12:06:33.977030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.459 qpair failed and we were unable to recover it. 00:29:31.459 [2024-10-11 12:06:33.977387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.459 [2024-10-11 12:06:33.977421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.459 qpair failed and we were unable to recover it. 
00:29:31.459 [2024-10-11 12:06:33.977798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.459 [2024-10-11 12:06:33.977829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.459 qpair failed and we were unable to recover it. 00:29:31.459 [2024-10-11 12:06:33.978195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.459 [2024-10-11 12:06:33.978229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.459 qpair failed and we were unable to recover it. 00:29:31.459 [2024-10-11 12:06:33.978603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.459 [2024-10-11 12:06:33.978635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.459 qpair failed and we were unable to recover it. 00:29:31.459 [2024-10-11 12:06:33.978985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.459 [2024-10-11 12:06:33.979015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.459 qpair failed and we were unable to recover it. 00:29:31.459 [2024-10-11 12:06:33.979158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.459 [2024-10-11 12:06:33.979194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.459 qpair failed and we were unable to recover it. 00:29:31.459 [2024-10-11 12:06:33.979519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.459 [2024-10-11 12:06:33.979552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.459 qpair failed and we were unable to recover it. 00:29:31.459 [2024-10-11 12:06:33.979792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.459 [2024-10-11 12:06:33.979822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.459 qpair failed and we were unable to recover it. 00:29:31.459 [2024-10-11 12:06:33.980194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.459 [2024-10-11 12:06:33.980227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.459 qpair failed and we were unable to recover it. 00:29:31.459 [2024-10-11 12:06:33.980590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.459 [2024-10-11 12:06:33.980623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.459 qpair failed and we were unable to recover it. 00:29:31.459 [2024-10-11 12:06:33.980990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.459 [2024-10-11 12:06:33.981021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.459 qpair failed and we were unable to recover it. 
00:29:31.459 [2024-10-11 12:06:33.981423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.460 [2024-10-11 12:06:33.981456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.460 qpair failed and we were unable to recover it. 00:29:31.460 [2024-10-11 12:06:33.981818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.460 [2024-10-11 12:06:33.981852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.460 qpair failed and we were unable to recover it. 00:29:31.460 [2024-10-11 12:06:33.982100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.460 [2024-10-11 12:06:33.982134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.460 qpair failed and we were unable to recover it. 00:29:31.460 [2024-10-11 12:06:33.982489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.460 [2024-10-11 12:06:33.982523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.460 qpair failed and we were unable to recover it. 00:29:31.460 [2024-10-11 12:06:33.982890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.460 [2024-10-11 12:06:33.982930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.460 qpair failed and we were unable to recover it. 00:29:31.460 [2024-10-11 12:06:33.983302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.460 [2024-10-11 12:06:33.983334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.460 qpair failed and we were unable to recover it. 00:29:31.460 [2024-10-11 12:06:33.983695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.460 [2024-10-11 12:06:33.983727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.460 qpair failed and we were unable to recover it. 00:29:31.460 [2024-10-11 12:06:33.984099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.460 [2024-10-11 12:06:33.984134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.460 qpair failed and we were unable to recover it. 00:29:31.460 [2024-10-11 12:06:33.984495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.460 [2024-10-11 12:06:33.984528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.460 qpair failed and we were unable to recover it. 00:29:31.460 [2024-10-11 12:06:33.984890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.460 [2024-10-11 12:06:33.984923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420 00:29:31.460 qpair failed and we were unable to recover it. 
00:29:31.460 [2024-10-11 12:06:33.985317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.460 [2024-10-11 12:06:33.985352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420
00:29:31.460 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() refused with errno = 111, sock connection error on tqpair=0x7fb02c000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it.") repeats back to back from 12:06:33.985716 until just before 12:06:34.028; only the timestamps differ ...]
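errno = 111 in these entries is ECONNREFUSED on Linux: the connect() issued by SPDK's posix socket layer reaches 10.0.0.2, but nothing is accepting connections on port 4420 (the standard NVMe over Fabrics port), so each qpair connect attempt fails immediately and the initiator gives up on that qpair. The short C program below is an illustrative sketch only, not SPDK code; 10.0.0.2:4420 is simply the address and port taken from this log. Built with any C compiler and run against a port with no listener, it prints "connect: Connection refused (errno=111)".

    /* Minimal sketch (not SPDK code): what "connect() failed, errno = 111" means.
     * On Linux, errno 111 is ECONNREFUSED: the TCP connection attempt was
     * rejected, typically because nothing is listening on the target port.
     * 10.0.0.2:4420 below is simply the address/port taken from this log. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                  /* NVMe over Fabrics port */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            /* With no listener on the port this prints
             * "connect: Connection refused (errno=111)" on Linux. */
            fprintf(stderr, "connect: %s (errno=%d)\n", strerror(errno), errno);
            close(fd);
            return 1;
        }

        close(fd);
        return 0;
    }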
[... the failures continue in the same pattern from 12:06:34.028 to 12:06:34.060 (wall clock 00:29:31.463-00:29:31.465): first for tqpair=0x7fb02c000b90, with a short burst of identical errors for tqpair=0x7fb028000b90 around 12:06:34.029-12:06:34.031, then from roughly 12:06:34.048 onward for tqpair=0xc817c0; every attempt targets addr=10.0.0.2, port=4420, fails in posix_sock_create with errno = 111, and ends with "qpair failed and we were unable to recover it." ...]
00:29:31.465 [2024-10-11 12:06:34.060185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.465 [2024-10-11 12:06:34.060220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.465 qpair failed and we were unable to recover it. 00:29:31.465 [2024-10-11 12:06:34.060599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.465 [2024-10-11 12:06:34.060633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.465 qpair failed and we were unable to recover it. 00:29:31.465 [2024-10-11 12:06:34.060895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.465 [2024-10-11 12:06:34.060928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.465 qpair failed and we were unable to recover it. 00:29:31.465 [2024-10-11 12:06:34.061381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.465 [2024-10-11 12:06:34.061416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.465 qpair failed and we were unable to recover it. 00:29:31.465 [2024-10-11 12:06:34.061766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.465 [2024-10-11 12:06:34.061800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.465 qpair failed and we were unable to recover it. 00:29:31.465 [2024-10-11 12:06:34.062173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.465 [2024-10-11 12:06:34.062208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.465 qpair failed and we were unable to recover it. 00:29:31.465 [2024-10-11 12:06:34.062440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.465 [2024-10-11 12:06:34.062476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.465 qpair failed and we were unable to recover it. 00:29:31.465 [2024-10-11 12:06:34.062851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.465 [2024-10-11 12:06:34.062886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.465 qpair failed and we were unable to recover it. 00:29:31.465 [2024-10-11 12:06:34.063248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.465 [2024-10-11 12:06:34.063284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.465 qpair failed and we were unable to recover it. 00:29:31.465 [2024-10-11 12:06:34.063646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.465 [2024-10-11 12:06:34.063680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.465 qpair failed and we were unable to recover it. 
00:29:31.465 [2024-10-11 12:06:34.063906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.465 [2024-10-11 12:06:34.063941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.465 qpair failed and we were unable to recover it. 00:29:31.465 [2024-10-11 12:06:34.064324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.465 [2024-10-11 12:06:34.064359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.465 qpair failed and we were unable to recover it. 00:29:31.466 [2024-10-11 12:06:34.064712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-10-11 12:06:34.064752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 00:29:31.466 [2024-10-11 12:06:34.064967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-10-11 12:06:34.065001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 00:29:31.466 [2024-10-11 12:06:34.065356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-10-11 12:06:34.065391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 00:29:31.466 [2024-10-11 12:06:34.065628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-10-11 12:06:34.065662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 00:29:31.466 [2024-10-11 12:06:34.066037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-10-11 12:06:34.066081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 00:29:31.466 [2024-10-11 12:06:34.066468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-10-11 12:06:34.066500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 00:29:31.466 [2024-10-11 12:06:34.066883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-10-11 12:06:34.066915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 00:29:31.466 [2024-10-11 12:06:34.067288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-10-11 12:06:34.067323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 
00:29:31.466 [2024-10-11 12:06:34.067573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-10-11 12:06:34.067604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 00:29:31.466 [2024-10-11 12:06:34.067972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-10-11 12:06:34.068006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 00:29:31.466 [2024-10-11 12:06:34.068251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-10-11 12:06:34.068288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 00:29:31.466 [2024-10-11 12:06:34.068683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-10-11 12:06:34.068716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 00:29:31.466 [2024-10-11 12:06:34.069104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-10-11 12:06:34.069140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 00:29:31.466 [2024-10-11 12:06:34.069390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-10-11 12:06:34.069424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 00:29:31.466 [2024-10-11 12:06:34.069830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-10-11 12:06:34.069865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 00:29:31.466 [2024-10-11 12:06:34.070211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-10-11 12:06:34.070245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 00:29:31.466 [2024-10-11 12:06:34.070610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-10-11 12:06:34.070644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 00:29:31.466 [2024-10-11 12:06:34.071018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-10-11 12:06:34.071054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 
00:29:31.466 [2024-10-11 12:06:34.071421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-10-11 12:06:34.071455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 00:29:31.466 [2024-10-11 12:06:34.071823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-10-11 12:06:34.071857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 00:29:31.466 [2024-10-11 12:06:34.072231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-10-11 12:06:34.072267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 00:29:31.466 [2024-10-11 12:06:34.072512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-10-11 12:06:34.072548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 00:29:31.466 [2024-10-11 12:06:34.072908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-10-11 12:06:34.072941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 00:29:31.466 [2024-10-11 12:06:34.073302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-10-11 12:06:34.073339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 00:29:31.466 [2024-10-11 12:06:34.073555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-10-11 12:06:34.073587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 00:29:31.466 [2024-10-11 12:06:34.073842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-10-11 12:06:34.073878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 00:29:31.466 [2024-10-11 12:06:34.074085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-10-11 12:06:34.074120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 00:29:31.466 [2024-10-11 12:06:34.074376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-10-11 12:06:34.074416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 
00:29:31.466 [2024-10-11 12:06:34.074773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-10-11 12:06:34.074808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 00:29:31.466 [2024-10-11 12:06:34.075046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-10-11 12:06:34.075089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 00:29:31.466 [2024-10-11 12:06:34.075460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-10-11 12:06:34.075494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 00:29:31.466 [2024-10-11 12:06:34.075857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-10-11 12:06:34.075892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 00:29:31.466 [2024-10-11 12:06:34.076139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-10-11 12:06:34.076176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 00:29:31.466 [2024-10-11 12:06:34.076538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-10-11 12:06:34.076571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 00:29:31.466 [2024-10-11 12:06:34.076927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-10-11 12:06:34.076960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 00:29:31.466 [2024-10-11 12:06:34.077289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-10-11 12:06:34.077324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 00:29:31.466 [2024-10-11 12:06:34.077568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-10-11 12:06:34.077602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 00:29:31.466 [2024-10-11 12:06:34.077967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-10-11 12:06:34.078002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 
00:29:31.466 [2024-10-11 12:06:34.078377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-10-11 12:06:34.078413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 00:29:31.466 [2024-10-11 12:06:34.078781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-10-11 12:06:34.078814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 00:29:31.466 [2024-10-11 12:06:34.079201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-10-11 12:06:34.079236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.467 qpair failed and we were unable to recover it. 00:29:31.467 [2024-10-11 12:06:34.079590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.467 [2024-10-11 12:06:34.079625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.467 qpair failed and we were unable to recover it. 00:29:31.467 [2024-10-11 12:06:34.079850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.467 [2024-10-11 12:06:34.079884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.467 qpair failed and we were unable to recover it. 00:29:31.467 [2024-10-11 12:06:34.080128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.467 [2024-10-11 12:06:34.080162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.467 qpair failed and we were unable to recover it. 00:29:31.467 [2024-10-11 12:06:34.080524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.467 [2024-10-11 12:06:34.080555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.467 qpair failed and we were unable to recover it. 00:29:31.467 [2024-10-11 12:06:34.080919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.467 [2024-10-11 12:06:34.080952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.467 qpair failed and we were unable to recover it. 00:29:31.467 [2024-10-11 12:06:34.081164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.467 [2024-10-11 12:06:34.081197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.467 qpair failed and we were unable to recover it. 00:29:31.467 [2024-10-11 12:06:34.081576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.467 [2024-10-11 12:06:34.081608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.467 qpair failed and we were unable to recover it. 
00:29:31.467 [2024-10-11 12:06:34.081966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.467 [2024-10-11 12:06:34.082000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.467 qpair failed and we were unable to recover it. 00:29:31.467 [2024-10-11 12:06:34.082359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.467 [2024-10-11 12:06:34.082394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.467 qpair failed and we were unable to recover it. 00:29:31.467 [2024-10-11 12:06:34.082754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.467 [2024-10-11 12:06:34.082787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.467 qpair failed and we were unable to recover it. 00:29:31.467 [2024-10-11 12:06:34.082996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.467 [2024-10-11 12:06:34.083027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.467 qpair failed and we were unable to recover it. 00:29:31.467 [2024-10-11 12:06:34.083443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.467 [2024-10-11 12:06:34.083477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.467 qpair failed and we were unable to recover it. 00:29:31.467 [2024-10-11 12:06:34.083757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.467 [2024-10-11 12:06:34.083791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.467 qpair failed and we were unable to recover it. 00:29:31.467 [2024-10-11 12:06:34.084202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.467 [2024-10-11 12:06:34.084236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.467 qpair failed and we were unable to recover it. 00:29:31.467 [2024-10-11 12:06:34.084627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.467 [2024-10-11 12:06:34.084661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.467 qpair failed and we were unable to recover it. 00:29:31.467 [2024-10-11 12:06:34.085027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.467 [2024-10-11 12:06:34.085060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.467 qpair failed and we were unable to recover it. 00:29:31.467 [2024-10-11 12:06:34.085462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.467 [2024-10-11 12:06:34.085494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.467 qpair failed and we were unable to recover it. 
00:29:31.467 [2024-10-11 12:06:34.085870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.467 [2024-10-11 12:06:34.085903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.467 qpair failed and we were unable to recover it. 00:29:31.467 [2024-10-11 12:06:34.086289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.467 [2024-10-11 12:06:34.086324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.467 qpair failed and we were unable to recover it. 00:29:31.467 [2024-10-11 12:06:34.086691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.467 [2024-10-11 12:06:34.086723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.467 qpair failed and we were unable to recover it. 00:29:31.467 [2024-10-11 12:06:34.087088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.467 [2024-10-11 12:06:34.087123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.467 qpair failed and we were unable to recover it. 00:29:31.467 [2024-10-11 12:06:34.087333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.467 [2024-10-11 12:06:34.087369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.467 qpair failed and we were unable to recover it. 00:29:31.467 [2024-10-11 12:06:34.087767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.467 [2024-10-11 12:06:34.087801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.467 qpair failed and we were unable to recover it. 00:29:31.467 [2024-10-11 12:06:34.088177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.467 [2024-10-11 12:06:34.088212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.467 qpair failed and we were unable to recover it. 00:29:31.467 [2024-10-11 12:06:34.088583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.467 [2024-10-11 12:06:34.088617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.467 qpair failed and we were unable to recover it. 00:29:31.467 [2024-10-11 12:06:34.088897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.467 [2024-10-11 12:06:34.088931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.467 qpair failed and we were unable to recover it. 00:29:31.467 [2024-10-11 12:06:34.089173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.467 [2024-10-11 12:06:34.089208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.467 qpair failed and we were unable to recover it. 
00:29:31.467 [2024-10-11 12:06:34.089446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.467 [2024-10-11 12:06:34.089488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.467 qpair failed and we were unable to recover it. 00:29:31.467 [2024-10-11 12:06:34.089850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.467 [2024-10-11 12:06:34.089882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.467 qpair failed and we were unable to recover it. 00:29:31.467 [2024-10-11 12:06:34.090253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.467 [2024-10-11 12:06:34.090286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.467 qpair failed and we were unable to recover it. 00:29:31.467 [2024-10-11 12:06:34.090491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.467 [2024-10-11 12:06:34.090524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.467 qpair failed and we were unable to recover it. 00:29:31.467 [2024-10-11 12:06:34.090767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.467 [2024-10-11 12:06:34.090803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.467 qpair failed and we were unable to recover it. 00:29:31.467 [2024-10-11 12:06:34.091153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.467 [2024-10-11 12:06:34.091188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.467 qpair failed and we were unable to recover it. 00:29:31.467 [2024-10-11 12:06:34.091548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.467 [2024-10-11 12:06:34.091582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.467 qpair failed and we were unable to recover it. 00:29:31.467 [2024-10-11 12:06:34.091958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.467 [2024-10-11 12:06:34.091992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.467 qpair failed and we were unable to recover it. 00:29:31.467 [2024-10-11 12:06:34.092351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.467 [2024-10-11 12:06:34.092386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.467 qpair failed and we were unable to recover it. 00:29:31.468 [2024-10-11 12:06:34.092597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.468 [2024-10-11 12:06:34.092631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.468 qpair failed and we were unable to recover it. 
00:29:31.468 [2024-10-11 12:06:34.092991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.468 [2024-10-11 12:06:34.093024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.468 qpair failed and we were unable to recover it. 00:29:31.468 [2024-10-11 12:06:34.093419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.468 [2024-10-11 12:06:34.093453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.468 qpair failed and we were unable to recover it. 00:29:31.468 [2024-10-11 12:06:34.093830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.468 [2024-10-11 12:06:34.093863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.468 qpair failed and we were unable to recover it. 00:29:31.468 [2024-10-11 12:06:34.094234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.468 [2024-10-11 12:06:34.094270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.468 qpair failed and we were unable to recover it. 00:29:31.468 [2024-10-11 12:06:34.094674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.468 [2024-10-11 12:06:34.094709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.468 qpair failed and we were unable to recover it. 00:29:31.468 [2024-10-11 12:06:34.095085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.468 [2024-10-11 12:06:34.095121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.468 qpair failed and we were unable to recover it. 00:29:31.468 [2024-10-11 12:06:34.095381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.468 [2024-10-11 12:06:34.095415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.468 qpair failed and we were unable to recover it. 00:29:31.468 [2024-10-11 12:06:34.095774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.468 [2024-10-11 12:06:34.095807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.468 qpair failed and we were unable to recover it. 00:29:31.468 [2024-10-11 12:06:34.096096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.468 [2024-10-11 12:06:34.096133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.468 qpair failed and we were unable to recover it. 00:29:31.468 [2024-10-11 12:06:34.096378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.468 [2024-10-11 12:06:34.096412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.468 qpair failed and we were unable to recover it. 
00:29:31.468 [2024-10-11 12:06:34.096628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.468 [2024-10-11 12:06:34.096661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.468 qpair failed and we were unable to recover it. 00:29:31.468 [2024-10-11 12:06:34.097046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.468 [2024-10-11 12:06:34.097096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.468 qpair failed and we were unable to recover it. 00:29:31.468 [2024-10-11 12:06:34.097331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.468 [2024-10-11 12:06:34.097364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.468 qpair failed and we were unable to recover it. 00:29:31.468 [2024-10-11 12:06:34.097745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.468 [2024-10-11 12:06:34.097779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.468 qpair failed and we were unable to recover it. 00:29:31.468 [2024-10-11 12:06:34.098151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.468 [2024-10-11 12:06:34.098184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.468 qpair failed and we were unable to recover it. 00:29:31.468 [2024-10-11 12:06:34.098591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.468 [2024-10-11 12:06:34.098624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.468 qpair failed and we were unable to recover it. 00:29:31.468 [2024-10-11 12:06:34.098872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.468 [2024-10-11 12:06:34.098904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.468 qpair failed and we were unable to recover it. 00:29:31.468 [2024-10-11 12:06:34.099240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.468 [2024-10-11 12:06:34.099279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.468 qpair failed and we were unable to recover it. 00:29:31.468 [2024-10-11 12:06:34.099520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.468 [2024-10-11 12:06:34.099553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.468 qpair failed and we were unable to recover it. 00:29:31.468 [2024-10-11 12:06:34.099759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.468 [2024-10-11 12:06:34.099790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.468 qpair failed and we were unable to recover it. 
00:29:31.468 [2024-10-11 12:06:34.100001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.468 [2024-10-11 12:06:34.100034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.468 qpair failed and we were unable to recover it. 00:29:31.468 [2024-10-11 12:06:34.100256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.468 [2024-10-11 12:06:34.100290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.468 qpair failed and we were unable to recover it. 00:29:31.468 [2024-10-11 12:06:34.100602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.468 [2024-10-11 12:06:34.100635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.468 qpair failed and we were unable to recover it. 00:29:31.468 [2024-10-11 12:06:34.101002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.468 [2024-10-11 12:06:34.101034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.468 qpair failed and we were unable to recover it. 00:29:31.468 [2024-10-11 12:06:34.101413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.468 [2024-10-11 12:06:34.101448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.468 qpair failed and we were unable to recover it. 00:29:31.468 [2024-10-11 12:06:34.101673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.468 [2024-10-11 12:06:34.101706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.468 qpair failed and we were unable to recover it. 00:29:31.468 [2024-10-11 12:06:34.102009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.468 [2024-10-11 12:06:34.102042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.468 qpair failed and we were unable to recover it. 00:29:31.468 [2024-10-11 12:06:34.102449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.468 [2024-10-11 12:06:34.102482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.468 qpair failed and we were unable to recover it. 00:29:31.468 [2024-10-11 12:06:34.102870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.468 [2024-10-11 12:06:34.102903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.468 qpair failed and we were unable to recover it. 00:29:31.468 [2024-10-11 12:06:34.103287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.468 [2024-10-11 12:06:34.103321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.468 qpair failed and we were unable to recover it. 
00:29:31.468 [2024-10-11 12:06:34.103563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.468 [2024-10-11 12:06:34.103597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.468 qpair failed and we were unable to recover it. 00:29:31.468 [2024-10-11 12:06:34.103835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.468 [2024-10-11 12:06:34.103867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.468 qpair failed and we were unable to recover it. 00:29:31.468 [2024-10-11 12:06:34.104228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.468 [2024-10-11 12:06:34.104260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.468 qpair failed and we were unable to recover it. 00:29:31.468 [2024-10-11 12:06:34.104636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.468 [2024-10-11 12:06:34.104670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.468 qpair failed and we were unable to recover it. 00:29:31.468 [2024-10-11 12:06:34.105035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.468 [2024-10-11 12:06:34.105081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.468 qpair failed and we were unable to recover it. 00:29:31.468 [2024-10-11 12:06:34.105436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.468 [2024-10-11 12:06:34.105467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.468 qpair failed and we were unable to recover it. 00:29:31.468 [2024-10-11 12:06:34.105676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.468 [2024-10-11 12:06:34.105709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.468 qpair failed and we were unable to recover it. 00:29:31.468 [2024-10-11 12:06:34.106076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.468 [2024-10-11 12:06:34.106110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.468 qpair failed and we were unable to recover it. 00:29:31.468 [2024-10-11 12:06:34.106475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.468 [2024-10-11 12:06:34.106507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.468 qpair failed and we were unable to recover it. 00:29:31.468 [2024-10-11 12:06:34.106890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.469 [2024-10-11 12:06:34.106922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.469 qpair failed and we were unable to recover it. 
00:29:31.469 [2024-10-11 12:06:34.107132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.469 [2024-10-11 12:06:34.107165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420
00:29:31.469 qpair failed and we were unable to recover it.
[... the same three-message failure for tqpair=0xc817c0 (connect() failed, errno = 111; sock connection error with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats with timestamps advancing from 12:06:34.107527 to 12:06:34.171722 ...]
00:29:31.743 [2024-10-11 12:06:34.171812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.743 [2024-10-11 12:06:34.171843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420
00:29:31.743 qpair failed and we were unable to recover it.
00:29:31.743 [2024-10-11 12:06:34.172275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.743 [2024-10-11 12:06:34.172402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb034000b90 with addr=10.0.0.2, port=4420
00:29:31.743 qpair failed and we were unable to recover it.
00:29:31.743 [2024-10-11 12:06:34.172732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.743 [2024-10-11 12:06:34.172774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb034000b90 with addr=10.0.0.2, port=4420
00:29:31.743 qpair failed and we were unable to recover it.
00:29:31.743 [2024-10-11 12:06:34.173026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.743 [2024-10-11 12:06:34.173061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb034000b90 with addr=10.0.0.2, port=4420
00:29:31.743 qpair failed and we were unable to recover it.
00:29:31.743 [2024-10-11 12:06:34.173531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.743 [2024-10-11 12:06:34.173640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb034000b90 with addr=10.0.0.2, port=4420
00:29:31.743 qpair failed and we were unable to recover it.
00:29:31.743 [2024-10-11 12:06:34.174095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.743 [2024-10-11 12:06:34.174140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb034000b90 with addr=10.0.0.2, port=4420
00:29:31.743 qpair failed and we were unable to recover it.
00:29:31.743 [2024-10-11 12:06:34.174530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.743 [2024-10-11 12:06:34.174564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb034000b90 with addr=10.0.0.2, port=4420
00:29:31.743 qpair failed and we were unable to recover it.
00:29:31.743 [2024-10-11 12:06:34.174929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.743 [2024-10-11 12:06:34.174962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb034000b90 with addr=10.0.0.2, port=4420
00:29:31.743 qpair failed and we were unable to recover it.
00:29:31.743 [2024-10-11 12:06:34.175472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.743 [2024-10-11 12:06:34.175581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb034000b90 with addr=10.0.0.2, port=4420
00:29:31.743 qpair failed and we were unable to recover it.
00:29:31.743 [2024-10-11 12:06:34.175970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.743 [2024-10-11 12:06:34.176006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420
00:29:31.743 qpair failed and we were unable to recover it.
00:29:31.743 [2024-10-11 12:06:34.176425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.743 [2024-10-11 12:06:34.176458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.743 qpair failed and we were unable to recover it. 00:29:31.743 [2024-10-11 12:06:34.176821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.743 [2024-10-11 12:06:34.176854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.743 qpair failed and we were unable to recover it. 00:29:31.743 [2024-10-11 12:06:34.177356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.743 [2024-10-11 12:06:34.177465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.743 qpair failed and we were unable to recover it. 00:29:31.743 [2024-10-11 12:06:34.177905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.743 [2024-10-11 12:06:34.177945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.743 qpair failed and we were unable to recover it. 00:29:31.743 [2024-10-11 12:06:34.178311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.743 [2024-10-11 12:06:34.178347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.743 qpair failed and we were unable to recover it. 00:29:31.744 [2024-10-11 12:06:34.178601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-10-11 12:06:34.178635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-10-11 12:06:34.178996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-10-11 12:06:34.179031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-10-11 12:06:34.179445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-10-11 12:06:34.179479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-10-11 12:06:34.179853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-10-11 12:06:34.179885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-10-11 12:06:34.180249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-10-11 12:06:34.180283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 
00:29:31.744 [2024-10-11 12:06:34.180698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-10-11 12:06:34.180730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-10-11 12:06:34.181106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-10-11 12:06:34.181142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-10-11 12:06:34.181502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-10-11 12:06:34.181535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-10-11 12:06:34.181894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-10-11 12:06:34.181927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-10-11 12:06:34.182268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-10-11 12:06:34.182302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-10-11 12:06:34.182664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-10-11 12:06:34.182697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-10-11 12:06:34.183071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-10-11 12:06:34.183106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-10-11 12:06:34.183476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-10-11 12:06:34.183508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-10-11 12:06:34.183868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-10-11 12:06:34.183899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-10-11 12:06:34.184262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-10-11 12:06:34.184295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 
00:29:31.744 [2024-10-11 12:06:34.184512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-10-11 12:06:34.184544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-10-11 12:06:34.184904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-10-11 12:06:34.184936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-10-11 12:06:34.185177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-10-11 12:06:34.185211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-10-11 12:06:34.185464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-10-11 12:06:34.185498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-10-11 12:06:34.185705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-10-11 12:06:34.185736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-10-11 12:06:34.186097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-10-11 12:06:34.186131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-10-11 12:06:34.186364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-10-11 12:06:34.186396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-10-11 12:06:34.186696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-10-11 12:06:34.186730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-10-11 12:06:34.187091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-10-11 12:06:34.187124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-10-11 12:06:34.187500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-10-11 12:06:34.187533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 
00:29:31.744 [2024-10-11 12:06:34.187919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-10-11 12:06:34.187950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-10-11 12:06:34.188344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-10-11 12:06:34.188377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-10-11 12:06:34.188791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-10-11 12:06:34.188829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-10-11 12:06:34.189056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-10-11 12:06:34.189100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-10-11 12:06:34.189468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-10-11 12:06:34.189499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-10-11 12:06:34.189771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-10-11 12:06:34.189819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-10-11 12:06:34.190033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-10-11 12:06:34.190075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-10-11 12:06:34.190502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-10-11 12:06:34.190534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-10-11 12:06:34.190700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-10-11 12:06:34.190730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-10-11 12:06:34.190964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-10-11 12:06:34.190997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 
00:29:31.744 [2024-10-11 12:06:34.191249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-10-11 12:06:34.191281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-10-11 12:06:34.191511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-10-11 12:06:34.191546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-10-11 12:06:34.191916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-10-11 12:06:34.191949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-10-11 12:06:34.192341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-10-11 12:06:34.192373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-10-11 12:06:34.192743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-10-11 12:06:34.192775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-10-11 12:06:34.193033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-10-11 12:06:34.193075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 00:29:31.745 [2024-10-11 12:06:34.193472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-10-11 12:06:34.193505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 00:29:31.745 [2024-10-11 12:06:34.193871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-10-11 12:06:34.193904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 00:29:31.745 [2024-10-11 12:06:34.194244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-10-11 12:06:34.194277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 00:29:31.745 [2024-10-11 12:06:34.194667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-10-11 12:06:34.194699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 
00:29:31.745 [2024-10-11 12:06:34.194919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-10-11 12:06:34.194950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 00:29:31.745 [2024-10-11 12:06:34.195318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-10-11 12:06:34.195352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 00:29:31.745 [2024-10-11 12:06:34.195742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-10-11 12:06:34.195774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 00:29:31.745 [2024-10-11 12:06:34.196044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-10-11 12:06:34.196095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 00:29:31.745 12:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:31.745 [2024-10-11 12:06:34.196473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-10-11 12:06:34.196505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 00:29:31.745 12:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:29:31.745 [2024-10-11 12:06:34.196863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-10-11 12:06:34.196894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 00:29:31.745 12:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:31.745 12:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:31.745 [2024-10-11 12:06:34.197257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-10-11 12:06:34.197293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 00:29:31.745 12:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:31.745 [2024-10-11 12:06:34.197640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-10-11 12:06:34.197680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 
00:29:31.745 [2024-10-11 12:06:34.198034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-10-11 12:06:34.198076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 00:29:31.745 [2024-10-11 12:06:34.198424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-10-11 12:06:34.198457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 00:29:31.745 [2024-10-11 12:06:34.198821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-10-11 12:06:34.198855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 00:29:31.745 [2024-10-11 12:06:34.199080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-10-11 12:06:34.199112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 00:29:31.745 [2024-10-11 12:06:34.199482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-10-11 12:06:34.199515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 00:29:31.745 [2024-10-11 12:06:34.199871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-10-11 12:06:34.199904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 00:29:31.745 [2024-10-11 12:06:34.200290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-10-11 12:06:34.200324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 00:29:31.745 [2024-10-11 12:06:34.200756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-10-11 12:06:34.200789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 00:29:31.745 [2024-10-11 12:06:34.201246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-10-11 12:06:34.201279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 00:29:31.745 [2024-10-11 12:06:34.201635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-10-11 12:06:34.201667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 
00:29:31.745 [2024-10-11 12:06:34.202025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-10-11 12:06:34.202059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 00:29:31.745 [2024-10-11 12:06:34.202439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-10-11 12:06:34.202478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 00:29:31.745 [2024-10-11 12:06:34.202829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-10-11 12:06:34.202862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 00:29:31.745 [2024-10-11 12:06:34.203235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-10-11 12:06:34.203269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 00:29:31.745 [2024-10-11 12:06:34.203478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-10-11 12:06:34.203508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 00:29:31.745 [2024-10-11 12:06:34.203869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-10-11 12:06:34.203901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 00:29:31.745 [2024-10-11 12:06:34.204273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-10-11 12:06:34.204307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 00:29:31.745 [2024-10-11 12:06:34.204693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-10-11 12:06:34.204726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 00:29:31.745 [2024-10-11 12:06:34.204945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-10-11 12:06:34.204975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 00:29:31.745 [2024-10-11 12:06:34.205198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-10-11 12:06:34.205231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 
00:29:31.745 [2024-10-11 12:06:34.205601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-10-11 12:06:34.205634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 00:29:31.745 [2024-10-11 12:06:34.205862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-10-11 12:06:34.205894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 00:29:31.746 [2024-10-11 12:06:34.206154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-10-11 12:06:34.206186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 00:29:31.746 [2024-10-11 12:06:34.206421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-10-11 12:06:34.206457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 00:29:31.746 [2024-10-11 12:06:34.206838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-10-11 12:06:34.206873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 00:29:31.746 [2024-10-11 12:06:34.207281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-10-11 12:06:34.207315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 00:29:31.746 [2024-10-11 12:06:34.207701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-10-11 12:06:34.207733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 00:29:31.746 [2024-10-11 12:06:34.208109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-10-11 12:06:34.208144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 00:29:31.746 [2024-10-11 12:06:34.208514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-10-11 12:06:34.208548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 00:29:31.746 [2024-10-11 12:06:34.208952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-10-11 12:06:34.208985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 
00:29:31.746 [2024-10-11 12:06:34.209230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-10-11 12:06:34.209262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 00:29:31.746 [2024-10-11 12:06:34.209620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-10-11 12:06:34.209652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 00:29:31.746 [2024-10-11 12:06:34.209856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-10-11 12:06:34.209886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 00:29:31.746 [2024-10-11 12:06:34.210138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-10-11 12:06:34.210171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 00:29:31.746 [2024-10-11 12:06:34.210541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-10-11 12:06:34.210572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 00:29:31.746 [2024-10-11 12:06:34.210924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-10-11 12:06:34.210957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 00:29:31.746 [2024-10-11 12:06:34.211184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-10-11 12:06:34.211219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 00:29:31.746 [2024-10-11 12:06:34.211580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-10-11 12:06:34.211613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 00:29:31.746 [2024-10-11 12:06:34.211967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-10-11 12:06:34.212002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 00:29:31.746 [2024-10-11 12:06:34.212364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-10-11 12:06:34.212397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 
00:29:31.746 [2024-10-11 12:06:34.212795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-10-11 12:06:34.212831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 00:29:31.746 [2024-10-11 12:06:34.213087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-10-11 12:06:34.213120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 00:29:31.746 [2024-10-11 12:06:34.213503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-10-11 12:06:34.213536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 00:29:31.746 [2024-10-11 12:06:34.213885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-10-11 12:06:34.213918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 00:29:31.746 [2024-10-11 12:06:34.214253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-10-11 12:06:34.214287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 00:29:31.746 [2024-10-11 12:06:34.214683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-10-11 12:06:34.214718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 00:29:31.746 [2024-10-11 12:06:34.215083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-10-11 12:06:34.215117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 00:29:31.746 [2024-10-11 12:06:34.215429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-10-11 12:06:34.215460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 00:29:31.746 [2024-10-11 12:06:34.215867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-10-11 12:06:34.215900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 00:29:31.746 [2024-10-11 12:06:34.216154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-10-11 12:06:34.216187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 
00:29:31.746 [2024-10-11 12:06:34.216556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-10-11 12:06:34.216590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 00:29:31.746 [2024-10-11 12:06:34.216953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-10-11 12:06:34.216988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 00:29:31.746 [2024-10-11 12:06:34.217374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-10-11 12:06:34.217409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 00:29:31.746 [2024-10-11 12:06:34.217613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-10-11 12:06:34.217646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 00:29:31.746 [2024-10-11 12:06:34.217991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-10-11 12:06:34.218027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 00:29:31.746 [2024-10-11 12:06:34.218425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-10-11 12:06:34.218460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 00:29:31.746 [2024-10-11 12:06:34.218859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-10-11 12:06:34.218892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 00:29:31.746 [2024-10-11 12:06:34.219257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-10-11 12:06:34.219291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 00:29:31.746 [2024-10-11 12:06:34.219534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-10-11 12:06:34.219566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 00:29:31.746 [2024-10-11 12:06:34.219937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-10-11 12:06:34.219971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 
00:29:31.746 [2024-10-11 12:06:34.220077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-10-11 12:06:34.220109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 00:29:31.746 [2024-10-11 12:06:34.220503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.747 [2024-10-11 12:06:34.220534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.747 qpair failed and we were unable to recover it. 00:29:31.747 [2024-10-11 12:06:34.220899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.747 [2024-10-11 12:06:34.220933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.747 qpair failed and we were unable to recover it. 00:29:31.747 [2024-10-11 12:06:34.221149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.747 [2024-10-11 12:06:34.221182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.747 qpair failed and we were unable to recover it. 00:29:31.747 [2024-10-11 12:06:34.221427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.747 [2024-10-11 12:06:34.221460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.747 qpair failed and we were unable to recover it. 00:29:31.747 [2024-10-11 12:06:34.221680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.747 [2024-10-11 12:06:34.221710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.747 qpair failed and we were unable to recover it. 00:29:31.747 [2024-10-11 12:06:34.222101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.747 [2024-10-11 12:06:34.222135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.747 qpair failed and we were unable to recover it. 00:29:31.747 [2024-10-11 12:06:34.222407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.747 [2024-10-11 12:06:34.222448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.747 qpair failed and we were unable to recover it. 00:29:31.747 [2024-10-11 12:06:34.222813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.747 [2024-10-11 12:06:34.222845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.747 qpair failed and we were unable to recover it. 00:29:31.747 [2024-10-11 12:06:34.223219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.747 [2024-10-11 12:06:34.223253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.747 qpair failed and we were unable to recover it. 
00:29:31.747 [2024-10-11 12:06:34.223608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.747 [2024-10-11 12:06:34.223642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.747 qpair failed and we were unable to recover it. 00:29:31.747 [2024-10-11 12:06:34.224013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.747 [2024-10-11 12:06:34.224047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.747 qpair failed and we were unable to recover it. 00:29:31.747 [2024-10-11 12:06:34.224441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.747 [2024-10-11 12:06:34.224474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.747 qpair failed and we were unable to recover it. 00:29:31.747 [2024-10-11 12:06:34.224837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.747 [2024-10-11 12:06:34.224868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.747 qpair failed and we were unable to recover it. 00:29:31.747 [2024-10-11 12:06:34.225228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.747 [2024-10-11 12:06:34.225262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.747 qpair failed and we were unable to recover it. 00:29:31.747 [2024-10-11 12:06:34.225469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.747 [2024-10-11 12:06:34.225503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.747 qpair failed and we were unable to recover it. 00:29:31.747 [2024-10-11 12:06:34.225849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.747 [2024-10-11 12:06:34.225883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.747 qpair failed and we were unable to recover it. 00:29:31.747 [2024-10-11 12:06:34.226125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.747 [2024-10-11 12:06:34.226158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.747 qpair failed and we were unable to recover it. 00:29:31.747 [2024-10-11 12:06:34.226551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.747 [2024-10-11 12:06:34.226584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.747 qpair failed and we were unable to recover it. 00:29:31.747 [2024-10-11 12:06:34.226937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.747 [2024-10-11 12:06:34.226971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.747 qpair failed and we were unable to recover it. 
00:29:31.747 [2024-10-11 12:06:34.227306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.747 [2024-10-11 12:06:34.227339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.747 qpair failed and we were unable to recover it. 00:29:31.747 [2024-10-11 12:06:34.227713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.747 [2024-10-11 12:06:34.227746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.747 qpair failed and we were unable to recover it. 00:29:31.747 [2024-10-11 12:06:34.227972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.747 [2024-10-11 12:06:34.228003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.747 qpair failed and we were unable to recover it. 00:29:31.747 [2024-10-11 12:06:34.228380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.747 [2024-10-11 12:06:34.228415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.747 qpair failed and we were unable to recover it. 00:29:31.747 [2024-10-11 12:06:34.228818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.747 [2024-10-11 12:06:34.228850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.747 qpair failed and we were unable to recover it. 00:29:31.747 [2024-10-11 12:06:34.229205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.747 [2024-10-11 12:06:34.229239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.747 qpair failed and we were unable to recover it. 00:29:31.747 [2024-10-11 12:06:34.229601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.747 [2024-10-11 12:06:34.229636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.747 qpair failed and we were unable to recover it. 00:29:31.747 [2024-10-11 12:06:34.229985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.747 [2024-10-11 12:06:34.230018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.747 qpair failed and we were unable to recover it. 00:29:31.747 [2024-10-11 12:06:34.230412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.747 [2024-10-11 12:06:34.230448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.747 qpair failed and we were unable to recover it. 00:29:31.747 [2024-10-11 12:06:34.230888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.747 [2024-10-11 12:06:34.230920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.747 qpair failed and we were unable to recover it. 
00:29:31.747 [2024-10-11 12:06:34.231293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.747 [2024-10-11 12:06:34.231328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.747 qpair failed and we were unable to recover it. 00:29:31.747 [2024-10-11 12:06:34.231685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.747 [2024-10-11 12:06:34.231718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.747 qpair failed and we were unable to recover it. 00:29:31.747 [2024-10-11 12:06:34.231958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.747 [2024-10-11 12:06:34.231988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.747 qpair failed and we were unable to recover it. 00:29:31.747 [2024-10-11 12:06:34.232417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.747 [2024-10-11 12:06:34.232451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.747 qpair failed and we were unable to recover it. 00:29:31.747 [2024-10-11 12:06:34.232804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.747 [2024-10-11 12:06:34.232838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.747 qpair failed and we were unable to recover it. 00:29:31.747 [2024-10-11 12:06:34.233096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.747 [2024-10-11 12:06:34.233131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.747 qpair failed and we were unable to recover it. 00:29:31.747 [2024-10-11 12:06:34.233597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.747 [2024-10-11 12:06:34.233630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.747 qpair failed and we were unable to recover it. 00:29:31.747 [2024-10-11 12:06:34.233982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.747 [2024-10-11 12:06:34.234016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.747 qpair failed and we were unable to recover it. 00:29:31.747 [2024-10-11 12:06:34.234410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.747 [2024-10-11 12:06:34.234444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.747 qpair failed and we were unable to recover it. 00:29:31.747 [2024-10-11 12:06:34.234675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.747 [2024-10-11 12:06:34.234705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.747 qpair failed and we were unable to recover it. 
00:29:31.747 [2024-10-11 12:06:34.234963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.747 [2024-10-11 12:06:34.235011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.747 qpair failed and we were unable to recover it. 00:29:31.747 [2024-10-11 12:06:34.235197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.748 [2024-10-11 12:06:34.235229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.748 qpair failed and we were unable to recover it. 00:29:31.748 [2024-10-11 12:06:34.235445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.748 [2024-10-11 12:06:34.235478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.748 qpair failed and we were unable to recover it. 00:29:31.748 [2024-10-11 12:06:34.235732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.748 [2024-10-11 12:06:34.235763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.748 qpair failed and we were unable to recover it. 00:29:31.748 [2024-10-11 12:06:34.236181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.748 [2024-10-11 12:06:34.236215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.748 qpair failed and we were unable to recover it. 00:29:31.748 [2024-10-11 12:06:34.236629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.748 [2024-10-11 12:06:34.236662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.748 qpair failed and we were unable to recover it. 00:29:31.748 [2024-10-11 12:06:34.237017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.748 [2024-10-11 12:06:34.237048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.748 qpair failed and we were unable to recover it. 00:29:31.748 [2024-10-11 12:06:34.237445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.748 [2024-10-11 12:06:34.237480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.748 qpair failed and we were unable to recover it. 00:29:31.748 [2024-10-11 12:06:34.237858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.748 [2024-10-11 12:06:34.237896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.748 qpair failed and we were unable to recover it. 00:29:31.748 [2024-10-11 12:06:34.238107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.748 [2024-10-11 12:06:34.238139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.748 qpair failed and we were unable to recover it. 
00:29:31.748 12:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:31.748 12:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
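The trace above installs the cleanup trap and then creates a 64 MiB malloc bdev with a 512-byte block size through rpc_cmd. Issued directly with SPDK's scripts/rpc.py, the same step might look like this sketch (the socket path is the usual default and an assumption here, not something shown in the log):

# Create the 64 MiB / 512-byte-block malloc bdev named Malloc0, mirroring the
# rpc_cmd arguments above. -s selects the target's RPC socket.
./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0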
00:29:31.748 12:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:31.748 12:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:31.751 Malloc0
00:29:31.751 12:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:31.751 12:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:29:31.751 12:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:31.751 12:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
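Once the bdev name Malloc0 is echoed back, the script creates the NVMe-oF TCP transport. A standalone equivalent might be the sketch below (the trailing -o is carried over verbatim from the test's transport options; the socket path is again an assumed default):

# Create the TCP transport on the running nvmf target, as in the rpc_cmd trace above.
./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o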
00:29:31.751 [2024-10-11 12:06:34.283746] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
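The *** TCP Transport Init *** notice from tcp.c confirms the transport came up on the target side. Verifying the same thing over RPC is a one-liner; this query is not part of the test flow shown here, just an illustrative sketch:

# List the transports the target currently exposes; a TCP entry should be present
# after the "TCP Transport Init" notice.
./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_get_transports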
00:29:31.752 12:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:31.752 12:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
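Next the script creates the subsystem nqn.2016-06.io.spdk:cnode1 with allow-any-host and a fixed serial number. A direct rpc.py equivalent is sketched below (the follow-up namespace and listener RPCs are not part of this excerpt and are therefore not shown):

# Create the subsystem used by the disconnect test: -a allows any host to connect,
# -s sets the serial number, matching the rpc_cmd arguments above.
./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001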
00:29:31.752 12:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:31.752 12:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:31.752 [2024-10-11 12:06:34.296601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.752 [2024-10-11 12:06:34.296630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.752 qpair failed and we were unable to recover it. 00:29:31.752 [2024-10-11 12:06:34.296760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.752 [2024-10-11 12:06:34.296794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.752 qpair failed and we were unable to recover it. 00:29:31.752 [2024-10-11 12:06:34.297034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.752 [2024-10-11 12:06:34.297090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.752 qpair failed and we were unable to recover it. 00:29:31.752 [2024-10-11 12:06:34.297342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.752 [2024-10-11 12:06:34.297371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.752 qpair failed and we were unable to recover it. 00:29:31.752 [2024-10-11 12:06:34.297591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.752 [2024-10-11 12:06:34.297620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.752 qpair failed and we were unable to recover it. 00:29:31.752 [2024-10-11 12:06:34.297836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.752 [2024-10-11 12:06:34.297865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.752 qpair failed and we were unable to recover it. 00:29:31.752 [2024-10-11 12:06:34.298151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.752 [2024-10-11 12:06:34.298188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.752 qpair failed and we were unable to recover it. 00:29:31.752 [2024-10-11 12:06:34.298576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.752 [2024-10-11 12:06:34.298607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.752 qpair failed and we were unable to recover it. 00:29:31.752 [2024-10-11 12:06:34.298964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.752 [2024-10-11 12:06:34.298994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.752 qpair failed and we were unable to recover it. 00:29:31.752 [2024-10-11 12:06:34.299232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.752 [2024-10-11 12:06:34.299262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.752 qpair failed and we were unable to recover it. 
00:29:31.752 [2024-10-11 12:06:34.299682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.752 [2024-10-11 12:06:34.299712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.752 qpair failed and we were unable to recover it. 00:29:31.752 [2024-10-11 12:06:34.299931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.752 [2024-10-11 12:06:34.299960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.752 qpair failed and we were unable to recover it. 00:29:31.752 [2024-10-11 12:06:34.300259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.752 [2024-10-11 12:06:34.300290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.752 qpair failed and we were unable to recover it. 00:29:31.752 [2024-10-11 12:06:34.300661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.752 [2024-10-11 12:06:34.300690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.752 qpair failed and we were unable to recover it. 00:29:31.752 [2024-10-11 12:06:34.300919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.752 [2024-10-11 12:06:34.300949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.752 qpair failed and we were unable to recover it. 00:29:31.752 [2024-10-11 12:06:34.301253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.752 [2024-10-11 12:06:34.301284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.752 qpair failed and we were unable to recover it. 00:29:31.752 [2024-10-11 12:06:34.301651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.752 [2024-10-11 12:06:34.301681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.752 qpair failed and we were unable to recover it. 00:29:31.752 [2024-10-11 12:06:34.302047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.752 [2024-10-11 12:06:34.302100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.752 qpair failed and we were unable to recover it. 00:29:31.752 [2024-10-11 12:06:34.302504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.752 [2024-10-11 12:06:34.302534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.752 qpair failed and we were unable to recover it. 00:29:31.752 [2024-10-11 12:06:34.302773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.752 [2024-10-11 12:06:34.302802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.752 qpair failed and we were unable to recover it. 
00:29:31.752 [2024-10-11 12:06:34.303183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.752 [2024-10-11 12:06:34.303215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.752 qpair failed and we were unable to recover it. 00:29:31.753 [2024-10-11 12:06:34.303589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.753 [2024-10-11 12:06:34.303618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.753 qpair failed and we were unable to recover it. 00:29:31.753 [2024-10-11 12:06:34.303985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.753 [2024-10-11 12:06:34.304014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.753 qpair failed and we were unable to recover it. 00:29:31.753 [2024-10-11 12:06:34.304456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.753 [2024-10-11 12:06:34.304488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.753 qpair failed and we were unable to recover it. 00:29:31.753 [2024-10-11 12:06:34.304778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.753 [2024-10-11 12:06:34.304808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.753 qpair failed and we were unable to recover it. 00:29:31.753 12:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.753 [2024-10-11 12:06:34.305188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.753 [2024-10-11 12:06:34.305219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.753 qpair failed and we were unable to recover it. 00:29:31.753 12:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:31.753 12:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.753 [2024-10-11 12:06:34.305645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.753 [2024-10-11 12:06:34.305675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.753 qpair failed and we were unable to recover it. 00:29:31.753 12:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:31.753 [2024-10-11 12:06:34.305918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.753 [2024-10-11 12:06:34.305949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.753 qpair failed and we were unable to recover it. 
00:29:31.753 [2024-10-11 12:06:34.306174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.753 [2024-10-11 12:06:34.306204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.753 qpair failed and we were unable to recover it. 00:29:31.753 [2024-10-11 12:06:34.306582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.753 [2024-10-11 12:06:34.306615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.753 qpair failed and we were unable to recover it. 00:29:31.753 [2024-10-11 12:06:34.306843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.753 [2024-10-11 12:06:34.306872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.753 qpair failed and we were unable to recover it. 00:29:31.753 [2024-10-11 12:06:34.307137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.753 [2024-10-11 12:06:34.307179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.753 qpair failed and we were unable to recover it. 00:29:31.753 [2024-10-11 12:06:34.307582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.753 [2024-10-11 12:06:34.307615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.753 qpair failed and we were unable to recover it. 00:29:31.753 [2024-10-11 12:06:34.307842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.753 [2024-10-11 12:06:34.307876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.753 qpair failed and we were unable to recover it. 00:29:31.753 [2024-10-11 12:06:34.308123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.753 [2024-10-11 12:06:34.308156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.753 qpair failed and we were unable to recover it. 00:29:31.753 [2024-10-11 12:06:34.308499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.753 [2024-10-11 12:06:34.308530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.753 qpair failed and we were unable to recover it. 00:29:31.753 [2024-10-11 12:06:34.308743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.753 [2024-10-11 12:06:34.308777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.753 qpair failed and we were unable to recover it. 00:29:31.753 [2024-10-11 12:06:34.309137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.753 [2024-10-11 12:06:34.309170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.753 qpair failed and we were unable to recover it. 
00:29:31.753 [2024-10-11 12:06:34.309534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.753 [2024-10-11 12:06:34.309568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.753 qpair failed and we were unable to recover it. 00:29:31.753 [2024-10-11 12:06:34.309741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.753 [2024-10-11 12:06:34.309774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.753 qpair failed and we were unable to recover it. 00:29:31.753 [2024-10-11 12:06:34.310146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.753 [2024-10-11 12:06:34.310181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.753 qpair failed and we were unable to recover it. 00:29:31.753 [2024-10-11 12:06:34.310556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.753 [2024-10-11 12:06:34.310589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.753 qpair failed and we were unable to recover it. 00:29:31.753 [2024-10-11 12:06:34.310791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.753 [2024-10-11 12:06:34.310824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.753 qpair failed and we were unable to recover it. 00:29:31.753 [2024-10-11 12:06:34.311194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.753 [2024-10-11 12:06:34.311228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.753 qpair failed and we were unable to recover it. 00:29:31.753 [2024-10-11 12:06:34.311448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.753 [2024-10-11 12:06:34.311480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.753 qpair failed and we were unable to recover it. 00:29:31.753 [2024-10-11 12:06:34.311728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.753 [2024-10-11 12:06:34.311762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.753 qpair failed and we were unable to recover it. 00:29:31.753 [2024-10-11 12:06:34.312101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.753 [2024-10-11 12:06:34.312136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.753 qpair failed and we were unable to recover it. 00:29:31.753 [2024-10-11 12:06:34.312369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.753 [2024-10-11 12:06:34.312404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.753 qpair failed and we were unable to recover it. 
00:29:31.753 [2024-10-11 12:06:34.312775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.753 [2024-10-11 12:06:34.312807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.753 qpair failed and we were unable to recover it. 00:29:31.753 [2024-10-11 12:06:34.313036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.753 [2024-10-11 12:06:34.313085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.753 qpair failed and we were unable to recover it. 00:29:31.753 [2024-10-11 12:06:34.313484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.753 [2024-10-11 12:06:34.313518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.753 qpair failed and we were unable to recover it. 00:29:31.753 [2024-10-11 12:06:34.313875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.753 [2024-10-11 12:06:34.313910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.753 qpair failed and we were unable to recover it. 00:29:31.753 [2024-10-11 12:06:34.314276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.753 [2024-10-11 12:06:34.314313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.753 qpair failed and we were unable to recover it. 00:29:31.753 [2024-10-11 12:06:34.314687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.753 [2024-10-11 12:06:34.314720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.753 qpair failed and we were unable to recover it. 00:29:31.753 [2024-10-11 12:06:34.314933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.753 [2024-10-11 12:06:34.314965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.753 qpair failed and we were unable to recover it. 00:29:31.753 [2024-10-11 12:06:34.315338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.753 [2024-10-11 12:06:34.315375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.753 qpair failed and we were unable to recover it. 00:29:31.753 [2024-10-11 12:06:34.315739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.753 [2024-10-11 12:06:34.315773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.753 qpair failed and we were unable to recover it. 00:29:31.753 [2024-10-11 12:06:34.316149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.753 [2024-10-11 12:06:34.316183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.753 qpair failed and we were unable to recover it. 
00:29:31.753 [2024-10-11 12:06:34.316571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.753 [2024-10-11 12:06:34.316606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.753 qpair failed and we were unable to recover it. 00:29:31.753 12:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.753 [2024-10-11 12:06:34.316966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.753 [2024-10-11 12:06:34.317001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.754 qpair failed and we were unable to recover it. 00:29:31.754 [2024-10-11 12:06:34.317253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.754 [2024-10-11 12:06:34.317288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.754 qpair failed and we were unable to recover it. 00:29:31.754 12:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:31.754 12:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.754 [2024-10-11 12:06:34.317640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.754 [2024-10-11 12:06:34.317674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.754 qpair failed and we were unable to recover it. 00:29:31.754 12:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:31.754 [2024-10-11 12:06:34.318031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.754 [2024-10-11 12:06:34.318102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.754 qpair failed and we were unable to recover it. 00:29:31.754 [2024-10-11 12:06:34.318446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.754 [2024-10-11 12:06:34.318480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.754 qpair failed and we were unable to recover it. 00:29:31.754 [2024-10-11 12:06:34.318850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.754 [2024-10-11 12:06:34.318884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.754 qpair failed and we were unable to recover it. 00:29:31.754 [2024-10-11 12:06:34.319281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.754 [2024-10-11 12:06:34.319316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.754 qpair failed and we were unable to recover it. 
00:29:31.754 [2024-10-11 12:06:34.319669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.754 [2024-10-11 12:06:34.319703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.754 qpair failed and we were unable to recover it. 00:29:31.754 [2024-10-11 12:06:34.320101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.754 [2024-10-11 12:06:34.320135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.754 qpair failed and we were unable to recover it. 00:29:31.754 [2024-10-11 12:06:34.320496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.754 [2024-10-11 12:06:34.320528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.754 qpair failed and we were unable to recover it. 00:29:31.754 [2024-10-11 12:06:34.320900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.754 [2024-10-11 12:06:34.320932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.754 qpair failed and we were unable to recover it. 00:29:31.754 [2024-10-11 12:06:34.321318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.754 [2024-10-11 12:06:34.321357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.754 qpair failed and we were unable to recover it. 00:29:31.754 [2024-10-11 12:06:34.321712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.754 [2024-10-11 12:06:34.321746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.754 qpair failed and we were unable to recover it. 00:29:31.754 [2024-10-11 12:06:34.321995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.754 [2024-10-11 12:06:34.322027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.754 qpair failed and we were unable to recover it. 00:29:31.754 [2024-10-11 12:06:34.322424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.754 [2024-10-11 12:06:34.322459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.754 qpair failed and we were unable to recover it. 00:29:31.754 [2024-10-11 12:06:34.322821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.754 [2024-10-11 12:06:34.322854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.754 qpair failed and we were unable to recover it. 00:29:31.754 [2024-10-11 12:06:34.323229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.754 [2024-10-11 12:06:34.323264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.754 qpair failed and we were unable to recover it. 
00:29:31.754 [2024-10-11 12:06:34.323503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.754 [2024-10-11 12:06:34.323535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.754 qpair failed and we were unable to recover it. 00:29:31.754 [2024-10-11 12:06:34.323889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.754 [2024-10-11 12:06:34.323924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc817c0 with addr=10.0.0.2, port=4420 00:29:31.754 qpair failed and we were unable to recover it. 00:29:31.754 [2024-10-11 12:06:34.324160] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:31.754 12:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.754 12:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:31.754 12:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.754 12:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:31.754 [2024-10-11 12:06:34.335047] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.754 [2024-10-11 12:06:34.335177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.754 [2024-10-11 12:06:34.335225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.754 [2024-10-11 12:06:34.335247] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.754 [2024-10-11 12:06:34.335266] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:31.754 [2024-10-11 12:06:34.335311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.754 qpair failed and we were unable to recover it. 
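Interleaved with the connect()-retry noise above, the test script finishes bringing up the target side: it creates subsystem nqn.2016-06.io.spdk:cnode1, attaches the Malloc0 namespace, and adds data and discovery listeners on 10.0.0.2 port 4420 (confirmed by the nvmf_tcp_listen *NOTICE* line). A minimal sketch of that sequence using SPDK's scripts/rpc.py follows; the nvmf_create_transport and bdev_malloc_create steps are assumptions, since they happen earlier in the script and are not visible in this excerpt.

#!/usr/bin/env bash
# Sketch of the target-side RPC sequence reflected in the log above.
# Assumes a running nvmf_tgt and an SPDK checkout providing scripts/rpc.py.
set -e
RPC=./scripts/rpc.py

$RPC nvmf_create_transport -t TCP                      # assumed: created earlier in the test, not shown here
$RPC bdev_malloc_create -b Malloc0 64 512              # assumed: 64 MiB backing bdev with 512 B blocks

$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420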
00:29:31.754 12:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.754 12:06:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2116727 00:29:31.754 [2024-10-11 12:06:34.344904] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.754 [2024-10-11 12:06:34.344982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.754 [2024-10-11 12:06:34.345013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.754 [2024-10-11 12:06:34.345027] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.754 [2024-10-11 12:06:34.345039] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:31.754 [2024-10-11 12:06:34.345074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.754 qpair failed and we were unable to recover it. 00:29:31.754 [2024-10-11 12:06:34.354857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.754 [2024-10-11 12:06:34.354932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.754 [2024-10-11 12:06:34.354954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.754 [2024-10-11 12:06:34.354963] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.754 [2024-10-11 12:06:34.354973] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:31.754 [2024-10-11 12:06:34.354992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.754 qpair failed and we were unable to recover it. 00:29:31.754 [2024-10-11 12:06:34.364922] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.754 [2024-10-11 12:06:34.365003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.754 [2024-10-11 12:06:34.365022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.754 [2024-10-11 12:06:34.365030] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.754 [2024-10-11 12:06:34.365037] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:31.754 [2024-10-11 12:06:34.365055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.754 qpair failed and we were unable to recover it. 
00:29:31.754 [2024-10-11 12:06:34.374903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.754 [2024-10-11 12:06:34.374978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.754 [2024-10-11 12:06:34.374996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.754 [2024-10-11 12:06:34.375004] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.754 [2024-10-11 12:06:34.375011] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:31.755 [2024-10-11 12:06:34.375028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.755 qpair failed and we were unable to recover it. 00:29:31.755 [2024-10-11 12:06:34.384861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.755 [2024-10-11 12:06:34.384928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.755 [2024-10-11 12:06:34.384952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.755 [2024-10-11 12:06:34.384961] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.755 [2024-10-11 12:06:34.384968] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:31.755 [2024-10-11 12:06:34.384986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.755 qpair failed and we were unable to recover it. 00:29:31.755 [2024-10-11 12:06:34.394894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.755 [2024-10-11 12:06:34.395015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.755 [2024-10-11 12:06:34.395034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.755 [2024-10-11 12:06:34.395043] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.755 [2024-10-11 12:06:34.395051] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:31.755 [2024-10-11 12:06:34.395072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.755 qpair failed and we were unable to recover it. 
00:29:31.755 [2024-10-11 12:06:34.404945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.755 [2024-10-11 12:06:34.405023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.755 [2024-10-11 12:06:34.405042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.755 [2024-10-11 12:06:34.405050] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.755 [2024-10-11 12:06:34.405057] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:31.755 [2024-10-11 12:06:34.405082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.755 qpair failed and we were unable to recover it. 00:29:31.755 [2024-10-11 12:06:34.415028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.755 [2024-10-11 12:06:34.415113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.755 [2024-10-11 12:06:34.415132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.755 [2024-10-11 12:06:34.415141] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.755 [2024-10-11 12:06:34.415148] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:31.755 [2024-10-11 12:06:34.415166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.755 qpair failed and we were unable to recover it. 00:29:31.755 [2024-10-11 12:06:34.425023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.755 [2024-10-11 12:06:34.425089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.755 [2024-10-11 12:06:34.425108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.755 [2024-10-11 12:06:34.425117] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.755 [2024-10-11 12:06:34.425124] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:31.755 [2024-10-11 12:06:34.425140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.755 qpair failed and we were unable to recover it. 
00:29:31.755 [2024-10-11 12:06:34.435041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.755 [2024-10-11 12:06:34.435113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.755 [2024-10-11 12:06:34.435132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.755 [2024-10-11 12:06:34.435140] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.755 [2024-10-11 12:06:34.435147] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:31.755 [2024-10-11 12:06:34.435164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.755 qpair failed and we were unable to recover it. 00:29:32.019 [2024-10-11 12:06:34.445026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.019 [2024-10-11 12:06:34.445103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.019 [2024-10-11 12:06:34.445124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.019 [2024-10-11 12:06:34.445132] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.019 [2024-10-11 12:06:34.445140] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.019 [2024-10-11 12:06:34.445157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.019 qpair failed and we were unable to recover it. 00:29:32.019 [2024-10-11 12:06:34.455126] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.019 [2024-10-11 12:06:34.455196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.019 [2024-10-11 12:06:34.455214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.019 [2024-10-11 12:06:34.455222] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.019 [2024-10-11 12:06:34.455229] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.019 [2024-10-11 12:06:34.455246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.019 qpair failed and we were unable to recover it. 
00:29:32.019 [2024-10-11 12:06:34.465109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.019 [2024-10-11 12:06:34.465185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.019 [2024-10-11 12:06:34.465204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.019 [2024-10-11 12:06:34.465212] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.019 [2024-10-11 12:06:34.465219] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.019 [2024-10-11 12:06:34.465236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.019 qpair failed and we were unable to recover it. 00:29:32.019 [2024-10-11 12:06:34.475029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.019 [2024-10-11 12:06:34.475103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.019 [2024-10-11 12:06:34.475126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.019 [2024-10-11 12:06:34.475135] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.019 [2024-10-11 12:06:34.475145] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.019 [2024-10-11 12:06:34.475162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.019 qpair failed and we were unable to recover it. 00:29:32.019 [2024-10-11 12:06:34.485176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.019 [2024-10-11 12:06:34.485241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.019 [2024-10-11 12:06:34.485260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.019 [2024-10-11 12:06:34.485269] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.019 [2024-10-11 12:06:34.485276] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.019 [2024-10-11 12:06:34.485292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.019 qpair failed and we were unable to recover it. 
00:29:32.019 [2024-10-11 12:06:34.495252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.019 [2024-10-11 12:06:34.495320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.019 [2024-10-11 12:06:34.495337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.019 [2024-10-11 12:06:34.495345] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.019 [2024-10-11 12:06:34.495353] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.019 [2024-10-11 12:06:34.495369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.019 qpair failed and we were unable to recover it. 00:29:32.019 [2024-10-11 12:06:34.505204] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.019 [2024-10-11 12:06:34.505275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.019 [2024-10-11 12:06:34.505293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.019 [2024-10-11 12:06:34.505301] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.019 [2024-10-11 12:06:34.505308] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.019 [2024-10-11 12:06:34.505323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.019 qpair failed and we were unable to recover it. 00:29:32.019 [2024-10-11 12:06:34.515370] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.019 [2024-10-11 12:06:34.515445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.019 [2024-10-11 12:06:34.515462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.019 [2024-10-11 12:06:34.515471] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.019 [2024-10-11 12:06:34.515478] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.019 [2024-10-11 12:06:34.515500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.019 qpair failed and we were unable to recover it. 
00:29:32.019 [2024-10-11 12:06:34.525342] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.019 [2024-10-11 12:06:34.525411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.019 [2024-10-11 12:06:34.525430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.019 [2024-10-11 12:06:34.525438] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.019 [2024-10-11 12:06:34.525445] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.019 [2024-10-11 12:06:34.525461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.019 qpair failed and we were unable to recover it. 00:29:32.019 [2024-10-11 12:06:34.535411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.019 [2024-10-11 12:06:34.535489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.019 [2024-10-11 12:06:34.535506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.019 [2024-10-11 12:06:34.535514] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.019 [2024-10-11 12:06:34.535521] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.019 [2024-10-11 12:06:34.535537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.019 qpair failed and we were unable to recover it. 00:29:32.019 [2024-10-11 12:06:34.545413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.019 [2024-10-11 12:06:34.545477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.019 [2024-10-11 12:06:34.545495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.019 [2024-10-11 12:06:34.545504] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.019 [2024-10-11 12:06:34.545511] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.019 [2024-10-11 12:06:34.545526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.019 qpair failed and we were unable to recover it. 
00:29:32.019 [2024-10-11 12:06:34.555380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.019 [2024-10-11 12:06:34.555446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.019 [2024-10-11 12:06:34.555463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.019 [2024-10-11 12:06:34.555472] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.019 [2024-10-11 12:06:34.555479] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.019 [2024-10-11 12:06:34.555496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.019 qpair failed and we were unable to recover it. 00:29:32.019 [2024-10-11 12:06:34.565429] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.019 [2024-10-11 12:06:34.565498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.019 [2024-10-11 12:06:34.565521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.019 [2024-10-11 12:06:34.565529] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.019 [2024-10-11 12:06:34.565536] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.019 [2024-10-11 12:06:34.565554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.019 qpair failed and we were unable to recover it. 00:29:32.019 [2024-10-11 12:06:34.575510] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.019 [2024-10-11 12:06:34.575577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.020 [2024-10-11 12:06:34.575602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.020 [2024-10-11 12:06:34.575614] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.020 [2024-10-11 12:06:34.575622] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.020 [2024-10-11 12:06:34.575643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.020 qpair failed and we were unable to recover it. 
00:29:32.020 [2024-10-11 12:06:34.585397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.020 [2024-10-11 12:06:34.585477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.020 [2024-10-11 12:06:34.585500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.020 [2024-10-11 12:06:34.585509] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.020 [2024-10-11 12:06:34.585518] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.020 [2024-10-11 12:06:34.585535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.020 qpair failed and we were unable to recover it. 00:29:32.020 [2024-10-11 12:06:34.595399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.020 [2024-10-11 12:06:34.595517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.020 [2024-10-11 12:06:34.595538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.020 [2024-10-11 12:06:34.595547] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.020 [2024-10-11 12:06:34.595555] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.020 [2024-10-11 12:06:34.595572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.020 qpair failed and we were unable to recover it. 00:29:32.020 [2024-10-11 12:06:34.605568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.020 [2024-10-11 12:06:34.605635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.020 [2024-10-11 12:06:34.605654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.020 [2024-10-11 12:06:34.605662] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.020 [2024-10-11 12:06:34.605669] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.020 [2024-10-11 12:06:34.605692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.020 qpair failed and we were unable to recover it. 
00:29:32.020 [2024-10-11 12:06:34.615579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.020 [2024-10-11 12:06:34.615648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.020 [2024-10-11 12:06:34.615666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.020 [2024-10-11 12:06:34.615674] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.020 [2024-10-11 12:06:34.615681] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.020 [2024-10-11 12:06:34.615698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.020 qpair failed and we were unable to recover it. 00:29:32.020 [2024-10-11 12:06:34.625605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.020 [2024-10-11 12:06:34.625670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.020 [2024-10-11 12:06:34.625688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.020 [2024-10-11 12:06:34.625697] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.020 [2024-10-11 12:06:34.625704] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.020 [2024-10-11 12:06:34.625721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.020 qpair failed and we were unable to recover it. 00:29:32.020 [2024-10-11 12:06:34.635647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.020 [2024-10-11 12:06:34.635717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.020 [2024-10-11 12:06:34.635735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.020 [2024-10-11 12:06:34.635743] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.020 [2024-10-11 12:06:34.635750] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.020 [2024-10-11 12:06:34.635766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.020 qpair failed and we were unable to recover it. 
00:29:32.020 [2024-10-11 12:06:34.645679] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.020 [2024-10-11 12:06:34.645752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.020 [2024-10-11 12:06:34.645769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.020 [2024-10-11 12:06:34.645778] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.020 [2024-10-11 12:06:34.645785] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.020 [2024-10-11 12:06:34.645801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.020 qpair failed and we were unable to recover it. 00:29:32.020 [2024-10-11 12:06:34.655780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.020 [2024-10-11 12:06:34.655861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.020 [2024-10-11 12:06:34.655888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.020 [2024-10-11 12:06:34.655897] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.020 [2024-10-11 12:06:34.655905] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.020 [2024-10-11 12:06:34.655921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.020 qpair failed and we were unable to recover it. 00:29:32.020 [2024-10-11 12:06:34.665711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.020 [2024-10-11 12:06:34.665778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.020 [2024-10-11 12:06:34.665796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.020 [2024-10-11 12:06:34.665804] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.020 [2024-10-11 12:06:34.665811] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.020 [2024-10-11 12:06:34.665827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.020 qpair failed and we were unable to recover it. 
00:29:32.020 [2024-10-11 12:06:34.675724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.020 [2024-10-11 12:06:34.675788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.020 [2024-10-11 12:06:34.675808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.020 [2024-10-11 12:06:34.675816] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.020 [2024-10-11 12:06:34.675823] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.020 [2024-10-11 12:06:34.675839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.020 qpair failed and we were unable to recover it. 00:29:32.020 [2024-10-11 12:06:34.685793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.020 [2024-10-11 12:06:34.685865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.020 [2024-10-11 12:06:34.685883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.020 [2024-10-11 12:06:34.685891] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.020 [2024-10-11 12:06:34.685898] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.020 [2024-10-11 12:06:34.685914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.020 qpair failed and we were unable to recover it. 00:29:32.020 [2024-10-11 12:06:34.695723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.020 [2024-10-11 12:06:34.695798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.020 [2024-10-11 12:06:34.695818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.020 [2024-10-11 12:06:34.695831] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.020 [2024-10-11 12:06:34.695838] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.020 [2024-10-11 12:06:34.695861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.020 qpair failed and we were unable to recover it. 
00:29:32.020 [2024-10-11 12:06:34.705851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.020 [2024-10-11 12:06:34.705930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.020 [2024-10-11 12:06:34.705948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.020 [2024-10-11 12:06:34.705956] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.020 [2024-10-11 12:06:34.705964] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.020 [2024-10-11 12:06:34.705980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.020 qpair failed and we were unable to recover it. 00:29:32.020 [2024-10-11 12:06:34.715922] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.020 [2024-10-11 12:06:34.716026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.020 [2024-10-11 12:06:34.716044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.020 [2024-10-11 12:06:34.716053] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.021 [2024-10-11 12:06:34.716060] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.021 [2024-10-11 12:06:34.716082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.021 qpair failed and we were unable to recover it. 00:29:32.284 [2024-10-11 12:06:34.725820] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.284 [2024-10-11 12:06:34.725899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.284 [2024-10-11 12:06:34.725918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.284 [2024-10-11 12:06:34.725926] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.284 [2024-10-11 12:06:34.725933] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.284 [2024-10-11 12:06:34.725950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.284 qpair failed and we were unable to recover it. 
00:29:32.284 [2024-10-11 12:06:34.735889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.284 [2024-10-11 12:06:34.735966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.284 [2024-10-11 12:06:34.735988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.284 [2024-10-11 12:06:34.735998] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.284 [2024-10-11 12:06:34.736005] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.284 [2024-10-11 12:06:34.736024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.284 qpair failed and we were unable to recover it. 00:29:32.284 [2024-10-11 12:06:34.746005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.284 [2024-10-11 12:06:34.746076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.284 [2024-10-11 12:06:34.746101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.284 [2024-10-11 12:06:34.746110] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.284 [2024-10-11 12:06:34.746117] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.284 [2024-10-11 12:06:34.746135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.284 qpair failed and we were unable to recover it. 00:29:32.284 [2024-10-11 12:06:34.755998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.284 [2024-10-11 12:06:34.756057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.284 [2024-10-11 12:06:34.756081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.284 [2024-10-11 12:06:34.756090] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.284 [2024-10-11 12:06:34.756097] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.284 [2024-10-11 12:06:34.756113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.284 qpair failed and we were unable to recover it. 
00:29:32.284 [2024-10-11 12:06:34.766092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.284 [2024-10-11 12:06:34.766163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.284 [2024-10-11 12:06:34.766185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.284 [2024-10-11 12:06:34.766193] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.284 [2024-10-11 12:06:34.766200] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.284 [2024-10-11 12:06:34.766218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.284 qpair failed and we were unable to recover it. 00:29:32.284 [2024-10-11 12:06:34.776097] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.284 [2024-10-11 12:06:34.776169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.284 [2024-10-11 12:06:34.776186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.284 [2024-10-11 12:06:34.776195] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.285 [2024-10-11 12:06:34.776202] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.285 [2024-10-11 12:06:34.776218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.285 qpair failed and we were unable to recover it. 00:29:32.285 [2024-10-11 12:06:34.786059] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.285 [2024-10-11 12:06:34.786134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.285 [2024-10-11 12:06:34.786152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.285 [2024-10-11 12:06:34.786160] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.285 [2024-10-11 12:06:34.786167] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.285 [2024-10-11 12:06:34.786189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.285 qpair failed and we were unable to recover it. 
00:29:32.285 [2024-10-11 12:06:34.795963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.285 [2024-10-11 12:06:34.796034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.285 [2024-10-11 12:06:34.796052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.285 [2024-10-11 12:06:34.796060] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.285 [2024-10-11 12:06:34.796071] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.285 [2024-10-11 12:06:34.796087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.285 qpair failed and we were unable to recover it. 00:29:32.285 [2024-10-11 12:06:34.806175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.285 [2024-10-11 12:06:34.806277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.285 [2024-10-11 12:06:34.806296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.285 [2024-10-11 12:06:34.806304] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.285 [2024-10-11 12:06:34.806311] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.285 [2024-10-11 12:06:34.806329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.285 qpair failed and we were unable to recover it. 00:29:32.285 [2024-10-11 12:06:34.816187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.285 [2024-10-11 12:06:34.816260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.285 [2024-10-11 12:06:34.816278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.285 [2024-10-11 12:06:34.816287] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.285 [2024-10-11 12:06:34.816294] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.285 [2024-10-11 12:06:34.816310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.285 qpair failed and we were unable to recover it. 
00:29:32.285 [2024-10-11 12:06:34.826180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.285 [2024-10-11 12:06:34.826253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.285 [2024-10-11 12:06:34.826272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.285 [2024-10-11 12:06:34.826281] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.285 [2024-10-11 12:06:34.826287] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.285 [2024-10-11 12:06:34.826303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.285 qpair failed and we were unable to recover it. 00:29:32.285 [2024-10-11 12:06:34.836148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.285 [2024-10-11 12:06:34.836210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.285 [2024-10-11 12:06:34.836234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.285 [2024-10-11 12:06:34.836242] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.285 [2024-10-11 12:06:34.836248] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.285 [2024-10-11 12:06:34.836264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.285 qpair failed and we were unable to recover it. 00:29:32.285 [2024-10-11 12:06:34.846233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.285 [2024-10-11 12:06:34.846338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.285 [2024-10-11 12:06:34.846357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.285 [2024-10-11 12:06:34.846365] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.285 [2024-10-11 12:06:34.846373] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.285 [2024-10-11 12:06:34.846391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.285 qpair failed and we were unable to recover it. 
00:29:32.285 [2024-10-11 12:06:34.856181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.285 [2024-10-11 12:06:34.856252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.285 [2024-10-11 12:06:34.856270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.285 [2024-10-11 12:06:34.856278] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.285 [2024-10-11 12:06:34.856286] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.285 [2024-10-11 12:06:34.856302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.285 qpair failed and we were unable to recover it. 00:29:32.285 [2024-10-11 12:06:34.866306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.285 [2024-10-11 12:06:34.866368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.285 [2024-10-11 12:06:34.866386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.285 [2024-10-11 12:06:34.866394] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.285 [2024-10-11 12:06:34.866401] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.285 [2024-10-11 12:06:34.866417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.285 qpair failed and we were unable to recover it. 00:29:32.285 [2024-10-11 12:06:34.876201] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.285 [2024-10-11 12:06:34.876261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.285 [2024-10-11 12:06:34.876279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.285 [2024-10-11 12:06:34.876287] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.285 [2024-10-11 12:06:34.876294] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.285 [2024-10-11 12:06:34.876316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.285 qpair failed and we were unable to recover it. 
00:29:32.285 [2024-10-11 12:06:34.886279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.285 [2024-10-11 12:06:34.886349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.285 [2024-10-11 12:06:34.886368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.285 [2024-10-11 12:06:34.886376] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.285 [2024-10-11 12:06:34.886383] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.285 [2024-10-11 12:06:34.886398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.285 qpair failed and we were unable to recover it. 00:29:32.285 [2024-10-11 12:06:34.896474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.285 [2024-10-11 12:06:34.896579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.285 [2024-10-11 12:06:34.896598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.285 [2024-10-11 12:06:34.896606] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.285 [2024-10-11 12:06:34.896613] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.285 [2024-10-11 12:06:34.896629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.285 qpair failed and we were unable to recover it. 00:29:32.285 [2024-10-11 12:06:34.906418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.285 [2024-10-11 12:06:34.906485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.285 [2024-10-11 12:06:34.906504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.285 [2024-10-11 12:06:34.906513] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.285 [2024-10-11 12:06:34.906520] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.285 [2024-10-11 12:06:34.906536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.285 qpair failed and we were unable to recover it. 
00:29:32.285 [2024-10-11 12:06:34.916433] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.285 [2024-10-11 12:06:34.916494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.285 [2024-10-11 12:06:34.916512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.285 [2024-10-11 12:06:34.916520] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.285 [2024-10-11 12:06:34.916528] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.285 [2024-10-11 12:06:34.916545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.286 qpair failed and we were unable to recover it. 00:29:32.286 [2024-10-11 12:06:34.926483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.286 [2024-10-11 12:06:34.926553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.286 [2024-10-11 12:06:34.926577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.286 [2024-10-11 12:06:34.926586] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.286 [2024-10-11 12:06:34.926593] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.286 [2024-10-11 12:06:34.926610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.286 qpair failed and we were unable to recover it. 00:29:32.286 [2024-10-11 12:06:34.936538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.286 [2024-10-11 12:06:34.936615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.286 [2024-10-11 12:06:34.936634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.286 [2024-10-11 12:06:34.936642] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.286 [2024-10-11 12:06:34.936649] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.286 [2024-10-11 12:06:34.936666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.286 qpair failed and we were unable to recover it. 
00:29:32.286 [2024-10-11 12:06:34.946517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.286 [2024-10-11 12:06:34.946584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.286 [2024-10-11 12:06:34.946607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.286 [2024-10-11 12:06:34.946618] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.286 [2024-10-11 12:06:34.946626] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.286 [2024-10-11 12:06:34.946645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.286 qpair failed and we were unable to recover it. 00:29:32.286 [2024-10-11 12:06:34.956541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.286 [2024-10-11 12:06:34.956607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.286 [2024-10-11 12:06:34.956629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.286 [2024-10-11 12:06:34.956638] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.286 [2024-10-11 12:06:34.956648] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.286 [2024-10-11 12:06:34.956668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.286 qpair failed and we were unable to recover it. 00:29:32.286 [2024-10-11 12:06:34.966616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.286 [2024-10-11 12:06:34.966693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.286 [2024-10-11 12:06:34.966713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.286 [2024-10-11 12:06:34.966722] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.286 [2024-10-11 12:06:34.966737] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.286 [2024-10-11 12:06:34.966756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.286 qpair failed and we were unable to recover it. 
00:29:32.286 [2024-10-11 12:06:34.976608] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.286 [2024-10-11 12:06:34.976678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.286 [2024-10-11 12:06:34.976697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.286 [2024-10-11 12:06:34.976706] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.286 [2024-10-11 12:06:34.976713] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.286 [2024-10-11 12:06:34.976729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.286 qpair failed and we were unable to recover it. 00:29:32.549 [2024-10-11 12:06:34.986634] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.549 [2024-10-11 12:06:34.986714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.549 [2024-10-11 12:06:34.986734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.549 [2024-10-11 12:06:34.986743] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.549 [2024-10-11 12:06:34.986752] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.549 [2024-10-11 12:06:34.986770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.549 qpair failed and we were unable to recover it. 00:29:32.549 [2024-10-11 12:06:34.996528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.549 [2024-10-11 12:06:34.996612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.549 [2024-10-11 12:06:34.996630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.549 [2024-10-11 12:06:34.996639] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.549 [2024-10-11 12:06:34.996648] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.549 [2024-10-11 12:06:34.996664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.549 qpair failed and we were unable to recover it. 
00:29:32.549 [2024-10-11 12:06:35.006736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.549 [2024-10-11 12:06:35.006817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.549 [2024-10-11 12:06:35.006836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.549 [2024-10-11 12:06:35.006844] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.549 [2024-10-11 12:06:35.006852] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.549 [2024-10-11 12:06:35.006869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.549 qpair failed and we were unable to recover it. 00:29:32.549 [2024-10-11 12:06:35.016789] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.549 [2024-10-11 12:06:35.016869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.549 [2024-10-11 12:06:35.016887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.549 [2024-10-11 12:06:35.016896] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.549 [2024-10-11 12:06:35.016903] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.549 [2024-10-11 12:06:35.016919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.549 qpair failed and we were unable to recover it. 00:29:32.549 [2024-10-11 12:06:35.026755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.549 [2024-10-11 12:06:35.026827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.549 [2024-10-11 12:06:35.026867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.549 [2024-10-11 12:06:35.026877] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.549 [2024-10-11 12:06:35.026884] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.549 [2024-10-11 12:06:35.026910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.549 qpair failed and we were unable to recover it. 
00:29:32.549 [2024-10-11 12:06:35.036680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.549 [2024-10-11 12:06:35.036743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.549 [2024-10-11 12:06:35.036767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.549 [2024-10-11 12:06:35.036775] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.549 [2024-10-11 12:06:35.036783] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.549 [2024-10-11 12:06:35.036801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.549 qpair failed and we were unable to recover it. 00:29:32.549 [2024-10-11 12:06:35.046847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.549 [2024-10-11 12:06:35.046961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.549 [2024-10-11 12:06:35.046987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.549 [2024-10-11 12:06:35.046997] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.549 [2024-10-11 12:06:35.047004] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.549 [2024-10-11 12:06:35.047023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.549 qpair failed and we were unable to recover it. 00:29:32.549 [2024-10-11 12:06:35.056885] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.549 [2024-10-11 12:06:35.056959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.549 [2024-10-11 12:06:35.056979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.549 [2024-10-11 12:06:35.056987] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.549 [2024-10-11 12:06:35.057001] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.549 [2024-10-11 12:06:35.057022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.549 qpair failed and we were unable to recover it. 
00:29:32.549 [2024-10-11 12:06:35.066861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.549 [2024-10-11 12:06:35.066931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.549 [2024-10-11 12:06:35.066950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.549 [2024-10-11 12:06:35.066959] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.549 [2024-10-11 12:06:35.066966] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.549 [2024-10-11 12:06:35.066984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.549 qpair failed and we were unable to recover it. 00:29:32.549 [2024-10-11 12:06:35.076886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.549 [2024-10-11 12:06:35.076967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.549 [2024-10-11 12:06:35.076986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.549 [2024-10-11 12:06:35.076995] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.549 [2024-10-11 12:06:35.077004] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.550 [2024-10-11 12:06:35.077022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.550 qpair failed and we were unable to recover it. 00:29:32.550 [2024-10-11 12:06:35.086827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.550 [2024-10-11 12:06:35.086897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.550 [2024-10-11 12:06:35.086916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.550 [2024-10-11 12:06:35.086924] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.550 [2024-10-11 12:06:35.086931] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.550 [2024-10-11 12:06:35.086948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.550 qpair failed and we were unable to recover it. 
00:29:32.550 [2024-10-11 12:06:35.097008] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.550 [2024-10-11 12:06:35.097115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.550 [2024-10-11 12:06:35.097135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.550 [2024-10-11 12:06:35.097144] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.550 [2024-10-11 12:06:35.097151] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.550 [2024-10-11 12:06:35.097168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.550 qpair failed and we were unable to recover it. 00:29:32.550 [2024-10-11 12:06:35.106876] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.550 [2024-10-11 12:06:35.106954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.550 [2024-10-11 12:06:35.106972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.550 [2024-10-11 12:06:35.106981] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.550 [2024-10-11 12:06:35.106988] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.550 [2024-10-11 12:06:35.107006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.550 qpair failed and we were unable to recover it. 00:29:32.550 [2024-10-11 12:06:35.117026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.550 [2024-10-11 12:06:35.117098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.550 [2024-10-11 12:06:35.117118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.550 [2024-10-11 12:06:35.117126] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.550 [2024-10-11 12:06:35.117134] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.550 [2024-10-11 12:06:35.117152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.550 qpair failed and we were unable to recover it. 
00:29:32.550 [2024-10-11 12:06:35.127081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.550 [2024-10-11 12:06:35.127160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.550 [2024-10-11 12:06:35.127179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.550 [2024-10-11 12:06:35.127188] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.550 [2024-10-11 12:06:35.127195] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.550 [2024-10-11 12:06:35.127212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.550 qpair failed and we were unable to recover it. 00:29:32.550 [2024-10-11 12:06:35.137128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.550 [2024-10-11 12:06:35.137201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.550 [2024-10-11 12:06:35.137220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.550 [2024-10-11 12:06:35.137228] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.550 [2024-10-11 12:06:35.137235] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.550 [2024-10-11 12:06:35.137252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.550 qpair failed and we were unable to recover it. 00:29:32.550 [2024-10-11 12:06:35.147126] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.550 [2024-10-11 12:06:35.147191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.550 [2024-10-11 12:06:35.147208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.550 [2024-10-11 12:06:35.147217] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.550 [2024-10-11 12:06:35.147230] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.550 [2024-10-11 12:06:35.147247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.550 qpair failed and we were unable to recover it. 
00:29:32.550 [2024-10-11 12:06:35.157149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.550 [2024-10-11 12:06:35.157207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.550 [2024-10-11 12:06:35.157224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.550 [2024-10-11 12:06:35.157232] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.550 [2024-10-11 12:06:35.157239] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.550 [2024-10-11 12:06:35.157255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.550 qpair failed and we were unable to recover it. 00:29:32.550 [2024-10-11 12:06:35.167088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.550 [2024-10-11 12:06:35.167153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.550 [2024-10-11 12:06:35.167172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.550 [2024-10-11 12:06:35.167181] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.550 [2024-10-11 12:06:35.167188] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.550 [2024-10-11 12:06:35.167205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.550 qpair failed and we were unable to recover it. 00:29:32.550 [2024-10-11 12:06:35.177257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.550 [2024-10-11 12:06:35.177336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.550 [2024-10-11 12:06:35.177356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.550 [2024-10-11 12:06:35.177364] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.550 [2024-10-11 12:06:35.177371] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.550 [2024-10-11 12:06:35.177388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.550 qpair failed and we were unable to recover it. 
00:29:32.550 [2024-10-11 12:06:35.187237] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.550 [2024-10-11 12:06:35.187303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.550 [2024-10-11 12:06:35.187321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.550 [2024-10-11 12:06:35.187330] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.550 [2024-10-11 12:06:35.187338] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.550 [2024-10-11 12:06:35.187355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.550 qpair failed and we were unable to recover it. 00:29:32.550 [2024-10-11 12:06:35.197260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.550 [2024-10-11 12:06:35.197330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.550 [2024-10-11 12:06:35.197348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.550 [2024-10-11 12:06:35.197357] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.550 [2024-10-11 12:06:35.197364] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.550 [2024-10-11 12:06:35.197380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.550 qpair failed and we were unable to recover it. 00:29:32.550 [2024-10-11 12:06:35.207304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.550 [2024-10-11 12:06:35.207409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.550 [2024-10-11 12:06:35.207429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.550 [2024-10-11 12:06:35.207438] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.550 [2024-10-11 12:06:35.207445] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.550 [2024-10-11 12:06:35.207463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.550 qpair failed and we were unable to recover it. 
00:29:32.550 [2024-10-11 12:06:35.217372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.550 [2024-10-11 12:06:35.217438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.550 [2024-10-11 12:06:35.217456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.550 [2024-10-11 12:06:35.217464] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.550 [2024-10-11 12:06:35.217472] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.550 [2024-10-11 12:06:35.217488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.550 qpair failed and we were unable to recover it. 00:29:32.551 [2024-10-11 12:06:35.227346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.551 [2024-10-11 12:06:35.227404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.551 [2024-10-11 12:06:35.227423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.551 [2024-10-11 12:06:35.227431] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.551 [2024-10-11 12:06:35.227438] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.551 [2024-10-11 12:06:35.227455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.551 qpair failed and we were unable to recover it. 00:29:32.551 [2024-10-11 12:06:35.237434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.551 [2024-10-11 12:06:35.237492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.551 [2024-10-11 12:06:35.237510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.551 [2024-10-11 12:06:35.237518] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.551 [2024-10-11 12:06:35.237531] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.551 [2024-10-11 12:06:35.237547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.551 qpair failed and we were unable to recover it. 
00:29:32.551 [2024-10-11 12:06:35.247454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.551 [2024-10-11 12:06:35.247569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.551 [2024-10-11 12:06:35.247589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.551 [2024-10-11 12:06:35.247598] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.551 [2024-10-11 12:06:35.247605] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.551 [2024-10-11 12:06:35.247623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.551 qpair failed and we were unable to recover it. 00:29:32.813 [2024-10-11 12:06:35.257469] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.813 [2024-10-11 12:06:35.257550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.813 [2024-10-11 12:06:35.257568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.813 [2024-10-11 12:06:35.257577] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.813 [2024-10-11 12:06:35.257584] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.813 [2024-10-11 12:06:35.257600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.813 qpair failed and we were unable to recover it. 00:29:32.813 [2024-10-11 12:06:35.267497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.813 [2024-10-11 12:06:35.267557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.814 [2024-10-11 12:06:35.267575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.814 [2024-10-11 12:06:35.267584] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.814 [2024-10-11 12:06:35.267591] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.814 [2024-10-11 12:06:35.267608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.814 qpair failed and we were unable to recover it. 
00:29:32.814 [2024-10-11 12:06:35.277517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.814 [2024-10-11 12:06:35.277584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.814 [2024-10-11 12:06:35.277603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.814 [2024-10-11 12:06:35.277612] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.814 [2024-10-11 12:06:35.277619] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.814 [2024-10-11 12:06:35.277635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.814 qpair failed and we were unable to recover it. 00:29:32.814 [2024-10-11 12:06:35.287572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.814 [2024-10-11 12:06:35.287644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.814 [2024-10-11 12:06:35.287662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.814 [2024-10-11 12:06:35.287671] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.814 [2024-10-11 12:06:35.287679] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.814 [2024-10-11 12:06:35.287695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.814 qpair failed and we were unable to recover it. 00:29:32.814 [2024-10-11 12:06:35.297594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.814 [2024-10-11 12:06:35.297680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.814 [2024-10-11 12:06:35.297699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.814 [2024-10-11 12:06:35.297707] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.814 [2024-10-11 12:06:35.297715] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.814 [2024-10-11 12:06:35.297732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.814 qpair failed and we were unable to recover it. 
00:29:32.814 [2024-10-11 12:06:35.307488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.814 [2024-10-11 12:06:35.307545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.814 [2024-10-11 12:06:35.307563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.814 [2024-10-11 12:06:35.307571] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.814 [2024-10-11 12:06:35.307579] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.814 [2024-10-11 12:06:35.307595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.814 qpair failed and we were unable to recover it. 00:29:32.814 [2024-10-11 12:06:35.317522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.814 [2024-10-11 12:06:35.317583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.814 [2024-10-11 12:06:35.317601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.814 [2024-10-11 12:06:35.317609] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.814 [2024-10-11 12:06:35.317617] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.814 [2024-10-11 12:06:35.317633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.814 qpair failed and we were unable to recover it. 00:29:32.814 [2024-10-11 12:06:35.327646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.814 [2024-10-11 12:06:35.327709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.814 [2024-10-11 12:06:35.327727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.814 [2024-10-11 12:06:35.327735] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.814 [2024-10-11 12:06:35.327749] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.814 [2024-10-11 12:06:35.327765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.814 qpair failed and we were unable to recover it. 
00:29:32.814 [2024-10-11 12:06:35.337754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.814 [2024-10-11 12:06:35.337834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.814 [2024-10-11 12:06:35.337852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.814 [2024-10-11 12:06:35.337860] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.814 [2024-10-11 12:06:35.337867] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.814 [2024-10-11 12:06:35.337884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.814 qpair failed and we were unable to recover it. 00:29:32.814 [2024-10-11 12:06:35.347636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.814 [2024-10-11 12:06:35.347721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.814 [2024-10-11 12:06:35.347761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.814 [2024-10-11 12:06:35.347772] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.814 [2024-10-11 12:06:35.347779] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.814 [2024-10-11 12:06:35.347804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.814 qpair failed and we were unable to recover it. 00:29:32.814 [2024-10-11 12:06:35.357775] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.814 [2024-10-11 12:06:35.357848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.814 [2024-10-11 12:06:35.357888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.814 [2024-10-11 12:06:35.357900] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.814 [2024-10-11 12:06:35.357907] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.814 [2024-10-11 12:06:35.357931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.814 qpair failed and we were unable to recover it. 
00:29:32.814 [2024-10-11 12:06:35.367800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.814 [2024-10-11 12:06:35.367867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.814 [2024-10-11 12:06:35.367888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.814 [2024-10-11 12:06:35.367896] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.814 [2024-10-11 12:06:35.367903] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.814 [2024-10-11 12:06:35.367921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.814 qpair failed and we were unable to recover it. 00:29:32.814 [2024-10-11 12:06:35.377862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.814 [2024-10-11 12:06:35.377945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.814 [2024-10-11 12:06:35.377965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.814 [2024-10-11 12:06:35.377973] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.814 [2024-10-11 12:06:35.377981] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.814 [2024-10-11 12:06:35.377998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.814 qpair failed and we were unable to recover it. 00:29:32.814 [2024-10-11 12:06:35.387755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.814 [2024-10-11 12:06:35.387819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.814 [2024-10-11 12:06:35.387842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.814 [2024-10-11 12:06:35.387851] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.814 [2024-10-11 12:06:35.387858] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.814 [2024-10-11 12:06:35.387879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.814 qpair failed and we were unable to recover it. 
00:29:32.814 [2024-10-11 12:06:35.397912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.814 [2024-10-11 12:06:35.397979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.814 [2024-10-11 12:06:35.398000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.814 [2024-10-11 12:06:35.398010] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.814 [2024-10-11 12:06:35.398019] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.814 [2024-10-11 12:06:35.398038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.814 qpair failed and we were unable to recover it. 00:29:32.814 [2024-10-11 12:06:35.407811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.814 [2024-10-11 12:06:35.407882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.815 [2024-10-11 12:06:35.407901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.815 [2024-10-11 12:06:35.407910] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.815 [2024-10-11 12:06:35.407919] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.815 [2024-10-11 12:06:35.407937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.815 qpair failed and we were unable to recover it. 00:29:32.815 [2024-10-11 12:06:35.417978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.815 [2024-10-11 12:06:35.418049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.815 [2024-10-11 12:06:35.418077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.815 [2024-10-11 12:06:35.418092] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.815 [2024-10-11 12:06:35.418100] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.815 [2024-10-11 12:06:35.418118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.815 qpair failed and we were unable to recover it. 
00:29:32.815 [2024-10-11 12:06:35.427981] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.815 [2024-10-11 12:06:35.428043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.815 [2024-10-11 12:06:35.428067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.815 [2024-10-11 12:06:35.428076] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.815 [2024-10-11 12:06:35.428083] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.815 [2024-10-11 12:06:35.428100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.815 qpair failed and we were unable to recover it. 00:29:32.815 [2024-10-11 12:06:35.437970] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.815 [2024-10-11 12:06:35.438035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.815 [2024-10-11 12:06:35.438054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.815 [2024-10-11 12:06:35.438067] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.815 [2024-10-11 12:06:35.438075] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.815 [2024-10-11 12:06:35.438092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.815 qpair failed and we were unable to recover it. 00:29:32.815 [2024-10-11 12:06:35.448054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.815 [2024-10-11 12:06:35.448133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.815 [2024-10-11 12:06:35.448151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.815 [2024-10-11 12:06:35.448159] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.815 [2024-10-11 12:06:35.448167] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.815 [2024-10-11 12:06:35.448183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.815 qpair failed and we were unable to recover it. 
00:29:32.815 [2024-10-11 12:06:35.457979] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.815 [2024-10-11 12:06:35.458055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.815 [2024-10-11 12:06:35.458081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.815 [2024-10-11 12:06:35.458091] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.815 [2024-10-11 12:06:35.458098] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.815 [2024-10-11 12:06:35.458114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.815 qpair failed and we were unable to recover it. 00:29:32.815 [2024-10-11 12:06:35.468131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.815 [2024-10-11 12:06:35.468192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.815 [2024-10-11 12:06:35.468211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.815 [2024-10-11 12:06:35.468219] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.815 [2024-10-11 12:06:35.468226] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.815 [2024-10-11 12:06:35.468242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.815 qpair failed and we were unable to recover it. 00:29:32.815 [2024-10-11 12:06:35.478128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.815 [2024-10-11 12:06:35.478194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.815 [2024-10-11 12:06:35.478212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.815 [2024-10-11 12:06:35.478221] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.815 [2024-10-11 12:06:35.478227] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.815 [2024-10-11 12:06:35.478243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.815 qpair failed and we were unable to recover it. 
00:29:32.815 [2024-10-11 12:06:35.488044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.815 [2024-10-11 12:06:35.488121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.815 [2024-10-11 12:06:35.488140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.815 [2024-10-11 12:06:35.488148] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.815 [2024-10-11 12:06:35.488155] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.815 [2024-10-11 12:06:35.488172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.815 qpair failed and we were unable to recover it. 00:29:32.815 [2024-10-11 12:06:35.498231] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.815 [2024-10-11 12:06:35.498298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.815 [2024-10-11 12:06:35.498316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.815 [2024-10-11 12:06:35.498325] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.815 [2024-10-11 12:06:35.498333] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.815 [2024-10-11 12:06:35.498349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.815 qpair failed and we were unable to recover it. 00:29:32.815 [2024-10-11 12:06:35.508199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.815 [2024-10-11 12:06:35.508257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.815 [2024-10-11 12:06:35.508275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.815 [2024-10-11 12:06:35.508295] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.815 [2024-10-11 12:06:35.508302] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:32.815 [2024-10-11 12:06:35.508319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.815 qpair failed and we were unable to recover it. 
00:29:33.079 [2024-10-11 12:06:35.518247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.079 [2024-10-11 12:06:35.518311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.079 [2024-10-11 12:06:35.518330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.079 [2024-10-11 12:06:35.518338] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.079 [2024-10-11 12:06:35.518345] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.079 [2024-10-11 12:06:35.518361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.079 qpair failed and we were unable to recover it. 00:29:33.079 [2024-10-11 12:06:35.528281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.079 [2024-10-11 12:06:35.528345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.079 [2024-10-11 12:06:35.528363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.079 [2024-10-11 12:06:35.528372] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.079 [2024-10-11 12:06:35.528379] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.079 [2024-10-11 12:06:35.528396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.079 qpair failed and we were unable to recover it. 00:29:33.079 [2024-10-11 12:06:35.538359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.079 [2024-10-11 12:06:35.538434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.079 [2024-10-11 12:06:35.538452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.079 [2024-10-11 12:06:35.538461] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.079 [2024-10-11 12:06:35.538468] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.079 [2024-10-11 12:06:35.538484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.079 qpair failed and we were unable to recover it. 
00:29:33.079 [2024-10-11 12:06:35.548330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.079 [2024-10-11 12:06:35.548390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.079 [2024-10-11 12:06:35.548408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.079 [2024-10-11 12:06:35.548416] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.079 [2024-10-11 12:06:35.548423] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.079 [2024-10-11 12:06:35.548442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.079 qpair failed and we were unable to recover it. 00:29:33.079 [2024-10-11 12:06:35.558423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.079 [2024-10-11 12:06:35.558534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.079 [2024-10-11 12:06:35.558554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.079 [2024-10-11 12:06:35.558562] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.079 [2024-10-11 12:06:35.558569] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.079 [2024-10-11 12:06:35.558585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.079 qpair failed and we were unable to recover it. 00:29:33.079 [2024-10-11 12:06:35.568429] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.079 [2024-10-11 12:06:35.568497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.079 [2024-10-11 12:06:35.568515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.079 [2024-10-11 12:06:35.568524] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.079 [2024-10-11 12:06:35.568531] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.079 [2024-10-11 12:06:35.568548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.079 qpair failed and we were unable to recover it. 
00:29:33.079 [2024-10-11 12:06:35.578357] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.079 [2024-10-11 12:06:35.578435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.079 [2024-10-11 12:06:35.578454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.079 [2024-10-11 12:06:35.578463] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.079 [2024-10-11 12:06:35.578470] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.079 [2024-10-11 12:06:35.578486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.079 qpair failed and we were unable to recover it. 00:29:33.079 [2024-10-11 12:06:35.588489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.079 [2024-10-11 12:06:35.588550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.079 [2024-10-11 12:06:35.588568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.079 [2024-10-11 12:06:35.588576] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.079 [2024-10-11 12:06:35.588583] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.079 [2024-10-11 12:06:35.588600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.079 qpair failed and we were unable to recover it. 00:29:33.079 [2024-10-11 12:06:35.598506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.079 [2024-10-11 12:06:35.598625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.079 [2024-10-11 12:06:35.598644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.079 [2024-10-11 12:06:35.598659] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.079 [2024-10-11 12:06:35.598666] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.079 [2024-10-11 12:06:35.598682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.079 qpair failed and we were unable to recover it. 
00:29:33.079 [2024-10-11 12:06:35.608598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.079 [2024-10-11 12:06:35.608663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.079 [2024-10-11 12:06:35.608681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.079 [2024-10-11 12:06:35.608690] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.079 [2024-10-11 12:06:35.608698] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.079 [2024-10-11 12:06:35.608715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.079 qpair failed and we were unable to recover it. 00:29:33.079 [2024-10-11 12:06:35.618605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.079 [2024-10-11 12:06:35.618683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.079 [2024-10-11 12:06:35.618702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.079 [2024-10-11 12:06:35.618711] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.079 [2024-10-11 12:06:35.618718] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.079 [2024-10-11 12:06:35.618735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.079 qpair failed and we were unable to recover it. 00:29:33.079 [2024-10-11 12:06:35.628610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.079 [2024-10-11 12:06:35.628707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.079 [2024-10-11 12:06:35.628726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.079 [2024-10-11 12:06:35.628735] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.079 [2024-10-11 12:06:35.628742] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.079 [2024-10-11 12:06:35.628758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.079 qpair failed and we were unable to recover it. 
00:29:33.079 [2024-10-11 12:06:35.638612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.079 [2024-10-11 12:06:35.638679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.079 [2024-10-11 12:06:35.638698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.079 [2024-10-11 12:06:35.638706] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.079 [2024-10-11 12:06:35.638713] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.079 [2024-10-11 12:06:35.638729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.079 qpair failed and we were unable to recover it. 00:29:33.079 [2024-10-11 12:06:35.648680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.080 [2024-10-11 12:06:35.648746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.080 [2024-10-11 12:06:35.648764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.080 [2024-10-11 12:06:35.648772] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.080 [2024-10-11 12:06:35.648780] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.080 [2024-10-11 12:06:35.648796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.080 qpair failed and we were unable to recover it. 00:29:33.080 [2024-10-11 12:06:35.658741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.080 [2024-10-11 12:06:35.658824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.080 [2024-10-11 12:06:35.658865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.080 [2024-10-11 12:06:35.658876] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.080 [2024-10-11 12:06:35.658884] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.080 [2024-10-11 12:06:35.658910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.080 qpair failed and we were unable to recover it. 
00:29:33.080 [2024-10-11 12:06:35.668737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.080 [2024-10-11 12:06:35.668807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.080 [2024-10-11 12:06:35.668828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.080 [2024-10-11 12:06:35.668836] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.080 [2024-10-11 12:06:35.668843] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.080 [2024-10-11 12:06:35.668863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.080 qpair failed and we were unable to recover it. 00:29:33.080 [2024-10-11 12:06:35.678651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.080 [2024-10-11 12:06:35.678719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.080 [2024-10-11 12:06:35.678739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.080 [2024-10-11 12:06:35.678747] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.080 [2024-10-11 12:06:35.678755] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.080 [2024-10-11 12:06:35.678772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.080 qpair failed and we were unable to recover it. 00:29:33.080 [2024-10-11 12:06:35.688830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.080 [2024-10-11 12:06:35.688899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.080 [2024-10-11 12:06:35.688918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.080 [2024-10-11 12:06:35.688933] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.080 [2024-10-11 12:06:35.688940] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.080 [2024-10-11 12:06:35.688957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.080 qpair failed and we were unable to recover it. 
00:29:33.080 [2024-10-11 12:06:35.698852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.080 [2024-10-11 12:06:35.698927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.080 [2024-10-11 12:06:35.698946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.080 [2024-10-11 12:06:35.698954] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.080 [2024-10-11 12:06:35.698961] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.080 [2024-10-11 12:06:35.698978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.080 qpair failed and we were unable to recover it. 00:29:33.080 [2024-10-11 12:06:35.708891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.080 [2024-10-11 12:06:35.708968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.080 [2024-10-11 12:06:35.708987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.080 [2024-10-11 12:06:35.708995] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.080 [2024-10-11 12:06:35.709002] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.080 [2024-10-11 12:06:35.709018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.080 qpair failed and we were unable to recover it. 00:29:33.080 [2024-10-11 12:06:35.718901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.080 [2024-10-11 12:06:35.718974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.080 [2024-10-11 12:06:35.719000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.080 [2024-10-11 12:06:35.719012] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.080 [2024-10-11 12:06:35.719021] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.080 [2024-10-11 12:06:35.719040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.080 qpair failed and we were unable to recover it. 
00:29:33.080 [2024-10-11 12:06:35.728838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.080 [2024-10-11 12:06:35.728914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.080 [2024-10-11 12:06:35.728936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.080 [2024-10-11 12:06:35.728945] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.080 [2024-10-11 12:06:35.728952] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.080 [2024-10-11 12:06:35.728972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.080 qpair failed and we were unable to recover it. 00:29:33.080 [2024-10-11 12:06:35.738875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.080 [2024-10-11 12:06:35.738971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.080 [2024-10-11 12:06:35.738994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.080 [2024-10-11 12:06:35.739007] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.080 [2024-10-11 12:06:35.739015] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.080 [2024-10-11 12:06:35.739033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.080 qpair failed and we were unable to recover it. 00:29:33.080 [2024-10-11 12:06:35.748959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.080 [2024-10-11 12:06:35.749034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.080 [2024-10-11 12:06:35.749053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.080 [2024-10-11 12:06:35.749066] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.080 [2024-10-11 12:06:35.749074] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.080 [2024-10-11 12:06:35.749090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.080 qpair failed and we were unable to recover it. 
00:29:33.080 [2024-10-11 12:06:35.758968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.080 [2024-10-11 12:06:35.759028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.080 [2024-10-11 12:06:35.759047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.080 [2024-10-11 12:06:35.759055] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.080 [2024-10-11 12:06:35.759069] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.080 [2024-10-11 12:06:35.759086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.080 qpair failed and we were unable to recover it. 00:29:33.080 [2024-10-11 12:06:35.768919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.080 [2024-10-11 12:06:35.768980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.080 [2024-10-11 12:06:35.768997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.080 [2024-10-11 12:06:35.769005] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.080 [2024-10-11 12:06:35.769012] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.080 [2024-10-11 12:06:35.769027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.080 qpair failed and we were unable to recover it. 00:29:33.080 [2024-10-11 12:06:35.779112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.080 [2024-10-11 12:06:35.779173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.080 [2024-10-11 12:06:35.779190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.080 [2024-10-11 12:06:35.779202] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.080 [2024-10-11 12:06:35.779209] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.080 [2024-10-11 12:06:35.779225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.081 qpair failed and we were unable to recover it. 
00:29:33.355 [2024-10-11 12:06:35.789072] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.355 [2024-10-11 12:06:35.789128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.355 [2024-10-11 12:06:35.789144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.355 [2024-10-11 12:06:35.789152] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.355 [2024-10-11 12:06:35.789159] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.355 [2024-10-11 12:06:35.789174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.355 qpair failed and we were unable to recover it. 00:29:33.355 [2024-10-11 12:06:35.799109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.355 [2024-10-11 12:06:35.799168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.355 [2024-10-11 12:06:35.799185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.355 [2024-10-11 12:06:35.799194] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.355 [2024-10-11 12:06:35.799200] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.355 [2024-10-11 12:06:35.799216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.355 qpair failed and we were unable to recover it. 00:29:33.355 [2024-10-11 12:06:35.809155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.355 [2024-10-11 12:06:35.809216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.355 [2024-10-11 12:06:35.809231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.355 [2024-10-11 12:06:35.809239] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.355 [2024-10-11 12:06:35.809246] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.355 [2024-10-11 12:06:35.809260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.355 qpair failed and we were unable to recover it. 
00:29:33.355 [2024-10-11 12:06:35.819133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.355 [2024-10-11 12:06:35.819190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.355 [2024-10-11 12:06:35.819205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.355 [2024-10-11 12:06:35.819213] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.355 [2024-10-11 12:06:35.819220] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.355 [2024-10-11 12:06:35.819234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.355 qpair failed and we were unable to recover it. 00:29:33.355 [2024-10-11 12:06:35.829137] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.355 [2024-10-11 12:06:35.829224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.355 [2024-10-11 12:06:35.829241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.355 [2024-10-11 12:06:35.829249] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.355 [2024-10-11 12:06:35.829256] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.355 [2024-10-11 12:06:35.829271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.355 qpair failed and we were unable to recover it. 00:29:33.355 [2024-10-11 12:06:35.839215] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.355 [2024-10-11 12:06:35.839307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.355 [2024-10-11 12:06:35.839322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.355 [2024-10-11 12:06:35.839330] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.355 [2024-10-11 12:06:35.839337] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.355 [2024-10-11 12:06:35.839351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.355 qpair failed and we were unable to recover it. 
00:29:33.355 [2024-10-11 12:06:35.849231] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.355 [2024-10-11 12:06:35.849288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.355 [2024-10-11 12:06:35.849303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.355 [2024-10-11 12:06:35.849310] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.355 [2024-10-11 12:06:35.849317] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.355 [2024-10-11 12:06:35.849331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.355 qpair failed and we were unable to recover it. 00:29:33.355 [2024-10-11 12:06:35.859262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.355 [2024-10-11 12:06:35.859343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.355 [2024-10-11 12:06:35.859357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.355 [2024-10-11 12:06:35.859365] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.355 [2024-10-11 12:06:35.859372] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.355 [2024-10-11 12:06:35.859386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.355 qpair failed and we were unable to recover it. 00:29:33.355 [2024-10-11 12:06:35.869180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.355 [2024-10-11 12:06:35.869236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.355 [2024-10-11 12:06:35.869251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.355 [2024-10-11 12:06:35.869262] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.355 [2024-10-11 12:06:35.869269] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.355 [2024-10-11 12:06:35.869283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.355 qpair failed and we were unable to recover it. 
00:29:33.355 [2024-10-11 12:06:35.879312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.355 [2024-10-11 12:06:35.879367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.355 [2024-10-11 12:06:35.879382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.355 [2024-10-11 12:06:35.879390] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.355 [2024-10-11 12:06:35.879397] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.355 [2024-10-11 12:06:35.879411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.355 qpair failed and we were unable to recover it. 00:29:33.355 [2024-10-11 12:06:35.889325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.355 [2024-10-11 12:06:35.889387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.355 [2024-10-11 12:06:35.889401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.355 [2024-10-11 12:06:35.889408] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.355 [2024-10-11 12:06:35.889415] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.355 [2024-10-11 12:06:35.889429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.355 qpair failed and we were unable to recover it. 00:29:33.355 [2024-10-11 12:06:35.899327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.355 [2024-10-11 12:06:35.899380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.355 [2024-10-11 12:06:35.899394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.355 [2024-10-11 12:06:35.899402] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.356 [2024-10-11 12:06:35.899409] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.356 [2024-10-11 12:06:35.899423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.356 qpair failed and we were unable to recover it. 
00:29:33.356 [2024-10-11 12:06:35.909245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.356 [2024-10-11 12:06:35.909295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.356 [2024-10-11 12:06:35.909310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.356 [2024-10-11 12:06:35.909318] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.356 [2024-10-11 12:06:35.909325] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.356 [2024-10-11 12:06:35.909340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.356 qpair failed and we were unable to recover it. 00:29:33.356 [2024-10-11 12:06:35.919434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.356 [2024-10-11 12:06:35.919487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.356 [2024-10-11 12:06:35.919502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.356 [2024-10-11 12:06:35.919510] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.356 [2024-10-11 12:06:35.919516] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.356 [2024-10-11 12:06:35.919530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.356 qpair failed and we were unable to recover it. 00:29:33.356 [2024-10-11 12:06:35.929443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.356 [2024-10-11 12:06:35.929498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.356 [2024-10-11 12:06:35.929512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.356 [2024-10-11 12:06:35.929520] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.356 [2024-10-11 12:06:35.929527] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.356 [2024-10-11 12:06:35.929540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.356 qpair failed and we were unable to recover it. 
00:29:33.356 [2024-10-11 12:06:35.939471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.356 [2024-10-11 12:06:35.939521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.356 [2024-10-11 12:06:35.939535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.356 [2024-10-11 12:06:35.939542] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.356 [2024-10-11 12:06:35.939549] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.356 [2024-10-11 12:06:35.939562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.356 qpair failed and we were unable to recover it. 00:29:33.356 [2024-10-11 12:06:35.949453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.356 [2024-10-11 12:06:35.949503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.356 [2024-10-11 12:06:35.949516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.356 [2024-10-11 12:06:35.949524] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.356 [2024-10-11 12:06:35.949530] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.356 [2024-10-11 12:06:35.949544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.356 qpair failed and we were unable to recover it. 00:29:33.356 [2024-10-11 12:06:35.959530] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.356 [2024-10-11 12:06:35.959580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.356 [2024-10-11 12:06:35.959597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.356 [2024-10-11 12:06:35.959606] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.356 [2024-10-11 12:06:35.959613] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.356 [2024-10-11 12:06:35.959627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.356 qpair failed and we were unable to recover it. 
00:29:33.356 [2024-10-11 12:06:35.969569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.356 [2024-10-11 12:06:35.969674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.356 [2024-10-11 12:06:35.969690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.356 [2024-10-11 12:06:35.969697] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.356 [2024-10-11 12:06:35.969705] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.356 [2024-10-11 12:06:35.969721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.356 qpair failed and we were unable to recover it. 00:29:33.356 [2024-10-11 12:06:35.979552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.356 [2024-10-11 12:06:35.979600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.356 [2024-10-11 12:06:35.979615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.356 [2024-10-11 12:06:35.979622] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.356 [2024-10-11 12:06:35.979629] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.356 [2024-10-11 12:06:35.979642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.356 qpair failed and we were unable to recover it. 00:29:33.356 [2024-10-11 12:06:35.989579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.356 [2024-10-11 12:06:35.989632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.356 [2024-10-11 12:06:35.989646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.356 [2024-10-11 12:06:35.989654] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.356 [2024-10-11 12:06:35.989660] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.356 [2024-10-11 12:06:35.989674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.356 qpair failed and we were unable to recover it. 
00:29:33.356 [2024-10-11 12:06:35.999634] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.356 [2024-10-11 12:06:35.999689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.356 [2024-10-11 12:06:35.999702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.356 [2024-10-11 12:06:35.999710] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.356 [2024-10-11 12:06:35.999716] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.356 [2024-10-11 12:06:35.999729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.356 qpair failed and we were unable to recover it. 00:29:33.356 [2024-10-11 12:06:36.009660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.356 [2024-10-11 12:06:36.009718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.356 [2024-10-11 12:06:36.009732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.356 [2024-10-11 12:06:36.009740] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.356 [2024-10-11 12:06:36.009746] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.356 [2024-10-11 12:06:36.009760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.356 qpair failed and we were unable to recover it. 00:29:33.356 [2024-10-11 12:06:36.019622] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.356 [2024-10-11 12:06:36.019673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.356 [2024-10-11 12:06:36.019687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.356 [2024-10-11 12:06:36.019695] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.356 [2024-10-11 12:06:36.019701] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.356 [2024-10-11 12:06:36.019715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.356 qpair failed and we were unable to recover it. 
00:29:33.356 [2024-10-11 12:06:36.029671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.356 [2024-10-11 12:06:36.029764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.356 [2024-10-11 12:06:36.029778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.356 [2024-10-11 12:06:36.029786] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.356 [2024-10-11 12:06:36.029792] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.356 [2024-10-11 12:06:36.029806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.356 qpair failed and we were unable to recover it. 00:29:33.356 [2024-10-11 12:06:36.039622] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.357 [2024-10-11 12:06:36.039678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.357 [2024-10-11 12:06:36.039691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.357 [2024-10-11 12:06:36.039699] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.357 [2024-10-11 12:06:36.039705] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.357 [2024-10-11 12:06:36.039718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.357 qpair failed and we were unable to recover it. 00:29:33.357 [2024-10-11 12:06:36.049687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.357 [2024-10-11 12:06:36.049749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.357 [2024-10-11 12:06:36.049768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.357 [2024-10-11 12:06:36.049776] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.357 [2024-10-11 12:06:36.049783] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.357 [2024-10-11 12:06:36.049797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.357 qpair failed and we were unable to recover it. 
00:29:33.618 [2024-10-11 12:06:36.059790] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.618 [2024-10-11 12:06:36.059851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.618 [2024-10-11 12:06:36.059877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.618 [2024-10-11 12:06:36.059886] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.618 [2024-10-11 12:06:36.059893] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.618 [2024-10-11 12:06:36.059912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.618 qpair failed and we were unable to recover it. 00:29:33.618 [2024-10-11 12:06:36.069812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.618 [2024-10-11 12:06:36.069863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.618 [2024-10-11 12:06:36.069878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.618 [2024-10-11 12:06:36.069885] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.618 [2024-10-11 12:06:36.069892] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.618 [2024-10-11 12:06:36.069907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.618 qpair failed and we were unable to recover it. 00:29:33.618 [2024-10-11 12:06:36.079862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.618 [2024-10-11 12:06:36.079963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.618 [2024-10-11 12:06:36.079978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.618 [2024-10-11 12:06:36.079986] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.618 [2024-10-11 12:06:36.079993] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.618 [2024-10-11 12:06:36.080007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.618 qpair failed and we were unable to recover it. 
00:29:33.618 [2024-10-11 12:06:36.089904] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.618 [2024-10-11 12:06:36.089959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.618 [2024-10-11 12:06:36.089972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.618 [2024-10-11 12:06:36.089980] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.618 [2024-10-11 12:06:36.089986] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.618 [2024-10-11 12:06:36.090000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.618 qpair failed and we were unable to recover it. 00:29:33.618 [2024-10-11 12:06:36.099774] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.618 [2024-10-11 12:06:36.099822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.618 [2024-10-11 12:06:36.099835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.618 [2024-10-11 12:06:36.099844] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.618 [2024-10-11 12:06:36.099851] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.618 [2024-10-11 12:06:36.099865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.618 qpair failed and we were unable to recover it. 00:29:33.618 [2024-10-11 12:06:36.109928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.618 [2024-10-11 12:06:36.110006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.618 [2024-10-11 12:06:36.110019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.618 [2024-10-11 12:06:36.110027] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.618 [2024-10-11 12:06:36.110034] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.618 [2024-10-11 12:06:36.110047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.618 qpair failed and we were unable to recover it. 
00:29:33.618 [2024-10-11 12:06:36.119994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.618 [2024-10-11 12:06:36.120088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.618 [2024-10-11 12:06:36.120103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.618 [2024-10-11 12:06:36.120110] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.618 [2024-10-11 12:06:36.120117] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.618 [2024-10-11 12:06:36.120132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.618 qpair failed and we were unable to recover it. 00:29:33.618 [2024-10-11 12:06:36.130009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.618 [2024-10-11 12:06:36.130069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.618 [2024-10-11 12:06:36.130083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.618 [2024-10-11 12:06:36.130091] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.618 [2024-10-11 12:06:36.130097] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.619 [2024-10-11 12:06:36.130111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.619 qpair failed and we were unable to recover it. 00:29:33.619 [2024-10-11 12:06:36.139914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.619 [2024-10-11 12:06:36.139975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.619 [2024-10-11 12:06:36.139993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.619 [2024-10-11 12:06:36.140001] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.619 [2024-10-11 12:06:36.140008] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.619 [2024-10-11 12:06:36.140022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.619 qpair failed and we were unable to recover it. 
00:29:33.619 [2024-10-11 12:06:36.150023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.619 [2024-10-11 12:06:36.150074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.619 [2024-10-11 12:06:36.150089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.619 [2024-10-11 12:06:36.150097] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.619 [2024-10-11 12:06:36.150103] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.619 [2024-10-11 12:06:36.150118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.619 qpair failed and we were unable to recover it. 00:29:33.619 [2024-10-11 12:06:36.160055] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.619 [2024-10-11 12:06:36.160101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.619 [2024-10-11 12:06:36.160115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.619 [2024-10-11 12:06:36.160123] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.619 [2024-10-11 12:06:36.160129] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.619 [2024-10-11 12:06:36.160143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.619 qpair failed and we were unable to recover it. 00:29:33.619 [2024-10-11 12:06:36.170140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.619 [2024-10-11 12:06:36.170192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.619 [2024-10-11 12:06:36.170205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.619 [2024-10-11 12:06:36.170213] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.619 [2024-10-11 12:06:36.170219] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.619 [2024-10-11 12:06:36.170233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.619 qpair failed and we were unable to recover it. 
00:29:33.619 [2024-10-11 12:06:36.180133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.619 [2024-10-11 12:06:36.180209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.619 [2024-10-11 12:06:36.180223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.619 [2024-10-11 12:06:36.180231] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.619 [2024-10-11 12:06:36.180237] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.619 [2024-10-11 12:06:36.180259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.619 qpair failed and we were unable to recover it. 00:29:33.619 [2024-10-11 12:06:36.190134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.619 [2024-10-11 12:06:36.190233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.619 [2024-10-11 12:06:36.190247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.619 [2024-10-11 12:06:36.190255] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.619 [2024-10-11 12:06:36.190262] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.619 [2024-10-11 12:06:36.190276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.619 qpair failed and we were unable to recover it. 00:29:33.619 [2024-10-11 12:06:36.200089] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.619 [2024-10-11 12:06:36.200181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.619 [2024-10-11 12:06:36.200196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.619 [2024-10-11 12:06:36.200203] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.619 [2024-10-11 12:06:36.200210] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.619 [2024-10-11 12:06:36.200224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.619 qpair failed and we were unable to recover it. 
00:29:33.619 [2024-10-11 12:06:36.210248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.619 [2024-10-11 12:06:36.210300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.619 [2024-10-11 12:06:36.210313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.619 [2024-10-11 12:06:36.210320] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.619 [2024-10-11 12:06:36.210327] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.619 [2024-10-11 12:06:36.210341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.619 qpair failed and we were unable to recover it. 00:29:33.619 [2024-10-11 12:06:36.220159] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.619 [2024-10-11 12:06:36.220213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.619 [2024-10-11 12:06:36.220227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.619 [2024-10-11 12:06:36.220235] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.619 [2024-10-11 12:06:36.220241] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.619 [2024-10-11 12:06:36.220255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.619 qpair failed and we were unable to recover it. 00:29:33.619 [2024-10-11 12:06:36.230257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.619 [2024-10-11 12:06:36.230308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.619 [2024-10-11 12:06:36.230325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.619 [2024-10-11 12:06:36.230332] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.619 [2024-10-11 12:06:36.230339] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.619 [2024-10-11 12:06:36.230353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.619 qpair failed and we were unable to recover it. 
00:29:33.619 [2024-10-11 12:06:36.240280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.619 [2024-10-11 12:06:36.240328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.619 [2024-10-11 12:06:36.240341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.619 [2024-10-11 12:06:36.240349] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.619 [2024-10-11 12:06:36.240356] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.619 [2024-10-11 12:06:36.240369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.619 qpair failed and we were unable to recover it. 00:29:33.619 [2024-10-11 12:06:36.250387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.619 [2024-10-11 12:06:36.250455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.619 [2024-10-11 12:06:36.250469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.619 [2024-10-11 12:06:36.250477] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.619 [2024-10-11 12:06:36.250483] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.619 [2024-10-11 12:06:36.250497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.619 qpair failed and we were unable to recover it. 00:29:33.619 [2024-10-11 12:06:36.260365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.619 [2024-10-11 12:06:36.260413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.619 [2024-10-11 12:06:36.260427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.619 [2024-10-11 12:06:36.260434] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.619 [2024-10-11 12:06:36.260441] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.619 [2024-10-11 12:06:36.260454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.619 qpair failed and we were unable to recover it. 
00:29:33.619 [2024-10-11 12:06:36.270243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.619 [2024-10-11 12:06:36.270293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.619 [2024-10-11 12:06:36.270308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.619 [2024-10-11 12:06:36.270316] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.619 [2024-10-11 12:06:36.270323] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.619 [2024-10-11 12:06:36.270341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.619 qpair failed and we were unable to recover it. 00:29:33.619 [2024-10-11 12:06:36.280401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.620 [2024-10-11 12:06:36.280448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.620 [2024-10-11 12:06:36.280462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.620 [2024-10-11 12:06:36.280469] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.620 [2024-10-11 12:06:36.280476] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.620 [2024-10-11 12:06:36.280489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.620 qpair failed and we were unable to recover it. 00:29:33.620 [2024-10-11 12:06:36.290457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.620 [2024-10-11 12:06:36.290509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.620 [2024-10-11 12:06:36.290522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.620 [2024-10-11 12:06:36.290530] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.620 [2024-10-11 12:06:36.290536] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.620 [2024-10-11 12:06:36.290550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.620 qpair failed and we were unable to recover it. 
00:29:33.620 [2024-10-11 12:06:36.300467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.620 [2024-10-11 12:06:36.300518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.620 [2024-10-11 12:06:36.300531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.620 [2024-10-11 12:06:36.300539] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.620 [2024-10-11 12:06:36.300545] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.620 [2024-10-11 12:06:36.300558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.620 qpair failed and we were unable to recover it. 00:29:33.620 [2024-10-11 12:06:36.310378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.620 [2024-10-11 12:06:36.310441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.620 [2024-10-11 12:06:36.310455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.620 [2024-10-11 12:06:36.310462] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.620 [2024-10-11 12:06:36.310468] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.620 [2024-10-11 12:06:36.310482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.620 qpair failed and we were unable to recover it. 00:29:33.620 [2024-10-11 12:06:36.320473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.620 [2024-10-11 12:06:36.320515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.620 [2024-10-11 12:06:36.320532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.620 [2024-10-11 12:06:36.320539] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.620 [2024-10-11 12:06:36.320546] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.620 [2024-10-11 12:06:36.320559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.620 qpair failed and we were unable to recover it. 
00:29:33.883 [2024-10-11 12:06:36.330541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.883 [2024-10-11 12:06:36.330627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.883 [2024-10-11 12:06:36.330640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.883 [2024-10-11 12:06:36.330649] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.883 [2024-10-11 12:06:36.330656] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.883 [2024-10-11 12:06:36.330670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.883 qpair failed and we were unable to recover it. 00:29:33.883 [2024-10-11 12:06:36.340569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.883 [2024-10-11 12:06:36.340616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.883 [2024-10-11 12:06:36.340630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.883 [2024-10-11 12:06:36.340637] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.883 [2024-10-11 12:06:36.340643] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.883 [2024-10-11 12:06:36.340657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.883 qpair failed and we were unable to recover it. 00:29:33.883 [2024-10-11 12:06:36.350577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.883 [2024-10-11 12:06:36.350641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.883 [2024-10-11 12:06:36.350654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.883 [2024-10-11 12:06:36.350662] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.883 [2024-10-11 12:06:36.350668] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.883 [2024-10-11 12:06:36.350682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.883 qpair failed and we were unable to recover it. 
00:29:33.883 [2024-10-11 12:06:36.360604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.883 [2024-10-11 12:06:36.360653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.883 [2024-10-11 12:06:36.360667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.883 [2024-10-11 12:06:36.360674] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.883 [2024-10-11 12:06:36.360681] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.883 [2024-10-11 12:06:36.360698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.883 qpair failed and we were unable to recover it. 00:29:33.883 [2024-10-11 12:06:36.370677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.883 [2024-10-11 12:06:36.370730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.883 [2024-10-11 12:06:36.370743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.883 [2024-10-11 12:06:36.370751] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.883 [2024-10-11 12:06:36.370757] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.883 [2024-10-11 12:06:36.370773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.883 qpair failed and we were unable to recover it. 00:29:33.883 [2024-10-11 12:06:36.380674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.883 [2024-10-11 12:06:36.380724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.883 [2024-10-11 12:06:36.380738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.883 [2024-10-11 12:06:36.380745] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.883 [2024-10-11 12:06:36.380752] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.883 [2024-10-11 12:06:36.380765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.883 qpair failed and we were unable to recover it. 
00:29:33.883 [2024-10-11 12:06:36.390691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.883 [2024-10-11 12:06:36.390738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.883 [2024-10-11 12:06:36.390751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.883 [2024-10-11 12:06:36.390758] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.883 [2024-10-11 12:06:36.390765] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.883 [2024-10-11 12:06:36.390779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.883 qpair failed and we were unable to recover it. 00:29:33.883 [2024-10-11 12:06:36.400681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.883 [2024-10-11 12:06:36.400727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.883 [2024-10-11 12:06:36.400740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.883 [2024-10-11 12:06:36.400747] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.883 [2024-10-11 12:06:36.400754] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.883 [2024-10-11 12:06:36.400767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.883 qpair failed and we were unable to recover it. 00:29:33.883 [2024-10-11 12:06:36.410785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.883 [2024-10-11 12:06:36.410842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.883 [2024-10-11 12:06:36.410858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.883 [2024-10-11 12:06:36.410866] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.883 [2024-10-11 12:06:36.410872] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.883 [2024-10-11 12:06:36.410886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.883 qpair failed and we were unable to recover it. 
00:29:33.883 [2024-10-11 12:06:36.420778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.883 [2024-10-11 12:06:36.420832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.883 [2024-10-11 12:06:36.420846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.883 [2024-10-11 12:06:36.420853] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.883 [2024-10-11 12:06:36.420860] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.883 [2024-10-11 12:06:36.420874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.883 qpair failed and we were unable to recover it. 00:29:33.883 [2024-10-11 12:06:36.430797] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.883 [2024-10-11 12:06:36.430845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.883 [2024-10-11 12:06:36.430859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.883 [2024-10-11 12:06:36.430866] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.883 [2024-10-11 12:06:36.430873] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.883 [2024-10-11 12:06:36.430886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.883 qpair failed and we were unable to recover it. 00:29:33.883 [2024-10-11 12:06:36.440834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.883 [2024-10-11 12:06:36.440922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.883 [2024-10-11 12:06:36.440935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.883 [2024-10-11 12:06:36.440943] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.883 [2024-10-11 12:06:36.440949] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.883 [2024-10-11 12:06:36.440962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.883 qpair failed and we were unable to recover it. 
00:29:33.883 [2024-10-11 12:06:36.450886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.883 [2024-10-11 12:06:36.450940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.883 [2024-10-11 12:06:36.450954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.883 [2024-10-11 12:06:36.450961] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.883 [2024-10-11 12:06:36.450968] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.883 [2024-10-11 12:06:36.450984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.883 qpair failed and we were unable to recover it. 00:29:33.883 [2024-10-11 12:06:36.460906] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.883 [2024-10-11 12:06:36.460956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.883 [2024-10-11 12:06:36.460969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.883 [2024-10-11 12:06:36.460977] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.883 [2024-10-11 12:06:36.460984] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.883 [2024-10-11 12:06:36.460997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.883 qpair failed and we were unable to recover it. 00:29:33.884 [2024-10-11 12:06:36.470896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.884 [2024-10-11 12:06:36.470940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.884 [2024-10-11 12:06:36.470954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.884 [2024-10-11 12:06:36.470962] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.884 [2024-10-11 12:06:36.470968] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.884 [2024-10-11 12:06:36.470982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.884 qpair failed and we were unable to recover it. 
00:29:33.884 [2024-10-11 12:06:36.480944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.884 [2024-10-11 12:06:36.480990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.884 [2024-10-11 12:06:36.481004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.884 [2024-10-11 12:06:36.481012] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.884 [2024-10-11 12:06:36.481018] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.884 [2024-10-11 12:06:36.481032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.884 qpair failed and we were unable to recover it. 00:29:33.884 [2024-10-11 12:06:36.491023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.884 [2024-10-11 12:06:36.491084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.884 [2024-10-11 12:06:36.491098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.884 [2024-10-11 12:06:36.491105] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.884 [2024-10-11 12:06:36.491112] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.884 [2024-10-11 12:06:36.491125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.884 qpair failed and we were unable to recover it. 00:29:33.884 [2024-10-11 12:06:36.500984] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.884 [2024-10-11 12:06:36.501040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.884 [2024-10-11 12:06:36.501057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.884 [2024-10-11 12:06:36.501068] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.884 [2024-10-11 12:06:36.501075] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.884 [2024-10-11 12:06:36.501089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.884 qpair failed and we were unable to recover it. 
00:29:33.884 [2024-10-11 12:06:36.510997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.884 [2024-10-11 12:06:36.511046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.884 [2024-10-11 12:06:36.511060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.884 [2024-10-11 12:06:36.511071] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.884 [2024-10-11 12:06:36.511078] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.884 [2024-10-11 12:06:36.511091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.884 qpair failed and we were unable to recover it. 00:29:33.884 [2024-10-11 12:06:36.521023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.884 [2024-10-11 12:06:36.521076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.884 [2024-10-11 12:06:36.521090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.884 [2024-10-11 12:06:36.521097] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.884 [2024-10-11 12:06:36.521104] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.884 [2024-10-11 12:06:36.521117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.884 qpair failed and we were unable to recover it. 00:29:33.884 [2024-10-11 12:06:36.531229] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.884 [2024-10-11 12:06:36.531297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.884 [2024-10-11 12:06:36.531311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.884 [2024-10-11 12:06:36.531318] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.884 [2024-10-11 12:06:36.531325] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.884 [2024-10-11 12:06:36.531338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.884 qpair failed and we were unable to recover it. 
00:29:33.884 [2024-10-11 12:06:36.541160] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.884 [2024-10-11 12:06:36.541211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.884 [2024-10-11 12:06:36.541225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.884 [2024-10-11 12:06:36.541233] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.884 [2024-10-11 12:06:36.541239] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.884 [2024-10-11 12:06:36.541256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.884 qpair failed and we were unable to recover it. 00:29:33.884 [2024-10-11 12:06:36.551154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.884 [2024-10-11 12:06:36.551203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.884 [2024-10-11 12:06:36.551216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.884 [2024-10-11 12:06:36.551224] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.884 [2024-10-11 12:06:36.551230] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.884 [2024-10-11 12:06:36.551244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.884 qpair failed and we were unable to recover it. 00:29:33.884 [2024-10-11 12:06:36.561069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.884 [2024-10-11 12:06:36.561120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.884 [2024-10-11 12:06:36.561133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.884 [2024-10-11 12:06:36.561140] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.884 [2024-10-11 12:06:36.561147] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.884 [2024-10-11 12:06:36.561161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.884 qpair failed and we were unable to recover it. 
00:29:33.884 [2024-10-11 12:06:36.571161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.884 [2024-10-11 12:06:36.571207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.884 [2024-10-11 12:06:36.571220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.884 [2024-10-11 12:06:36.571228] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.884 [2024-10-11 12:06:36.571234] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.884 [2024-10-11 12:06:36.571248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.884 qpair failed and we were unable to recover it. 00:29:33.884 [2024-10-11 12:06:36.581199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.884 [2024-10-11 12:06:36.581249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.884 [2024-10-11 12:06:36.581263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.884 [2024-10-11 12:06:36.581270] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.884 [2024-10-11 12:06:36.581277] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:33.884 [2024-10-11 12:06:36.581290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.884 qpair failed and we were unable to recover it. 00:29:34.147 [2024-10-11 12:06:36.591229] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.147 [2024-10-11 12:06:36.591276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.147 [2024-10-11 12:06:36.591292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.147 [2024-10-11 12:06:36.591300] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.147 [2024-10-11 12:06:36.591306] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.147 [2024-10-11 12:06:36.591320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.147 qpair failed and we were unable to recover it. 
00:29:34.147 [2024-10-11 12:06:36.601122] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.147 [2024-10-11 12:06:36.601169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.147 [2024-10-11 12:06:36.601182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.147 [2024-10-11 12:06:36.601189] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.147 [2024-10-11 12:06:36.601196] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.147 [2024-10-11 12:06:36.601210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.147 qpair failed and we were unable to recover it. 00:29:34.147 [2024-10-11 12:06:36.611299] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.147 [2024-10-11 12:06:36.611345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.147 [2024-10-11 12:06:36.611359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.147 [2024-10-11 12:06:36.611366] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.147 [2024-10-11 12:06:36.611373] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.147 [2024-10-11 12:06:36.611386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.147 qpair failed and we were unable to recover it. 00:29:34.147 [2024-10-11 12:06:36.621297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.147 [2024-10-11 12:06:36.621371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.147 [2024-10-11 12:06:36.621385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.147 [2024-10-11 12:06:36.621392] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.147 [2024-10-11 12:06:36.621399] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.147 [2024-10-11 12:06:36.621413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.147 qpair failed and we were unable to recover it. 
00:29:34.147 [2024-10-11 12:06:36.631300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.147 [2024-10-11 12:06:36.631347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.147 [2024-10-11 12:06:36.631360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.147 [2024-10-11 12:06:36.631368] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.147 [2024-10-11 12:06:36.631377] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.147 [2024-10-11 12:06:36.631391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.147 qpair failed and we were unable to recover it. 00:29:34.147 [2024-10-11 12:06:36.641232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.147 [2024-10-11 12:06:36.641279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.147 [2024-10-11 12:06:36.641293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.147 [2024-10-11 12:06:36.641301] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.147 [2024-10-11 12:06:36.641307] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.147 [2024-10-11 12:06:36.641321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.147 qpair failed and we were unable to recover it. 00:29:34.147 [2024-10-11 12:06:36.651399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.147 [2024-10-11 12:06:36.651444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.148 [2024-10-11 12:06:36.651458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.148 [2024-10-11 12:06:36.651465] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.148 [2024-10-11 12:06:36.651472] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.148 [2024-10-11 12:06:36.651485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.148 qpair failed and we were unable to recover it. 
00:29:34.148 [2024-10-11 12:06:36.661437] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.148 [2024-10-11 12:06:36.661487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.148 [2024-10-11 12:06:36.661501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.148 [2024-10-11 12:06:36.661508] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.148 [2024-10-11 12:06:36.661515] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.148 [2024-10-11 12:06:36.661529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.148 qpair failed and we were unable to recover it. 00:29:34.148 [2024-10-11 12:06:36.671430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.148 [2024-10-11 12:06:36.671470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.148 [2024-10-11 12:06:36.671484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.148 [2024-10-11 12:06:36.671491] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.148 [2024-10-11 12:06:36.671498] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.148 [2024-10-11 12:06:36.671511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.148 qpair failed and we were unable to recover it. 00:29:34.148 [2024-10-11 12:06:36.681460] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.148 [2024-10-11 12:06:36.681509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.148 [2024-10-11 12:06:36.681522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.148 [2024-10-11 12:06:36.681530] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.148 [2024-10-11 12:06:36.681536] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.148 [2024-10-11 12:06:36.681550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.148 qpair failed and we were unable to recover it. 
00:29:34.148 [2024-10-11 12:06:36.691506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.148 [2024-10-11 12:06:36.691552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.148 [2024-10-11 12:06:36.691565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.148 [2024-10-11 12:06:36.691572] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.148 [2024-10-11 12:06:36.691579] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.148 [2024-10-11 12:06:36.691592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.148 qpair failed and we were unable to recover it. 00:29:34.148 [2024-10-11 12:06:36.701534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.148 [2024-10-11 12:06:36.701582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.148 [2024-10-11 12:06:36.701596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.148 [2024-10-11 12:06:36.701604] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.148 [2024-10-11 12:06:36.701611] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.148 [2024-10-11 12:06:36.701624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.148 qpair failed and we were unable to recover it. 00:29:34.148 [2024-10-11 12:06:36.711559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.148 [2024-10-11 12:06:36.711604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.148 [2024-10-11 12:06:36.711619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.148 [2024-10-11 12:06:36.711626] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.148 [2024-10-11 12:06:36.711632] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.148 [2024-10-11 12:06:36.711646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.148 qpair failed and we were unable to recover it. 
00:29:34.148 [2024-10-11 12:06:36.721566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.148 [2024-10-11 12:06:36.721612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.148 [2024-10-11 12:06:36.721625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.148 [2024-10-11 12:06:36.721633] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.148 [2024-10-11 12:06:36.721643] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.148 [2024-10-11 12:06:36.721657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.148 qpair failed and we were unable to recover it. 00:29:34.148 [2024-10-11 12:06:36.731611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.148 [2024-10-11 12:06:36.731657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.148 [2024-10-11 12:06:36.731671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.148 [2024-10-11 12:06:36.731678] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.148 [2024-10-11 12:06:36.731685] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.148 [2024-10-11 12:06:36.731698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.148 qpair failed and we were unable to recover it. 00:29:34.148 [2024-10-11 12:06:36.741636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.148 [2024-10-11 12:06:36.741684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.148 [2024-10-11 12:06:36.741699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.148 [2024-10-11 12:06:36.741707] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.148 [2024-10-11 12:06:36.741714] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.148 [2024-10-11 12:06:36.741728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.148 qpair failed and we were unable to recover it. 
00:29:34.148 [2024-10-11 12:06:36.751669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.148 [2024-10-11 12:06:36.751721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.148 [2024-10-11 12:06:36.751735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.148 [2024-10-11 12:06:36.751742] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.148 [2024-10-11 12:06:36.751748] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.148 [2024-10-11 12:06:36.751761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.148 qpair failed and we were unable to recover it. 00:29:34.148 [2024-10-11 12:06:36.761679] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.148 [2024-10-11 12:06:36.761727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.148 [2024-10-11 12:06:36.761741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.148 [2024-10-11 12:06:36.761749] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.148 [2024-10-11 12:06:36.761756] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.148 [2024-10-11 12:06:36.761770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.148 qpair failed and we were unable to recover it. 00:29:34.148 [2024-10-11 12:06:36.771731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.148 [2024-10-11 12:06:36.771782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.148 [2024-10-11 12:06:36.771796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.148 [2024-10-11 12:06:36.771805] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.148 [2024-10-11 12:06:36.771812] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.148 [2024-10-11 12:06:36.771825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.148 qpair failed and we were unable to recover it. 
00:29:34.148 [2024-10-11 12:06:36.781745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.148 [2024-10-11 12:06:36.781793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.148 [2024-10-11 12:06:36.781806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.148 [2024-10-11 12:06:36.781814] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.148 [2024-10-11 12:06:36.781821] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.148 [2024-10-11 12:06:36.781834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.148 qpair failed and we were unable to recover it. 00:29:34.148 [2024-10-11 12:06:36.791742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.148 [2024-10-11 12:06:36.791794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.148 [2024-10-11 12:06:36.791820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.148 [2024-10-11 12:06:36.791829] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.149 [2024-10-11 12:06:36.791836] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.149 [2024-10-11 12:06:36.791855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.149 qpair failed and we were unable to recover it. 00:29:34.149 [2024-10-11 12:06:36.801786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.149 [2024-10-11 12:06:36.801837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.149 [2024-10-11 12:06:36.801863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.149 [2024-10-11 12:06:36.801872] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.149 [2024-10-11 12:06:36.801879] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.149 [2024-10-11 12:06:36.801899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.149 qpair failed and we were unable to recover it. 
00:29:34.149 [2024-10-11 12:06:36.811796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.149 [2024-10-11 12:06:36.811853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.149 [2024-10-11 12:06:36.811879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.149 [2024-10-11 12:06:36.811889] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.149 [2024-10-11 12:06:36.811902] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.149 [2024-10-11 12:06:36.811922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.149 qpair failed and we were unable to recover it. 00:29:34.149 [2024-10-11 12:06:36.821846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.149 [2024-10-11 12:06:36.821898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.149 [2024-10-11 12:06:36.821914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.149 [2024-10-11 12:06:36.821922] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.149 [2024-10-11 12:06:36.821929] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.149 [2024-10-11 12:06:36.821944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.149 qpair failed and we were unable to recover it. 00:29:34.149 [2024-10-11 12:06:36.831839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.149 [2024-10-11 12:06:36.831881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.149 [2024-10-11 12:06:36.831897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.149 [2024-10-11 12:06:36.831904] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.149 [2024-10-11 12:06:36.831912] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.149 [2024-10-11 12:06:36.831927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.149 qpair failed and we were unable to recover it. 
00:29:34.149 [2024-10-11 12:06:36.841893] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.149 [2024-10-11 12:06:36.841972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.149 [2024-10-11 12:06:36.841986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.149 [2024-10-11 12:06:36.841994] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.149 [2024-10-11 12:06:36.842002] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.149 [2024-10-11 12:06:36.842015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.149 qpair failed and we were unable to recover it. 00:29:34.412 [2024-10-11 12:06:36.851920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.412 [2024-10-11 12:06:36.851973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.412 [2024-10-11 12:06:36.851987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.412 [2024-10-11 12:06:36.851995] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.412 [2024-10-11 12:06:36.852001] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.412 [2024-10-11 12:06:36.852015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.412 qpair failed and we were unable to recover it. 00:29:34.412 [2024-10-11 12:06:36.861968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.412 [2024-10-11 12:06:36.862018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.412 [2024-10-11 12:06:36.862032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.412 [2024-10-11 12:06:36.862040] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.412 [2024-10-11 12:06:36.862047] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.412 [2024-10-11 12:06:36.862060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.412 qpair failed and we were unable to recover it. 
00:29:34.412 [2024-10-11 12:06:36.871929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.412 [2024-10-11 12:06:36.871976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.412 [2024-10-11 12:06:36.871989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.412 [2024-10-11 12:06:36.871997] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.412 [2024-10-11 12:06:36.872004] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.412 [2024-10-11 12:06:36.872017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.412 qpair failed and we were unable to recover it. 00:29:34.412 [2024-10-11 12:06:36.882054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.412 [2024-10-11 12:06:36.882099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.412 [2024-10-11 12:06:36.882113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.412 [2024-10-11 12:06:36.882120] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.412 [2024-10-11 12:06:36.882127] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.412 [2024-10-11 12:06:36.882141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.412 qpair failed and we were unable to recover it. 00:29:34.412 [2024-10-11 12:06:36.892041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.412 [2024-10-11 12:06:36.892088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.412 [2024-10-11 12:06:36.892102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.412 [2024-10-11 12:06:36.892110] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.412 [2024-10-11 12:06:36.892117] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.412 [2024-10-11 12:06:36.892130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.412 qpair failed and we were unable to recover it. 
00:29:34.412 [2024-10-11 12:06:36.902087] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.412 [2024-10-11 12:06:36.902135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.412 [2024-10-11 12:06:36.902149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.412 [2024-10-11 12:06:36.902156] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.412 [2024-10-11 12:06:36.902166] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.412 [2024-10-11 12:06:36.902180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.412 qpair failed and we were unable to recover it. 00:29:34.412 [2024-10-11 12:06:36.912082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.412 [2024-10-11 12:06:36.912156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.412 [2024-10-11 12:06:36.912170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.412 [2024-10-11 12:06:36.912178] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.412 [2024-10-11 12:06:36.912185] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.412 [2024-10-11 12:06:36.912199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.412 qpair failed and we were unable to recover it. 00:29:34.412 [2024-10-11 12:06:36.922161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.412 [2024-10-11 12:06:36.922206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.412 [2024-10-11 12:06:36.922220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.412 [2024-10-11 12:06:36.922227] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.412 [2024-10-11 12:06:36.922234] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.412 [2024-10-11 12:06:36.922248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.412 qpair failed and we were unable to recover it. 
00:29:34.412 [2024-10-11 12:06:36.932143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.412 [2024-10-11 12:06:36.932191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.412 [2024-10-11 12:06:36.932204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.412 [2024-10-11 12:06:36.932211] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.413 [2024-10-11 12:06:36.932218] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.413 [2024-10-11 12:06:36.932231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.413 qpair failed and we were unable to recover it. 00:29:34.413 [2024-10-11 12:06:36.942200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.413 [2024-10-11 12:06:36.942258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.413 [2024-10-11 12:06:36.942272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.413 [2024-10-11 12:06:36.942279] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.413 [2024-10-11 12:06:36.942286] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.413 [2024-10-11 12:06:36.942300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.413 qpair failed and we were unable to recover it. 00:29:34.413 [2024-10-11 12:06:36.952194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.413 [2024-10-11 12:06:36.952312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.413 [2024-10-11 12:06:36.952326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.413 [2024-10-11 12:06:36.952335] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.413 [2024-10-11 12:06:36.952341] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.413 [2024-10-11 12:06:36.952355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.413 qpair failed and we were unable to recover it. 
00:29:34.413 [2024-10-11 12:06:36.962096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.413 [2024-10-11 12:06:36.962186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.413 [2024-10-11 12:06:36.962200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.413 [2024-10-11 12:06:36.962208] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.413 [2024-10-11 12:06:36.962214] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.413 [2024-10-11 12:06:36.962228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.413 qpair failed and we were unable to recover it. 00:29:34.413 [2024-10-11 12:06:36.972267] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.413 [2024-10-11 12:06:36.972317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.413 [2024-10-11 12:06:36.972330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.413 [2024-10-11 12:06:36.972338] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.413 [2024-10-11 12:06:36.972345] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.413 [2024-10-11 12:06:36.972358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.413 qpair failed and we were unable to recover it. 00:29:34.413 [2024-10-11 12:06:36.982291] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.413 [2024-10-11 12:06:36.982338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.413 [2024-10-11 12:06:36.982352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.413 [2024-10-11 12:06:36.982359] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.413 [2024-10-11 12:06:36.982366] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.413 [2024-10-11 12:06:36.982380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.413 qpair failed and we were unable to recover it. 
00:29:34.413 [2024-10-11 12:06:36.992314] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.413 [2024-10-11 12:06:36.992355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.413 [2024-10-11 12:06:36.992368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.413 [2024-10-11 12:06:36.992375] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.413 [2024-10-11 12:06:36.992386] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.413 [2024-10-11 12:06:36.992399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.413 qpair failed and we were unable to recover it. 00:29:34.413 [2024-10-11 12:06:37.002347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.413 [2024-10-11 12:06:37.002392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.413 [2024-10-11 12:06:37.002408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.413 [2024-10-11 12:06:37.002416] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.413 [2024-10-11 12:06:37.002423] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.413 [2024-10-11 12:06:37.002440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.413 qpair failed and we were unable to recover it. 00:29:34.413 [2024-10-11 12:06:37.012382] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.413 [2024-10-11 12:06:37.012429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.413 [2024-10-11 12:06:37.012444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.413 [2024-10-11 12:06:37.012452] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.413 [2024-10-11 12:06:37.012458] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.413 [2024-10-11 12:06:37.012472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.413 qpair failed and we were unable to recover it. 
00:29:34.413 [2024-10-11 12:06:37.022279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.413 [2024-10-11 12:06:37.022341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.413 [2024-10-11 12:06:37.022355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.413 [2024-10-11 12:06:37.022362] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.413 [2024-10-11 12:06:37.022370] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.413 [2024-10-11 12:06:37.022383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.413 qpair failed and we were unable to recover it. 00:29:34.413 [2024-10-11 12:06:37.032432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.413 [2024-10-11 12:06:37.032477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.413 [2024-10-11 12:06:37.032491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.413 [2024-10-11 12:06:37.032498] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.413 [2024-10-11 12:06:37.032505] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.413 [2024-10-11 12:06:37.032518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.413 qpair failed and we were unable to recover it. 00:29:34.413 [2024-10-11 12:06:37.042453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.413 [2024-10-11 12:06:37.042502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.413 [2024-10-11 12:06:37.042517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.413 [2024-10-11 12:06:37.042524] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.413 [2024-10-11 12:06:37.042530] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.413 [2024-10-11 12:06:37.042544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.413 qpair failed and we were unable to recover it. 
00:29:34.413 [2024-10-11 12:06:37.052486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.413 [2024-10-11 12:06:37.052530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.413 [2024-10-11 12:06:37.052546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.413 [2024-10-11 12:06:37.052553] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.413 [2024-10-11 12:06:37.052560] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.413 [2024-10-11 12:06:37.052574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.413 qpair failed and we were unable to recover it. 00:29:34.413 [2024-10-11 12:06:37.062519] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.413 [2024-10-11 12:06:37.062570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.413 [2024-10-11 12:06:37.062583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.413 [2024-10-11 12:06:37.062591] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.413 [2024-10-11 12:06:37.062598] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.413 [2024-10-11 12:06:37.062611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.413 qpair failed and we were unable to recover it. 00:29:34.413 [2024-10-11 12:06:37.072547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.413 [2024-10-11 12:06:37.072593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.413 [2024-10-11 12:06:37.072607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.413 [2024-10-11 12:06:37.072614] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.413 [2024-10-11 12:06:37.072620] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.413 [2024-10-11 12:06:37.072633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.413 qpair failed and we were unable to recover it. 
00:29:34.414 [2024-10-11 12:06:37.082557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.414 [2024-10-11 12:06:37.082610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.414 [2024-10-11 12:06:37.082623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.414 [2024-10-11 12:06:37.082634] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.414 [2024-10-11 12:06:37.082640] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.414 [2024-10-11 12:06:37.082654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.414 qpair failed and we were unable to recover it. 00:29:34.414 [2024-10-11 12:06:37.092464] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.414 [2024-10-11 12:06:37.092508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.414 [2024-10-11 12:06:37.092521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.414 [2024-10-11 12:06:37.092528] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.414 [2024-10-11 12:06:37.092535] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.414 [2024-10-11 12:06:37.092548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.414 qpair failed and we were unable to recover it. 00:29:34.414 [2024-10-11 12:06:37.102599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.414 [2024-10-11 12:06:37.102645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.414 [2024-10-11 12:06:37.102659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.414 [2024-10-11 12:06:37.102666] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.414 [2024-10-11 12:06:37.102672] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.414 [2024-10-11 12:06:37.102686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.414 qpair failed and we were unable to recover it. 
00:29:34.414 [2024-10-11 12:06:37.112651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.414 [2024-10-11 12:06:37.112699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.414 [2024-10-11 12:06:37.112713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.414 [2024-10-11 12:06:37.112720] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.414 [2024-10-11 12:06:37.112726] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.414 [2024-10-11 12:06:37.112740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.414 qpair failed and we were unable to recover it. 00:29:34.676 [2024-10-11 12:06:37.122667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.676 [2024-10-11 12:06:37.122719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.676 [2024-10-11 12:06:37.122733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.676 [2024-10-11 12:06:37.122740] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.676 [2024-10-11 12:06:37.122747] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.676 [2024-10-11 12:06:37.122761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.676 qpair failed and we were unable to recover it. 00:29:34.676 [2024-10-11 12:06:37.132676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.676 [2024-10-11 12:06:37.132725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.677 [2024-10-11 12:06:37.132739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.677 [2024-10-11 12:06:37.132746] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.677 [2024-10-11 12:06:37.132753] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.677 [2024-10-11 12:06:37.132766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.677 qpair failed and we were unable to recover it. 
00:29:34.677 [2024-10-11 12:06:37.142732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.677 [2024-10-11 12:06:37.142802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.677 [2024-10-11 12:06:37.142816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.677 [2024-10-11 12:06:37.142823] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.677 [2024-10-11 12:06:37.142829] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.677 [2024-10-11 12:06:37.142843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.677 qpair failed and we were unable to recover it. 00:29:34.677 [2024-10-11 12:06:37.152756] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.677 [2024-10-11 12:06:37.152801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.677 [2024-10-11 12:06:37.152815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.677 [2024-10-11 12:06:37.152822] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.677 [2024-10-11 12:06:37.152829] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.677 [2024-10-11 12:06:37.152842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.677 qpair failed and we were unable to recover it. 00:29:34.677 [2024-10-11 12:06:37.162652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.677 [2024-10-11 12:06:37.162694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.677 [2024-10-11 12:06:37.162708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.677 [2024-10-11 12:06:37.162716] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.677 [2024-10-11 12:06:37.162723] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.677 [2024-10-11 12:06:37.162737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.677 qpair failed and we were unable to recover it. 
00:29:34.677 [2024-10-11 12:06:37.172825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.677 [2024-10-11 12:06:37.172870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.677 [2024-10-11 12:06:37.172884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.677 [2024-10-11 12:06:37.172894] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.677 [2024-10-11 12:06:37.172901] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.677 [2024-10-11 12:06:37.172915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.677 qpair failed and we were unable to recover it. 00:29:34.677 [2024-10-11 12:06:37.182722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.677 [2024-10-11 12:06:37.182774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.677 [2024-10-11 12:06:37.182788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.677 [2024-10-11 12:06:37.182796] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.677 [2024-10-11 12:06:37.182803] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.677 [2024-10-11 12:06:37.182816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.677 qpair failed and we were unable to recover it. 00:29:34.677 [2024-10-11 12:06:37.192860] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.677 [2024-10-11 12:06:37.192911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.677 [2024-10-11 12:06:37.192925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.677 [2024-10-11 12:06:37.192932] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.677 [2024-10-11 12:06:37.192939] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.677 [2024-10-11 12:06:37.192953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.677 qpair failed and we were unable to recover it. 
00:29:34.677 [2024-10-11 12:06:37.202880] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.677 [2024-10-11 12:06:37.202928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.677 [2024-10-11 12:06:37.202942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.677 [2024-10-11 12:06:37.202949] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.677 [2024-10-11 12:06:37.202956] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.677 [2024-10-11 12:06:37.202970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.677 qpair failed and we were unable to recover it. 00:29:34.677 [2024-10-11 12:06:37.212897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.677 [2024-10-11 12:06:37.212946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.677 [2024-10-11 12:06:37.212960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.677 [2024-10-11 12:06:37.212968] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.677 [2024-10-11 12:06:37.212974] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.677 [2024-10-11 12:06:37.212988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.677 qpair failed and we were unable to recover it. 00:29:34.677 [2024-10-11 12:06:37.222961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.677 [2024-10-11 12:06:37.223005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.677 [2024-10-11 12:06:37.223019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.677 [2024-10-11 12:06:37.223026] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.677 [2024-10-11 12:06:37.223033] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.677 [2024-10-11 12:06:37.223046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.677 qpair failed and we were unable to recover it. 
00:29:34.677 [2024-10-11 12:06:37.232939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.677 [2024-10-11 12:06:37.232983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.677 [2024-10-11 12:06:37.232996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.677 [2024-10-11 12:06:37.233004] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.677 [2024-10-11 12:06:37.233011] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.677 [2024-10-11 12:06:37.233024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.677 qpair failed and we were unable to recover it. 00:29:34.677 [2024-10-11 12:06:37.242994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.677 [2024-10-11 12:06:37.243036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.677 [2024-10-11 12:06:37.243050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.677 [2024-10-11 12:06:37.243057] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.677 [2024-10-11 12:06:37.243067] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.677 [2024-10-11 12:06:37.243082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.677 qpair failed and we were unable to recover it. 00:29:34.677 [2024-10-11 12:06:37.252899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.677 [2024-10-11 12:06:37.252944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.677 [2024-10-11 12:06:37.252958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.677 [2024-10-11 12:06:37.252965] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.677 [2024-10-11 12:06:37.252972] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.677 [2024-10-11 12:06:37.252985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.677 qpair failed and we were unable to recover it. 
00:29:34.677 [2024-10-11 12:06:37.263055] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.677 [2024-10-11 12:06:37.263108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.677 [2024-10-11 12:06:37.263122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.677 [2024-10-11 12:06:37.263133] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.677 [2024-10-11 12:06:37.263139] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.677 [2024-10-11 12:06:37.263154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.677 qpair failed and we were unable to recover it. 00:29:34.677 [2024-10-11 12:06:37.273076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.677 [2024-10-11 12:06:37.273170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.677 [2024-10-11 12:06:37.273184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.678 [2024-10-11 12:06:37.273191] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.678 [2024-10-11 12:06:37.273198] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.678 [2024-10-11 12:06:37.273211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.678 qpair failed and we were unable to recover it. 00:29:34.678 [2024-10-11 12:06:37.282984] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.678 [2024-10-11 12:06:37.283043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.678 [2024-10-11 12:06:37.283069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.678 [2024-10-11 12:06:37.283078] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.678 [2024-10-11 12:06:37.283085] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.678 [2024-10-11 12:06:37.283104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.678 qpair failed and we were unable to recover it. 
00:29:34.678 [2024-10-11 12:06:37.293113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.678 [2024-10-11 12:06:37.293172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.678 [2024-10-11 12:06:37.293188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.678 [2024-10-11 12:06:37.293196] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.678 [2024-10-11 12:06:37.293202] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.678 [2024-10-11 12:06:37.293217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.678 qpair failed and we were unable to recover it. 00:29:34.678 [2024-10-11 12:06:37.303077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.678 [2024-10-11 12:06:37.303132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.678 [2024-10-11 12:06:37.303146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.678 [2024-10-11 12:06:37.303153] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.678 [2024-10-11 12:06:37.303160] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.678 [2024-10-11 12:06:37.303174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.678 qpair failed and we were unable to recover it. 00:29:34.678 [2024-10-11 12:06:37.313166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.678 [2024-10-11 12:06:37.313208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.678 [2024-10-11 12:06:37.313222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.678 [2024-10-11 12:06:37.313230] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.678 [2024-10-11 12:06:37.313236] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.678 [2024-10-11 12:06:37.313250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.678 qpair failed and we were unable to recover it. 
00:29:34.678 [2024-10-11 12:06:37.323202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.678 [2024-10-11 12:06:37.323248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.678 [2024-10-11 12:06:37.323262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.678 [2024-10-11 12:06:37.323270] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.678 [2024-10-11 12:06:37.323277] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.678 [2024-10-11 12:06:37.323291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.678 qpair failed and we were unable to recover it. 00:29:34.678 [2024-10-11 12:06:37.333114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.678 [2024-10-11 12:06:37.333163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.678 [2024-10-11 12:06:37.333177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.678 [2024-10-11 12:06:37.333185] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.678 [2024-10-11 12:06:37.333192] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.678 [2024-10-11 12:06:37.333205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.678 qpair failed and we were unable to recover it. 00:29:34.678 [2024-10-11 12:06:37.343317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.678 [2024-10-11 12:06:37.343362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.678 [2024-10-11 12:06:37.343376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.678 [2024-10-11 12:06:37.343383] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.678 [2024-10-11 12:06:37.343390] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.678 [2024-10-11 12:06:37.343403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.678 qpair failed and we were unable to recover it. 
00:29:34.678 [2024-10-11 12:06:37.353342] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.678 [2024-10-11 12:06:37.353385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.678 [2024-10-11 12:06:37.353399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.678 [2024-10-11 12:06:37.353410] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.678 [2024-10-11 12:06:37.353417] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.678 [2024-10-11 12:06:37.353430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.678 qpair failed and we were unable to recover it. 00:29:34.678 [2024-10-11 12:06:37.363304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.678 [2024-10-11 12:06:37.363352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.678 [2024-10-11 12:06:37.363365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.678 [2024-10-11 12:06:37.363373] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.678 [2024-10-11 12:06:37.363380] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.678 [2024-10-11 12:06:37.363393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.678 qpair failed and we were unable to recover it. 00:29:34.678 [2024-10-11 12:06:37.373351] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.678 [2024-10-11 12:06:37.373432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.678 [2024-10-11 12:06:37.373446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.678 [2024-10-11 12:06:37.373453] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.678 [2024-10-11 12:06:37.373461] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.678 [2024-10-11 12:06:37.373474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.678 qpair failed and we were unable to recover it. 
00:29:34.940 [2024-10-11 12:06:37.383389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.940 [2024-10-11 12:06:37.383435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.940 [2024-10-11 12:06:37.383449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.940 [2024-10-11 12:06:37.383456] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.940 [2024-10-11 12:06:37.383463] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.940 [2024-10-11 12:06:37.383476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.940 qpair failed and we were unable to recover it. 00:29:34.940 [2024-10-11 12:06:37.393389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.941 [2024-10-11 12:06:37.393435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.941 [2024-10-11 12:06:37.393449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.941 [2024-10-11 12:06:37.393456] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.941 [2024-10-11 12:06:37.393462] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.941 [2024-10-11 12:06:37.393476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.941 qpair failed and we were unable to recover it. 00:29:34.941 [2024-10-11 12:06:37.403404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.941 [2024-10-11 12:06:37.403450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.941 [2024-10-11 12:06:37.403464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.941 [2024-10-11 12:06:37.403471] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.941 [2024-10-11 12:06:37.403477] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.941 [2024-10-11 12:06:37.403491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.941 qpair failed and we were unable to recover it. 
00:29:34.941 [2024-10-11 12:06:37.413468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.941 [2024-10-11 12:06:37.413514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.941 [2024-10-11 12:06:37.413527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.941 [2024-10-11 12:06:37.413535] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.941 [2024-10-11 12:06:37.413541] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.941 [2024-10-11 12:06:37.413554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.941 qpair failed and we were unable to recover it. 00:29:34.941 [2024-10-11 12:06:37.423455] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.941 [2024-10-11 12:06:37.423506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.941 [2024-10-11 12:06:37.423521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.941 [2024-10-11 12:06:37.423528] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.941 [2024-10-11 12:06:37.423535] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.941 [2024-10-11 12:06:37.423549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.941 qpair failed and we were unable to recover it. 00:29:34.941 [2024-10-11 12:06:37.433501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.941 [2024-10-11 12:06:37.433545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.941 [2024-10-11 12:06:37.433559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.941 [2024-10-11 12:06:37.433566] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.941 [2024-10-11 12:06:37.433573] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.941 [2024-10-11 12:06:37.433586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.941 qpair failed and we were unable to recover it. 
00:29:34.941 [2024-10-11 12:06:37.443539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.941 [2024-10-11 12:06:37.443603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.941 [2024-10-11 12:06:37.443616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.941 [2024-10-11 12:06:37.443627] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.941 [2024-10-11 12:06:37.443634] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.941 [2024-10-11 12:06:37.443647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.941 qpair failed and we were unable to recover it. 00:29:34.941 [2024-10-11 12:06:37.453570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.941 [2024-10-11 12:06:37.453620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.941 [2024-10-11 12:06:37.453633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.941 [2024-10-11 12:06:37.453640] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.941 [2024-10-11 12:06:37.453647] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.941 [2024-10-11 12:06:37.453660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.941 qpair failed and we were unable to recover it. 00:29:34.941 [2024-10-11 12:06:37.463642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.941 [2024-10-11 12:06:37.463697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.941 [2024-10-11 12:06:37.463711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.941 [2024-10-11 12:06:37.463718] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.941 [2024-10-11 12:06:37.463725] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.941 [2024-10-11 12:06:37.463738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.941 qpair failed and we were unable to recover it. 
00:29:34.941 [2024-10-11 12:06:37.473636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.941 [2024-10-11 12:06:37.473681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.941 [2024-10-11 12:06:37.473695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.941 [2024-10-11 12:06:37.473702] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.941 [2024-10-11 12:06:37.473709] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.941 [2024-10-11 12:06:37.473722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.941 qpair failed and we were unable to recover it. 00:29:34.941 [2024-10-11 12:06:37.483641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.941 [2024-10-11 12:06:37.483686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.941 [2024-10-11 12:06:37.483700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.941 [2024-10-11 12:06:37.483707] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.941 [2024-10-11 12:06:37.483713] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.941 [2024-10-11 12:06:37.483727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.941 qpair failed and we were unable to recover it. 00:29:34.941 [2024-10-11 12:06:37.493680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.941 [2024-10-11 12:06:37.493727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.941 [2024-10-11 12:06:37.493741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.941 [2024-10-11 12:06:37.493748] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.941 [2024-10-11 12:06:37.493755] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.941 [2024-10-11 12:06:37.493768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.941 qpair failed and we were unable to recover it. 
00:29:34.941 [2024-10-11 12:06:37.503707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.941 [2024-10-11 12:06:37.503759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.941 [2024-10-11 12:06:37.503773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.941 [2024-10-11 12:06:37.503780] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.941 [2024-10-11 12:06:37.503787] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.941 [2024-10-11 12:06:37.503801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.941 qpair failed and we were unable to recover it. 00:29:34.941 [2024-10-11 12:06:37.513713] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.941 [2024-10-11 12:06:37.513772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.941 [2024-10-11 12:06:37.513790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.941 [2024-10-11 12:06:37.513798] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.941 [2024-10-11 12:06:37.513810] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.941 [2024-10-11 12:06:37.513825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.941 qpair failed and we were unable to recover it. 00:29:34.941 [2024-10-11 12:06:37.523764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.941 [2024-10-11 12:06:37.523818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.941 [2024-10-11 12:06:37.523834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.941 [2024-10-11 12:06:37.523841] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.941 [2024-10-11 12:06:37.523848] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.941 [2024-10-11 12:06:37.523861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.941 qpair failed and we were unable to recover it. 
00:29:34.941 [2024-10-11 12:06:37.533799] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.942 [2024-10-11 12:06:37.533848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.942 [2024-10-11 12:06:37.533862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.942 [2024-10-11 12:06:37.533874] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.942 [2024-10-11 12:06:37.533881] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.942 [2024-10-11 12:06:37.533895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.942 qpair failed and we were unable to recover it. 00:29:34.942 [2024-10-11 12:06:37.543817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.942 [2024-10-11 12:06:37.543886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.942 [2024-10-11 12:06:37.543913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.942 [2024-10-11 12:06:37.543926] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.942 [2024-10-11 12:06:37.543934] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.942 [2024-10-11 12:06:37.543953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.942 qpair failed and we were unable to recover it. 00:29:34.942 [2024-10-11 12:06:37.553698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.942 [2024-10-11 12:06:37.553752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.942 [2024-10-11 12:06:37.553778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.942 [2024-10-11 12:06:37.553787] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.942 [2024-10-11 12:06:37.553794] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.942 [2024-10-11 12:06:37.553813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.942 qpair failed and we were unable to recover it. 
00:29:34.942 [2024-10-11 12:06:37.563846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.942 [2024-10-11 12:06:37.563900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.942 [2024-10-11 12:06:37.563926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.942 [2024-10-11 12:06:37.563936] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.942 [2024-10-11 12:06:37.563943] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.942 [2024-10-11 12:06:37.563962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.942 qpair failed and we were unable to recover it. 00:29:34.942 [2024-10-11 12:06:37.573886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.942 [2024-10-11 12:06:37.573934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.942 [2024-10-11 12:06:37.573951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.942 [2024-10-11 12:06:37.573959] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.942 [2024-10-11 12:06:37.573965] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.942 [2024-10-11 12:06:37.573981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.942 qpair failed and we were unable to recover it. 00:29:34.942 [2024-10-11 12:06:37.583926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.942 [2024-10-11 12:06:37.583981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.942 [2024-10-11 12:06:37.583996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.942 [2024-10-11 12:06:37.584005] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.942 [2024-10-11 12:06:37.584012] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.942 [2024-10-11 12:06:37.584026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.942 qpair failed and we were unable to recover it. 
00:29:34.942 [2024-10-11 12:06:37.593911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.942 [2024-10-11 12:06:37.593956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.942 [2024-10-11 12:06:37.593970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.942 [2024-10-11 12:06:37.593978] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.942 [2024-10-11 12:06:37.593984] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.942 [2024-10-11 12:06:37.593998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.942 qpair failed and we were unable to recover it. 00:29:34.942 [2024-10-11 12:06:37.603962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.942 [2024-10-11 12:06:37.604004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.942 [2024-10-11 12:06:37.604017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.942 [2024-10-11 12:06:37.604025] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.942 [2024-10-11 12:06:37.604032] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.942 [2024-10-11 12:06:37.604046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.942 qpair failed and we were unable to recover it. 00:29:34.942 [2024-10-11 12:06:37.613873] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.942 [2024-10-11 12:06:37.613924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.942 [2024-10-11 12:06:37.613937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.942 [2024-10-11 12:06:37.613945] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.942 [2024-10-11 12:06:37.613951] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.942 [2024-10-11 12:06:37.613965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.942 qpair failed and we were unable to recover it. 
00:29:34.942 [2024-10-11 12:06:37.624054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.942 [2024-10-11 12:06:37.624106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.942 [2024-10-11 12:06:37.624123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.942 [2024-10-11 12:06:37.624131] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.942 [2024-10-11 12:06:37.624137] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.942 [2024-10-11 12:06:37.624151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.942 qpair failed and we were unable to recover it. 00:29:34.942 [2024-10-11 12:06:37.634049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.942 [2024-10-11 12:06:37.634097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.942 [2024-10-11 12:06:37.634112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.942 [2024-10-11 12:06:37.634119] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.942 [2024-10-11 12:06:37.634126] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:34.942 [2024-10-11 12:06:37.634139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.942 qpair failed and we were unable to recover it. 00:29:35.204 [2024-10-11 12:06:37.644079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.204 [2024-10-11 12:06:37.644127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.204 [2024-10-11 12:06:37.644141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.204 [2024-10-11 12:06:37.644148] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.204 [2024-10-11 12:06:37.644155] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.204 [2024-10-11 12:06:37.644169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.204 qpair failed and we were unable to recover it. 
00:29:35.204 [2024-10-11 12:06:37.653972] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.204 [2024-10-11 12:06:37.654023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.204 [2024-10-11 12:06:37.654037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.204 [2024-10-11 12:06:37.654045] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.204 [2024-10-11 12:06:37.654051] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.204 [2024-10-11 12:06:37.654069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.204 qpair failed and we were unable to recover it. 00:29:35.204 [2024-10-11 12:06:37.664023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.204 [2024-10-11 12:06:37.664077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.204 [2024-10-11 12:06:37.664091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.204 [2024-10-11 12:06:37.664098] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.204 [2024-10-11 12:06:37.664105] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.204 [2024-10-11 12:06:37.664119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.204 qpair failed and we were unable to recover it. 00:29:35.204 [2024-10-11 12:06:37.674145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.204 [2024-10-11 12:06:37.674200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.204 [2024-10-11 12:06:37.674216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.204 [2024-10-11 12:06:37.674224] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.204 [2024-10-11 12:06:37.674231] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.204 [2024-10-11 12:06:37.674248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.204 qpair failed and we were unable to recover it. 
00:29:35.204 [2024-10-11 12:06:37.684165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.204 [2024-10-11 12:06:37.684220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.205 [2024-10-11 12:06:37.684234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.205 [2024-10-11 12:06:37.684242] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.205 [2024-10-11 12:06:37.684248] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.205 [2024-10-11 12:06:37.684262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.205 qpair failed and we were unable to recover it. 00:29:35.205 [2024-10-11 12:06:37.694198] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.205 [2024-10-11 12:06:37.694283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.205 [2024-10-11 12:06:37.694297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.205 [2024-10-11 12:06:37.694304] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.205 [2024-10-11 12:06:37.694311] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.205 [2024-10-11 12:06:37.694325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.205 qpair failed and we were unable to recover it. 00:29:35.205 [2024-10-11 12:06:37.704101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.205 [2024-10-11 12:06:37.704148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.205 [2024-10-11 12:06:37.704161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.205 [2024-10-11 12:06:37.704169] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.205 [2024-10-11 12:06:37.704175] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.205 [2024-10-11 12:06:37.704189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.205 qpair failed and we were unable to recover it. 
00:29:35.205 [2024-10-11 12:06:37.714239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.205 [2024-10-11 12:06:37.714285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.205 [2024-10-11 12:06:37.714305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.205 [2024-10-11 12:06:37.714312] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.205 [2024-10-11 12:06:37.714318] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.205 [2024-10-11 12:06:37.714332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.205 qpair failed and we were unable to recover it. 00:29:35.205 [2024-10-11 12:06:37.724150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.205 [2024-10-11 12:06:37.724193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.205 [2024-10-11 12:06:37.724208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.205 [2024-10-11 12:06:37.724215] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.205 [2024-10-11 12:06:37.724221] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.205 [2024-10-11 12:06:37.724235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.205 qpair failed and we were unable to recover it. 00:29:35.205 [2024-10-11 12:06:37.734302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.205 [2024-10-11 12:06:37.734350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.205 [2024-10-11 12:06:37.734364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.205 [2024-10-11 12:06:37.734371] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.205 [2024-10-11 12:06:37.734378] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.205 [2024-10-11 12:06:37.734391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.205 qpair failed and we were unable to recover it. 
00:29:35.205 [2024-10-11 12:06:37.744350] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.205 [2024-10-11 12:06:37.744402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.205 [2024-10-11 12:06:37.744417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.205 [2024-10-11 12:06:37.744425] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.205 [2024-10-11 12:06:37.744431] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.205 [2024-10-11 12:06:37.744445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.205 qpair failed and we were unable to recover it. 00:29:35.205 [2024-10-11 12:06:37.754369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.205 [2024-10-11 12:06:37.754416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.205 [2024-10-11 12:06:37.754430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.205 [2024-10-11 12:06:37.754437] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.205 [2024-10-11 12:06:37.754444] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.205 [2024-10-11 12:06:37.754460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.205 qpair failed and we were unable to recover it. 00:29:35.205 [2024-10-11 12:06:37.764252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.205 [2024-10-11 12:06:37.764301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.205 [2024-10-11 12:06:37.764315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.205 [2024-10-11 12:06:37.764322] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.205 [2024-10-11 12:06:37.764329] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.205 [2024-10-11 12:06:37.764342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.205 qpair failed and we were unable to recover it. 
00:29:35.205 [2024-10-11 12:06:37.774426] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.205 [2024-10-11 12:06:37.774474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.205 [2024-10-11 12:06:37.774488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.205 [2024-10-11 12:06:37.774495] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.205 [2024-10-11 12:06:37.774501] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.205 [2024-10-11 12:06:37.774515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.205 qpair failed and we were unable to recover it. 00:29:35.205 [2024-10-11 12:06:37.784323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.205 [2024-10-11 12:06:37.784372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.205 [2024-10-11 12:06:37.784385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.205 [2024-10-11 12:06:37.784392] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.205 [2024-10-11 12:06:37.784399] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.205 [2024-10-11 12:06:37.784412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.205 qpair failed and we were unable to recover it. 00:29:35.205 [2024-10-11 12:06:37.794470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.205 [2024-10-11 12:06:37.794517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.205 [2024-10-11 12:06:37.794531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.205 [2024-10-11 12:06:37.794538] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.205 [2024-10-11 12:06:37.794544] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.205 [2024-10-11 12:06:37.794558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.205 qpair failed and we were unable to recover it. 
00:29:35.205 [2024-10-11 12:06:37.804373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.205 [2024-10-11 12:06:37.804421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.205 [2024-10-11 12:06:37.804439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.205 [2024-10-11 12:06:37.804446] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.205 [2024-10-11 12:06:37.804453] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.205 [2024-10-11 12:06:37.804468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.205 qpair failed and we were unable to recover it. 00:29:35.205 [2024-10-11 12:06:37.814571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.205 [2024-10-11 12:06:37.814650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.205 [2024-10-11 12:06:37.814664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.205 [2024-10-11 12:06:37.814671] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.205 [2024-10-11 12:06:37.814677] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.205 [2024-10-11 12:06:37.814690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.205 qpair failed and we were unable to recover it. 00:29:35.205 [2024-10-11 12:06:37.824575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.205 [2024-10-11 12:06:37.824627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.205 [2024-10-11 12:06:37.824640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.206 [2024-10-11 12:06:37.824647] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.206 [2024-10-11 12:06:37.824654] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.206 [2024-10-11 12:06:37.824668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.206 qpair failed and we were unable to recover it. 
00:29:35.206 [2024-10-11 12:06:37.834452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.206 [2024-10-11 12:06:37.834494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.206 [2024-10-11 12:06:37.834509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.206 [2024-10-11 12:06:37.834516] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.206 [2024-10-11 12:06:37.834523] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.206 [2024-10-11 12:06:37.834536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.206 qpair failed and we were unable to recover it. 00:29:35.206 [2024-10-11 12:06:37.844651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.206 [2024-10-11 12:06:37.844692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.206 [2024-10-11 12:06:37.844705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.206 [2024-10-11 12:06:37.844713] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.206 [2024-10-11 12:06:37.844719] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.206 [2024-10-11 12:06:37.844736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.206 qpair failed and we were unable to recover it. 00:29:35.206 [2024-10-11 12:06:37.854650] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.206 [2024-10-11 12:06:37.854721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.206 [2024-10-11 12:06:37.854735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.206 [2024-10-11 12:06:37.854742] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.206 [2024-10-11 12:06:37.854749] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.206 [2024-10-11 12:06:37.854762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.206 qpair failed and we were unable to recover it. 
00:29:35.206 [2024-10-11 12:06:37.864683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.206 [2024-10-11 12:06:37.864734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.206 [2024-10-11 12:06:37.864749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.206 [2024-10-11 12:06:37.864756] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.206 [2024-10-11 12:06:37.864763] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.206 [2024-10-11 12:06:37.864776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.206 qpair failed and we were unable to recover it. 00:29:35.206 [2024-10-11 12:06:37.874697] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.206 [2024-10-11 12:06:37.874768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.206 [2024-10-11 12:06:37.874782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.206 [2024-10-11 12:06:37.874789] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.206 [2024-10-11 12:06:37.874796] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.206 [2024-10-11 12:06:37.874809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.206 qpair failed and we were unable to recover it. 00:29:35.206 [2024-10-11 12:06:37.884747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.206 [2024-10-11 12:06:37.884840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.206 [2024-10-11 12:06:37.884854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.206 [2024-10-11 12:06:37.884861] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.206 [2024-10-11 12:06:37.884868] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.206 [2024-10-11 12:06:37.884881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.206 qpair failed and we were unable to recover it. 
00:29:35.206 [2024-10-11 12:06:37.894757] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.206 [2024-10-11 12:06:37.894808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.206 [2024-10-11 12:06:37.894825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.206 [2024-10-11 12:06:37.894832] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.206 [2024-10-11 12:06:37.894839] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.206 [2024-10-11 12:06:37.894852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.206 qpair failed and we were unable to recover it. 00:29:35.206 [2024-10-11 12:06:37.904811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.206 [2024-10-11 12:06:37.904857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.206 [2024-10-11 12:06:37.904870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.206 [2024-10-11 12:06:37.904877] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.206 [2024-10-11 12:06:37.904884] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.206 [2024-10-11 12:06:37.904897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.206 qpair failed and we were unable to recover it. 00:29:35.468 [2024-10-11 12:06:37.914798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.468 [2024-10-11 12:06:37.914860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.468 [2024-10-11 12:06:37.914873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.468 [2024-10-11 12:06:37.914881] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.468 [2024-10-11 12:06:37.914887] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.468 [2024-10-11 12:06:37.914901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.468 qpair failed and we were unable to recover it. 
00:29:35.468 [2024-10-11 12:06:37.924828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.468 [2024-10-11 12:06:37.924873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.468 [2024-10-11 12:06:37.924887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.468 [2024-10-11 12:06:37.924895] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.468 [2024-10-11 12:06:37.924901] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.468 [2024-10-11 12:06:37.924915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.468 qpair failed and we were unable to recover it. 00:29:35.468 [2024-10-11 12:06:37.934861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.468 [2024-10-11 12:06:37.934907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.468 [2024-10-11 12:06:37.934921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.468 [2024-10-11 12:06:37.934928] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.468 [2024-10-11 12:06:37.934935] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.468 [2024-10-11 12:06:37.934951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.468 qpair failed and we were unable to recover it. 00:29:35.468 [2024-10-11 12:06:37.944901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.469 [2024-10-11 12:06:37.944952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.469 [2024-10-11 12:06:37.944966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.469 [2024-10-11 12:06:37.944973] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.469 [2024-10-11 12:06:37.944979] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.469 [2024-10-11 12:06:37.944993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.469 qpair failed and we were unable to recover it. 
00:29:35.469 [2024-10-11 12:06:37.954915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.469 [2024-10-11 12:06:37.954957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.469 [2024-10-11 12:06:37.954971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.469 [2024-10-11 12:06:37.954978] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.469 [2024-10-11 12:06:37.954985] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.469 [2024-10-11 12:06:37.954998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.469 qpair failed and we were unable to recover it. 00:29:35.469 [2024-10-11 12:06:37.964935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.469 [2024-10-11 12:06:37.964997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.469 [2024-10-11 12:06:37.965011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.469 [2024-10-11 12:06:37.965019] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.469 [2024-10-11 12:06:37.965025] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.469 [2024-10-11 12:06:37.965039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.469 qpair failed and we were unable to recover it. 00:29:35.469 [2024-10-11 12:06:37.974969] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.469 [2024-10-11 12:06:37.975015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.469 [2024-10-11 12:06:37.975028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.469 [2024-10-11 12:06:37.975035] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.469 [2024-10-11 12:06:37.975042] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.469 [2024-10-11 12:06:37.975056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.469 qpair failed and we were unable to recover it. 
00:29:35.469 [2024-10-11 12:06:37.985008] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.469 [2024-10-11 12:06:37.985059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.469 [2024-10-11 12:06:37.985080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.469 [2024-10-11 12:06:37.985087] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.469 [2024-10-11 12:06:37.985094] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.469 [2024-10-11 12:06:37.985107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.469 qpair failed and we were unable to recover it. 00:29:35.469 [2024-10-11 12:06:37.995027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.469 [2024-10-11 12:06:37.995074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.469 [2024-10-11 12:06:37.995088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.469 [2024-10-11 12:06:37.995096] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.469 [2024-10-11 12:06:37.995103] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.469 [2024-10-11 12:06:37.995117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.469 qpair failed and we were unable to recover it. 00:29:35.469 [2024-10-11 12:06:38.005048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.469 [2024-10-11 12:06:38.005096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.469 [2024-10-11 12:06:38.005110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.469 [2024-10-11 12:06:38.005117] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.469 [2024-10-11 12:06:38.005124] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.469 [2024-10-11 12:06:38.005137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.469 qpair failed and we were unable to recover it. 
00:29:35.469 [2024-10-11 12:06:38.015078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.469 [2024-10-11 12:06:38.015125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.469 [2024-10-11 12:06:38.015139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.469 [2024-10-11 12:06:38.015146] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.469 [2024-10-11 12:06:38.015152] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.469 [2024-10-11 12:06:38.015166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.469 qpair failed and we were unable to recover it. 00:29:35.469 [2024-10-11 12:06:38.025123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.469 [2024-10-11 12:06:38.025169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.469 [2024-10-11 12:06:38.025183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.469 [2024-10-11 12:06:38.025191] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.469 [2024-10-11 12:06:38.025197] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.469 [2024-10-11 12:06:38.025214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.469 qpair failed and we were unable to recover it. 00:29:35.469 [2024-10-11 12:06:38.035120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.469 [2024-10-11 12:06:38.035172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.469 [2024-10-11 12:06:38.035185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.469 [2024-10-11 12:06:38.035192] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.469 [2024-10-11 12:06:38.035199] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.469 [2024-10-11 12:06:38.035213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.469 qpair failed and we were unable to recover it. 
00:29:35.469 [2024-10-11 12:06:38.045128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.469 [2024-10-11 12:06:38.045177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.469 [2024-10-11 12:06:38.045193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.469 [2024-10-11 12:06:38.045200] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.469 [2024-10-11 12:06:38.045207] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.469 [2024-10-11 12:06:38.045221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.469 qpair failed and we were unable to recover it. 00:29:35.469 [2024-10-11 12:06:38.055157] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.469 [2024-10-11 12:06:38.055209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.469 [2024-10-11 12:06:38.055223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.469 [2024-10-11 12:06:38.055230] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.469 [2024-10-11 12:06:38.055237] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.469 [2024-10-11 12:06:38.055251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.469 qpair failed and we were unable to recover it. 00:29:35.469 [2024-10-11 12:06:38.065225] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.469 [2024-10-11 12:06:38.065277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.469 [2024-10-11 12:06:38.065291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.469 [2024-10-11 12:06:38.065298] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.469 [2024-10-11 12:06:38.065305] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.469 [2024-10-11 12:06:38.065319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.469 qpair failed and we were unable to recover it. 
00:29:35.469 [2024-10-11 12:06:38.075235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.469 [2024-10-11 12:06:38.075279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.469 [2024-10-11 12:06:38.075296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.469 [2024-10-11 12:06:38.075304] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.469 [2024-10-11 12:06:38.075310] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.469 [2024-10-11 12:06:38.075324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.469 qpair failed and we were unable to recover it. 00:29:35.469 [2024-10-11 12:06:38.085289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.469 [2024-10-11 12:06:38.085379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.470 [2024-10-11 12:06:38.085393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.470 [2024-10-11 12:06:38.085401] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.470 [2024-10-11 12:06:38.085408] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.470 [2024-10-11 12:06:38.085422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.470 qpair failed and we were unable to recover it. 00:29:35.470 [2024-10-11 12:06:38.095301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.470 [2024-10-11 12:06:38.095349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.470 [2024-10-11 12:06:38.095362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.470 [2024-10-11 12:06:38.095370] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.470 [2024-10-11 12:06:38.095376] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.470 [2024-10-11 12:06:38.095389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.470 qpair failed and we were unable to recover it. 
00:29:35.470 [2024-10-11 12:06:38.105335] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.470 [2024-10-11 12:06:38.105384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.470 [2024-10-11 12:06:38.105398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.470 [2024-10-11 12:06:38.105405] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.470 [2024-10-11 12:06:38.105411] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.470 [2024-10-11 12:06:38.105425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.470 qpair failed and we were unable to recover it. 00:29:35.470 [2024-10-11 12:06:38.115347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.470 [2024-10-11 12:06:38.115396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.470 [2024-10-11 12:06:38.115410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.470 [2024-10-11 12:06:38.115417] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.470 [2024-10-11 12:06:38.115423] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.470 [2024-10-11 12:06:38.115440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.470 qpair failed and we were unable to recover it. 00:29:35.470 [2024-10-11 12:06:38.125371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.470 [2024-10-11 12:06:38.125412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.470 [2024-10-11 12:06:38.125426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.470 [2024-10-11 12:06:38.125434] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.470 [2024-10-11 12:06:38.125440] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.470 [2024-10-11 12:06:38.125453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.470 qpair failed and we were unable to recover it. 
00:29:35.470 [2024-10-11 12:06:38.135400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.470 [2024-10-11 12:06:38.135492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.470 [2024-10-11 12:06:38.135505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.470 [2024-10-11 12:06:38.135513] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.470 [2024-10-11 12:06:38.135520] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.470 [2024-10-11 12:06:38.135533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.470 qpair failed and we were unable to recover it. 00:29:35.470 [2024-10-11 12:06:38.145442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.470 [2024-10-11 12:06:38.145495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.470 [2024-10-11 12:06:38.145508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.470 [2024-10-11 12:06:38.145516] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.470 [2024-10-11 12:06:38.145522] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.470 [2024-10-11 12:06:38.145535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.470 qpair failed and we were unable to recover it. 00:29:35.470 [2024-10-11 12:06:38.155375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.470 [2024-10-11 12:06:38.155433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.470 [2024-10-11 12:06:38.155446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.470 [2024-10-11 12:06:38.155453] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.470 [2024-10-11 12:06:38.155460] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.470 [2024-10-11 12:06:38.155473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.470 qpair failed and we were unable to recover it. 
00:29:35.470 [2024-10-11 12:06:38.165479] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.470 [2024-10-11 12:06:38.165522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.470 [2024-10-11 12:06:38.165538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.470 [2024-10-11 12:06:38.165546] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.470 [2024-10-11 12:06:38.165552] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.470 [2024-10-11 12:06:38.165565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.470 qpair failed and we were unable to recover it. 00:29:35.731 [2024-10-11 12:06:38.175513] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.731 [2024-10-11 12:06:38.175563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.732 [2024-10-11 12:06:38.175577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.732 [2024-10-11 12:06:38.175584] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.732 [2024-10-11 12:06:38.175591] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.732 [2024-10-11 12:06:38.175604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.732 qpair failed and we were unable to recover it. 00:29:35.732 [2024-10-11 12:06:38.185537] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.732 [2024-10-11 12:06:38.185588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.732 [2024-10-11 12:06:38.185602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.732 [2024-10-11 12:06:38.185609] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.732 [2024-10-11 12:06:38.185615] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.732 [2024-10-11 12:06:38.185628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.732 qpair failed and we were unable to recover it. 
00:29:35.732 [2024-10-11 12:06:38.195431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.732 [2024-10-11 12:06:38.195480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.732 [2024-10-11 12:06:38.195493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.732 [2024-10-11 12:06:38.195501] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.732 [2024-10-11 12:06:38.195507] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.732 [2024-10-11 12:06:38.195520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.732 qpair failed and we were unable to recover it. 00:29:35.732 [2024-10-11 12:06:38.205584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.732 [2024-10-11 12:06:38.205626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.732 [2024-10-11 12:06:38.205640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.732 [2024-10-11 12:06:38.205647] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.732 [2024-10-11 12:06:38.205657] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.732 [2024-10-11 12:06:38.205670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.732 qpair failed and we were unable to recover it. 00:29:35.732 [2024-10-11 12:06:38.215616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.732 [2024-10-11 12:06:38.215659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.732 [2024-10-11 12:06:38.215673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.732 [2024-10-11 12:06:38.215680] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.732 [2024-10-11 12:06:38.215687] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.732 [2024-10-11 12:06:38.215700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.732 qpair failed and we were unable to recover it. 
00:29:35.732 [2024-10-11 12:06:38.225700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.732 [2024-10-11 12:06:38.225752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.732 [2024-10-11 12:06:38.225766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.732 [2024-10-11 12:06:38.225773] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.732 [2024-10-11 12:06:38.225780] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.732 [2024-10-11 12:06:38.225794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.732 qpair failed and we were unable to recover it. 00:29:35.732 [2024-10-11 12:06:38.235646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.732 [2024-10-11 12:06:38.235734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.732 [2024-10-11 12:06:38.235749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.732 [2024-10-11 12:06:38.235756] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.732 [2024-10-11 12:06:38.235763] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.732 [2024-10-11 12:06:38.235777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.732 qpair failed and we were unable to recover it. 00:29:35.732 [2024-10-11 12:06:38.245663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.732 [2024-10-11 12:06:38.245704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.732 [2024-10-11 12:06:38.245718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.732 [2024-10-11 12:06:38.245726] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.732 [2024-10-11 12:06:38.245732] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.732 [2024-10-11 12:06:38.245746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.732 qpair failed and we were unable to recover it. 
00:29:35.732 [2024-10-11 12:06:38.255711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.732 [2024-10-11 12:06:38.255774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.732 [2024-10-11 12:06:38.255790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.732 [2024-10-11 12:06:38.255797] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.732 [2024-10-11 12:06:38.255804] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.732 [2024-10-11 12:06:38.255817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.732 qpair failed and we were unable to recover it. 00:29:35.732 [2024-10-11 12:06:38.265763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.732 [2024-10-11 12:06:38.265847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.732 [2024-10-11 12:06:38.265861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.732 [2024-10-11 12:06:38.265868] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.732 [2024-10-11 12:06:38.265874] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.732 [2024-10-11 12:06:38.265888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.732 qpair failed and we were unable to recover it. 00:29:35.732 [2024-10-11 12:06:38.275652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.732 [2024-10-11 12:06:38.275698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.732 [2024-10-11 12:06:38.275712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.732 [2024-10-11 12:06:38.275719] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.732 [2024-10-11 12:06:38.275725] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.732 [2024-10-11 12:06:38.275739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.732 qpair failed and we were unable to recover it. 
00:29:35.732 [2024-10-11 12:06:38.285801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.732 [2024-10-11 12:06:38.285848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.732 [2024-10-11 12:06:38.285861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.732 [2024-10-11 12:06:38.285869] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.732 [2024-10-11 12:06:38.285875] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.732 [2024-10-11 12:06:38.285889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.732 qpair failed and we were unable to recover it. 00:29:35.732 [2024-10-11 12:06:38.295853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.732 [2024-10-11 12:06:38.295907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.732 [2024-10-11 12:06:38.295932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.732 [2024-10-11 12:06:38.295941] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.732 [2024-10-11 12:06:38.295953] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.732 [2024-10-11 12:06:38.295972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.732 qpair failed and we were unable to recover it. 00:29:35.732 [2024-10-11 12:06:38.305876] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.732 [2024-10-11 12:06:38.305977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.733 [2024-10-11 12:06:38.305993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.733 [2024-10-11 12:06:38.306000] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.733 [2024-10-11 12:06:38.306007] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.733 [2024-10-11 12:06:38.306022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.733 qpair failed and we were unable to recover it. 
00:29:35.733 [2024-10-11 12:06:38.315867] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.733 [2024-10-11 12:06:38.315929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.733 [2024-10-11 12:06:38.315943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.733 [2024-10-11 12:06:38.315950] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.733 [2024-10-11 12:06:38.315957] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.733 [2024-10-11 12:06:38.315971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.733 qpair failed and we were unable to recover it. 00:29:35.733 [2024-10-11 12:06:38.325773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.733 [2024-10-11 12:06:38.325819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.733 [2024-10-11 12:06:38.325833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.733 [2024-10-11 12:06:38.325841] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.733 [2024-10-11 12:06:38.325848] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.733 [2024-10-11 12:06:38.325862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.733 qpair failed and we were unable to recover it. 00:29:35.733 [2024-10-11 12:06:38.335810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.733 [2024-10-11 12:06:38.335858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.733 [2024-10-11 12:06:38.335872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.733 [2024-10-11 12:06:38.335879] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.733 [2024-10-11 12:06:38.335886] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.733 [2024-10-11 12:06:38.335899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.733 qpair failed and we were unable to recover it. 
00:29:35.733 [2024-10-11 12:06:38.345979] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.733 [2024-10-11 12:06:38.346030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.733 [2024-10-11 12:06:38.346044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.733 [2024-10-11 12:06:38.346052] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.733 [2024-10-11 12:06:38.346058] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.733 [2024-10-11 12:06:38.346076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.733 qpair failed and we were unable to recover it. 00:29:35.733 [2024-10-11 12:06:38.355978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.733 [2024-10-11 12:06:38.356023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.733 [2024-10-11 12:06:38.356037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.733 [2024-10-11 12:06:38.356045] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.733 [2024-10-11 12:06:38.356051] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.733 [2024-10-11 12:06:38.356068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.733 qpair failed and we were unable to recover it. 00:29:35.733 [2024-10-11 12:06:38.366006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.733 [2024-10-11 12:06:38.366051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.733 [2024-10-11 12:06:38.366069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.733 [2024-10-11 12:06:38.366077] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.733 [2024-10-11 12:06:38.366083] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.733 [2024-10-11 12:06:38.366097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.733 qpair failed and we were unable to recover it. 
00:29:35.733 [2024-10-11 12:06:38.375909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.733 [2024-10-11 12:06:38.376007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.733 [2024-10-11 12:06:38.376020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.733 [2024-10-11 12:06:38.376028] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.733 [2024-10-11 12:06:38.376035] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.733 [2024-10-11 12:06:38.376048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.733 qpair failed and we were unable to recover it. 00:29:35.733 [2024-10-11 12:06:38.386081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.733 [2024-10-11 12:06:38.386135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.733 [2024-10-11 12:06:38.386148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.733 [2024-10-11 12:06:38.386155] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.733 [2024-10-11 12:06:38.386166] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.733 [2024-10-11 12:06:38.386180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.733 qpair failed and we were unable to recover it. 00:29:35.733 [2024-10-11 12:06:38.396084] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.733 [2024-10-11 12:06:38.396182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.733 [2024-10-11 12:06:38.396196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.733 [2024-10-11 12:06:38.396203] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.733 [2024-10-11 12:06:38.396210] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.733 [2024-10-11 12:06:38.396224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.733 qpair failed and we were unable to recover it. 
00:29:35.733 [2024-10-11 12:06:38.406000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.733 [2024-10-11 12:06:38.406045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.733 [2024-10-11 12:06:38.406058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.733 [2024-10-11 12:06:38.406071] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.733 [2024-10-11 12:06:38.406077] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.733 [2024-10-11 12:06:38.406091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.733 qpair failed and we were unable to recover it. 00:29:35.733 [2024-10-11 12:06:38.416012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.733 [2024-10-11 12:06:38.416065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.733 [2024-10-11 12:06:38.416079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.733 [2024-10-11 12:06:38.416086] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.733 [2024-10-11 12:06:38.416093] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.733 [2024-10-11 12:06:38.416106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.733 qpair failed and we were unable to recover it. 00:29:35.733 [2024-10-11 12:06:38.426052] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.733 [2024-10-11 12:06:38.426138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.733 [2024-10-11 12:06:38.426155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.733 [2024-10-11 12:06:38.426163] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.733 [2024-10-11 12:06:38.426170] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.733 [2024-10-11 12:06:38.426184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.733 qpair failed and we were unable to recover it. 
00:29:35.994 [2024-10-11 12:06:38.436204] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.995 [2024-10-11 12:06:38.436254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.995 [2024-10-11 12:06:38.436269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.995 [2024-10-11 12:06:38.436276] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.995 [2024-10-11 12:06:38.436283] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.995 [2024-10-11 12:06:38.436296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.995 qpair failed and we were unable to recover it. 00:29:35.995 [2024-10-11 12:06:38.446218] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.995 [2024-10-11 12:06:38.446264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.995 [2024-10-11 12:06:38.446278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.995 [2024-10-11 12:06:38.446285] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.995 [2024-10-11 12:06:38.446292] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.995 [2024-10-11 12:06:38.446305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.995 qpair failed and we were unable to recover it. 00:29:35.995 [2024-10-11 12:06:38.456245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.995 [2024-10-11 12:06:38.456290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.995 [2024-10-11 12:06:38.456303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.995 [2024-10-11 12:06:38.456311] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.995 [2024-10-11 12:06:38.456317] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.995 [2024-10-11 12:06:38.456331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.995 qpair failed and we were unable to recover it. 
00:29:35.995 [2024-10-11 12:06:38.466246] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.995 [2024-10-11 12:06:38.466297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.995 [2024-10-11 12:06:38.466310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.995 [2024-10-11 12:06:38.466318] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.995 [2024-10-11 12:06:38.466325] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.995 [2024-10-11 12:06:38.466338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.995 qpair failed and we were unable to recover it. 00:29:35.995 [2024-10-11 12:06:38.476317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.995 [2024-10-11 12:06:38.476394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.995 [2024-10-11 12:06:38.476407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.995 [2024-10-11 12:06:38.476414] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.995 [2024-10-11 12:06:38.476429] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.995 [2024-10-11 12:06:38.476443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.995 qpair failed and we were unable to recover it. 00:29:35.995 [2024-10-11 12:06:38.486347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.995 [2024-10-11 12:06:38.486396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.995 [2024-10-11 12:06:38.486409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.995 [2024-10-11 12:06:38.486416] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.995 [2024-10-11 12:06:38.486423] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.995 [2024-10-11 12:06:38.486437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.995 qpair failed and we were unable to recover it. 
00:29:35.995 [2024-10-11 12:06:38.496351] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.995 [2024-10-11 12:06:38.496409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.995 [2024-10-11 12:06:38.496422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.995 [2024-10-11 12:06:38.496430] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.995 [2024-10-11 12:06:38.496437] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.995 [2024-10-11 12:06:38.496450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.995 qpair failed and we were unable to recover it. 00:29:35.995 [2024-10-11 12:06:38.506388] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.995 [2024-10-11 12:06:38.506452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.995 [2024-10-11 12:06:38.506465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.995 [2024-10-11 12:06:38.506473] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.995 [2024-10-11 12:06:38.506479] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.995 [2024-10-11 12:06:38.506493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.995 qpair failed and we were unable to recover it. 00:29:35.995 [2024-10-11 12:06:38.516369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.995 [2024-10-11 12:06:38.516413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.995 [2024-10-11 12:06:38.516427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.995 [2024-10-11 12:06:38.516434] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.995 [2024-10-11 12:06:38.516441] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.995 [2024-10-11 12:06:38.516454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.995 qpair failed and we were unable to recover it. 
00:29:35.995 [2024-10-11 12:06:38.526417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.995 [2024-10-11 12:06:38.526470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.995 [2024-10-11 12:06:38.526487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.995 [2024-10-11 12:06:38.526494] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.995 [2024-10-11 12:06:38.526504] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.995 [2024-10-11 12:06:38.526519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.995 qpair failed and we were unable to recover it. 00:29:35.995 [2024-10-11 12:06:38.536452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.995 [2024-10-11 12:06:38.536500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.995 [2024-10-11 12:06:38.536514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.995 [2024-10-11 12:06:38.536522] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.995 [2024-10-11 12:06:38.536528] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.995 [2024-10-11 12:06:38.536542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.995 qpair failed and we were unable to recover it. 00:29:35.995 [2024-10-11 12:06:38.546488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.995 [2024-10-11 12:06:38.546545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.995 [2024-10-11 12:06:38.546559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.995 [2024-10-11 12:06:38.546566] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.995 [2024-10-11 12:06:38.546573] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.995 [2024-10-11 12:06:38.546586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.995 qpair failed and we were unable to recover it. 
00:29:35.995 [2024-10-11 12:06:38.556505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.995 [2024-10-11 12:06:38.556554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.995 [2024-10-11 12:06:38.556568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.995 [2024-10-11 12:06:38.556575] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.995 [2024-10-11 12:06:38.556581] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.995 [2024-10-11 12:06:38.556595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.995 qpair failed and we were unable to recover it. 00:29:35.995 [2024-10-11 12:06:38.566541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.995 [2024-10-11 12:06:38.566588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.995 [2024-10-11 12:06:38.566601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.995 [2024-10-11 12:06:38.566609] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.995 [2024-10-11 12:06:38.566619] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.995 [2024-10-11 12:06:38.566632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.995 qpair failed and we were unable to recover it. 00:29:35.995 [2024-10-11 12:06:38.576567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.995 [2024-10-11 12:06:38.576619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.996 [2024-10-11 12:06:38.576633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.996 [2024-10-11 12:06:38.576640] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.996 [2024-10-11 12:06:38.576646] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.996 [2024-10-11 12:06:38.576659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.996 qpair failed and we were unable to recover it. 
00:29:35.996 [2024-10-11 12:06:38.586548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.996 [2024-10-11 12:06:38.586593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.996 [2024-10-11 12:06:38.586608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.996 [2024-10-11 12:06:38.586615] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.996 [2024-10-11 12:06:38.586622] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.996 [2024-10-11 12:06:38.586635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.996 qpair failed and we were unable to recover it. 00:29:35.996 [2024-10-11 12:06:38.596486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.996 [2024-10-11 12:06:38.596529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.996 [2024-10-11 12:06:38.596543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.996 [2024-10-11 12:06:38.596552] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.996 [2024-10-11 12:06:38.596560] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.996 [2024-10-11 12:06:38.596574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.996 qpair failed and we were unable to recover it. 00:29:35.996 [2024-10-11 12:06:38.606613] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.996 [2024-10-11 12:06:38.606653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.996 [2024-10-11 12:06:38.606667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.996 [2024-10-11 12:06:38.606674] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.996 [2024-10-11 12:06:38.606680] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.996 [2024-10-11 12:06:38.606694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.996 qpair failed and we were unable to recover it. 
00:29:35.996 [2024-10-11 12:06:38.616677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.996 [2024-10-11 12:06:38.616734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.996 [2024-10-11 12:06:38.616748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.996 [2024-10-11 12:06:38.616755] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.996 [2024-10-11 12:06:38.616762] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.996 [2024-10-11 12:06:38.616776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.996 qpair failed and we were unable to recover it. 00:29:35.996 [2024-10-11 12:06:38.626709] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.996 [2024-10-11 12:06:38.626760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.996 [2024-10-11 12:06:38.626774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.996 [2024-10-11 12:06:38.626781] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.996 [2024-10-11 12:06:38.626788] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.996 [2024-10-11 12:06:38.626801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.996 qpair failed and we were unable to recover it. 00:29:35.996 [2024-10-11 12:06:38.636688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.996 [2024-10-11 12:06:38.636730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.996 [2024-10-11 12:06:38.636744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.996 [2024-10-11 12:06:38.636752] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.996 [2024-10-11 12:06:38.636758] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.996 [2024-10-11 12:06:38.636772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.996 qpair failed and we were unable to recover it. 
00:29:35.996 [2024-10-11 12:06:38.646755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.996 [2024-10-11 12:06:38.646805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.996 [2024-10-11 12:06:38.646819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.996 [2024-10-11 12:06:38.646827] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.996 [2024-10-11 12:06:38.646834] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.996 [2024-10-11 12:06:38.646847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.996 qpair failed and we were unable to recover it. 00:29:35.996 [2024-10-11 12:06:38.656777] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.996 [2024-10-11 12:06:38.656821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.996 [2024-10-11 12:06:38.656835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.996 [2024-10-11 12:06:38.656843] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.996 [2024-10-11 12:06:38.656854] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.996 [2024-10-11 12:06:38.656867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.996 qpair failed and we were unable to recover it. 00:29:35.996 [2024-10-11 12:06:38.666811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.996 [2024-10-11 12:06:38.666859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.996 [2024-10-11 12:06:38.666873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.996 [2024-10-11 12:06:38.666881] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.996 [2024-10-11 12:06:38.666888] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.996 [2024-10-11 12:06:38.666901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.996 qpair failed and we were unable to recover it. 
00:29:35.996 [2024-10-11 12:06:38.676691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.996 [2024-10-11 12:06:38.676733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.996 [2024-10-11 12:06:38.676747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.996 [2024-10-11 12:06:38.676754] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.996 [2024-10-11 12:06:38.676761] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.996 [2024-10-11 12:06:38.676774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.996 qpair failed and we were unable to recover it. 00:29:35.996 [2024-10-11 12:06:38.686860] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.996 [2024-10-11 12:06:38.686907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.996 [2024-10-11 12:06:38.686921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.996 [2024-10-11 12:06:38.686928] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.996 [2024-10-11 12:06:38.686935] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.996 [2024-10-11 12:06:38.686948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.996 qpair failed and we were unable to recover it. 00:29:35.996 [2024-10-11 12:06:38.696759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.996 [2024-10-11 12:06:38.696809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.996 [2024-10-11 12:06:38.696824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.996 [2024-10-11 12:06:38.696832] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.996 [2024-10-11 12:06:38.696838] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:35.996 [2024-10-11 12:06:38.696853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.996 qpair failed and we were unable to recover it. 
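Annotation: the records above all repeat one failure. Each attempt to add an I/O queue pair is rejected by the target ("Unknown controller ID 0x1"), the host-side Fabrics CONNECT poll then fails, and the qpair is abandoned. For orientation only, below is a minimal, hypothetical host-side sketch of that path using the public SPDK host API (spdk_nvme_connect, spdk_nvme_ctrlr_alloc_io_qpair, spdk_nvme_qpair_process_completions) against the transport ID printed in the log. It is not the autotest code that produced this output, and in this failure mode the I/O qpair allocation itself may simply return NULL.

/*
 * Hypothetical sketch (not the autotest code): connect to the target that
 * keeps rejecting the I/O-queue CONNECT above and poll the new qpair once.
 * The transport ID string mirrors the one printed in the log.
 */
#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	struct spdk_nvme_qpair *qpair;
	int rc;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "connect_sketch";
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same transport ID the failing CONNECT attempts report. */
	rc = spdk_nvme_transport_id_parse(&trid,
		"trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
		"subnqn:nqn.2016-06.io.spdk:cnode1");
	if (rc != 0) {
		return 1;
	}

	/* Admin-queue CONNECT; returns NULL if even that fails. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	/*
	 * I/O-queue CONNECT. When the target answers "Unknown controller ID",
	 * the nvme_fabric/nvme_tcp errors seen in the log are emitted here:
	 * the call typically returns NULL, or the qpair reports a transport
	 * error on its first poll.
	 */
	qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
	if (qpair == NULL) {
		spdk_nvme_detach(ctrlr);
		return 1;
	}

	/* Poll once; a negative return (e.g. -6/-ENXIO) matches the log. */
	rc = spdk_nvme_qpair_process_completions(qpair, 0);
	printf("process_completions rc = %d\n", rc);

	spdk_nvme_ctrlr_free_io_qpair(qpair);
	spdk_nvme_detach(ctrlr);
	return 0;
}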
00:29:36.259 [2024-10-11 12:06:38.706787] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.259 [2024-10-11 12:06:38.706840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.259 [2024-10-11 12:06:38.706855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.259 [2024-10-11 12:06:38.706862] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.259 [2024-10-11 12:06:38.706869] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.259 [2024-10-11 12:06:38.706882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.259 qpair failed and we were unable to recover it. 00:29:36.259 [2024-10-11 12:06:38.716816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.259 [2024-10-11 12:06:38.716876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.259 [2024-10-11 12:06:38.716890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.259 [2024-10-11 12:06:38.716897] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.259 [2024-10-11 12:06:38.716903] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.259 [2024-10-11 12:06:38.716917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.259 qpair failed and we were unable to recover it. 00:29:36.259 [2024-10-11 12:06:38.726948] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.259 [2024-10-11 12:06:38.727006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.259 [2024-10-11 12:06:38.727020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.259 [2024-10-11 12:06:38.727028] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.259 [2024-10-11 12:06:38.727034] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.259 [2024-10-11 12:06:38.727048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.259 qpair failed and we were unable to recover it. 
00:29:36.259 [2024-10-11 12:06:38.736856] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.259 [2024-10-11 12:06:38.736904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.259 [2024-10-11 12:06:38.736918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.259 [2024-10-11 12:06:38.736926] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.259 [2024-10-11 12:06:38.736932] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.259 [2024-10-11 12:06:38.736946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.259 qpair failed and we were unable to recover it. 00:29:36.259 [2024-10-11 12:06:38.747025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.259 [2024-10-11 12:06:38.747081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.259 [2024-10-11 12:06:38.747095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.259 [2024-10-11 12:06:38.747106] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.259 [2024-10-11 12:06:38.747113] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.259 [2024-10-11 12:06:38.747127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.259 qpair failed and we were unable to recover it. 00:29:36.259 [2024-10-11 12:06:38.757027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.259 [2024-10-11 12:06:38.757082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.259 [2024-10-11 12:06:38.757096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.259 [2024-10-11 12:06:38.757103] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.259 [2024-10-11 12:06:38.757110] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.259 [2024-10-11 12:06:38.757123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.259 qpair failed and we were unable to recover it. 
00:29:36.259 [2024-10-11 12:06:38.767053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.259 [2024-10-11 12:06:38.767130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.259 [2024-10-11 12:06:38.767146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.259 [2024-10-11 12:06:38.767154] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.259 [2024-10-11 12:06:38.767161] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.259 [2024-10-11 12:06:38.767175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.259 qpair failed and we were unable to recover it. 00:29:36.259 [2024-10-11 12:06:38.776999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.259 [2024-10-11 12:06:38.777047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.259 [2024-10-11 12:06:38.777060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.259 [2024-10-11 12:06:38.777071] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.259 [2024-10-11 12:06:38.777078] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.259 [2024-10-11 12:06:38.777092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.259 qpair failed and we were unable to recover it. 00:29:36.259 [2024-10-11 12:06:38.787136] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.259 [2024-10-11 12:06:38.787186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.259 [2024-10-11 12:06:38.787200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.259 [2024-10-11 12:06:38.787207] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.259 [2024-10-11 12:06:38.787214] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.259 [2024-10-11 12:06:38.787228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.259 qpair failed and we were unable to recover it. 
00:29:36.259 [2024-10-11 12:06:38.797160] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.259 [2024-10-11 12:06:38.797209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.259 [2024-10-11 12:06:38.797223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.259 [2024-10-11 12:06:38.797231] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.259 [2024-10-11 12:06:38.797237] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.259 [2024-10-11 12:06:38.797251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.259 qpair failed and we were unable to recover it. 00:29:36.259 [2024-10-11 12:06:38.807168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.259 [2024-10-11 12:06:38.807220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.259 [2024-10-11 12:06:38.807235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.259 [2024-10-11 12:06:38.807242] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.259 [2024-10-11 12:06:38.807249] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.259 [2024-10-11 12:06:38.807262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.259 qpair failed and we were unable to recover it. 00:29:36.259 [2024-10-11 12:06:38.817206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.259 [2024-10-11 12:06:38.817255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.259 [2024-10-11 12:06:38.817268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.259 [2024-10-11 12:06:38.817275] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.259 [2024-10-11 12:06:38.817282] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.259 [2024-10-11 12:06:38.817297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.259 qpair failed and we were unable to recover it. 
00:29:36.259 [2024-10-11 12:06:38.827224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.259 [2024-10-11 12:06:38.827271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.259 [2024-10-11 12:06:38.827284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.259 [2024-10-11 12:06:38.827292] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.259 [2024-10-11 12:06:38.827298] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.259 [2024-10-11 12:06:38.827312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.259 qpair failed and we were unable to recover it. 00:29:36.259 [2024-10-11 12:06:38.837273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.259 [2024-10-11 12:06:38.837314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.259 [2024-10-11 12:06:38.837328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.259 [2024-10-11 12:06:38.837339] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.260 [2024-10-11 12:06:38.837346] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.260 [2024-10-11 12:06:38.837359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.260 qpair failed and we were unable to recover it. 00:29:36.260 [2024-10-11 12:06:38.847251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.260 [2024-10-11 12:06:38.847351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.260 [2024-10-11 12:06:38.847366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.260 [2024-10-11 12:06:38.847373] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.260 [2024-10-11 12:06:38.847380] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.260 [2024-10-11 12:06:38.847394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.260 qpair failed and we were unable to recover it. 
00:29:36.260 [2024-10-11 12:06:38.857317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.260 [2024-10-11 12:06:38.857436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.260 [2024-10-11 12:06:38.857452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.260 [2024-10-11 12:06:38.857459] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.260 [2024-10-11 12:06:38.857466] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.260 [2024-10-11 12:06:38.857479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.260 qpair failed and we were unable to recover it. 00:29:36.260 [2024-10-11 12:06:38.867352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.260 [2024-10-11 12:06:38.867404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.260 [2024-10-11 12:06:38.867418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.260 [2024-10-11 12:06:38.867425] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.260 [2024-10-11 12:06:38.867432] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.260 [2024-10-11 12:06:38.867445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.260 qpair failed and we were unable to recover it. 00:29:36.260 [2024-10-11 12:06:38.877418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.260 [2024-10-11 12:06:38.877473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.260 [2024-10-11 12:06:38.877486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.260 [2024-10-11 12:06:38.877493] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.260 [2024-10-11 12:06:38.877500] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.260 [2024-10-11 12:06:38.877514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.260 qpair failed and we were unable to recover it. 
00:29:36.260 [2024-10-11 12:06:38.887382] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.260 [2024-10-11 12:06:38.887423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.260 [2024-10-11 12:06:38.887437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.260 [2024-10-11 12:06:38.887444] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.260 [2024-10-11 12:06:38.887451] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.260 [2024-10-11 12:06:38.887465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.260 qpair failed and we were unable to recover it. 00:29:36.260 [2024-10-11 12:06:38.897301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.260 [2024-10-11 12:06:38.897358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.260 [2024-10-11 12:06:38.897375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.260 [2024-10-11 12:06:38.897383] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.260 [2024-10-11 12:06:38.897389] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.260 [2024-10-11 12:06:38.897404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.260 qpair failed and we were unable to recover it. 00:29:36.260 [2024-10-11 12:06:38.907515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.260 [2024-10-11 12:06:38.907563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.260 [2024-10-11 12:06:38.907577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.260 [2024-10-11 12:06:38.907585] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.260 [2024-10-11 12:06:38.907592] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.260 [2024-10-11 12:06:38.907606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.260 qpair failed and we were unable to recover it. 
00:29:36.260 [2024-10-11 12:06:38.917475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.260 [2024-10-11 12:06:38.917540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.260 [2024-10-11 12:06:38.917554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.260 [2024-10-11 12:06:38.917562] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.260 [2024-10-11 12:06:38.917568] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.260 [2024-10-11 12:06:38.917583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.260 qpair failed and we were unable to recover it. 00:29:36.260 [2024-10-11 12:06:38.927376] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.260 [2024-10-11 12:06:38.927426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.260 [2024-10-11 12:06:38.927441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.260 [2024-10-11 12:06:38.927451] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.260 [2024-10-11 12:06:38.927458] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.260 [2024-10-11 12:06:38.927472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.260 qpair failed and we were unable to recover it. 00:29:36.260 [2024-10-11 12:06:38.937536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.260 [2024-10-11 12:06:38.937582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.260 [2024-10-11 12:06:38.937596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.260 [2024-10-11 12:06:38.937603] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.260 [2024-10-11 12:06:38.937610] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.260 [2024-10-11 12:06:38.937623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.260 qpair failed and we were unable to recover it. 
00:29:36.260 [2024-10-11 12:06:38.947579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.260 [2024-10-11 12:06:38.947631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.260 [2024-10-11 12:06:38.947645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.260 [2024-10-11 12:06:38.947652] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.260 [2024-10-11 12:06:38.947659] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.260 [2024-10-11 12:06:38.947673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.260 qpair failed and we were unable to recover it. 00:29:36.260 [2024-10-11 12:06:38.957587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.260 [2024-10-11 12:06:38.957635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.260 [2024-10-11 12:06:38.957649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.260 [2024-10-11 12:06:38.957656] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.260 [2024-10-11 12:06:38.957663] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.260 [2024-10-11 12:06:38.957676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.260 qpair failed and we were unable to recover it. 00:29:36.522 [2024-10-11 12:06:38.967588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.522 [2024-10-11 12:06:38.967632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.522 [2024-10-11 12:06:38.967646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.522 [2024-10-11 12:06:38.967653] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.522 [2024-10-11 12:06:38.967660] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.522 [2024-10-11 12:06:38.967673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.522 qpair failed and we were unable to recover it. 
00:29:36.522 [2024-10-11 12:06:38.977662] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.522 [2024-10-11 12:06:38.977758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.522 [2024-10-11 12:06:38.977771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.522 [2024-10-11 12:06:38.977779] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.522 [2024-10-11 12:06:38.977786] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.522 [2024-10-11 12:06:38.977799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.522 qpair failed and we were unable to recover it. 00:29:36.522 [2024-10-11 12:06:38.987686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.522 [2024-10-11 12:06:38.987733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.522 [2024-10-11 12:06:38.987747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.522 [2024-10-11 12:06:38.987754] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.522 [2024-10-11 12:06:38.987761] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.522 [2024-10-11 12:06:38.987775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.522 qpair failed and we were unable to recover it. 00:29:36.522 [2024-10-11 12:06:38.997701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.522 [2024-10-11 12:06:38.997744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.522 [2024-10-11 12:06:38.997757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.522 [2024-10-11 12:06:38.997765] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.522 [2024-10-11 12:06:38.997771] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.522 [2024-10-11 12:06:38.997785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.522 qpair failed and we were unable to recover it. 
00:29:36.522 [2024-10-11 12:06:39.007724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.522 [2024-10-11 12:06:39.007779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.522 [2024-10-11 12:06:39.007792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.522 [2024-10-11 12:06:39.007800] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.522 [2024-10-11 12:06:39.007806] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.522 [2024-10-11 12:06:39.007820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.522 qpair failed and we were unable to recover it. 00:29:36.522 [2024-10-11 12:06:39.017631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.522 [2024-10-11 12:06:39.017679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.522 [2024-10-11 12:06:39.017693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.522 [2024-10-11 12:06:39.017703] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.522 [2024-10-11 12:06:39.017710] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.522 [2024-10-11 12:06:39.017723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.522 qpair failed and we were unable to recover it. 00:29:36.522 [2024-10-11 12:06:39.027779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.523 [2024-10-11 12:06:39.027823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.523 [2024-10-11 12:06:39.027837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.523 [2024-10-11 12:06:39.027845] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.523 [2024-10-11 12:06:39.027851] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.523 [2024-10-11 12:06:39.027864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.523 qpair failed and we were unable to recover it. 
00:29:36.523 [2024-10-11 12:06:39.037811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.523 [2024-10-11 12:06:39.037854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.523 [2024-10-11 12:06:39.037867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.523 [2024-10-11 12:06:39.037874] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.523 [2024-10-11 12:06:39.037881] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.523 [2024-10-11 12:06:39.037894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.523 qpair failed and we were unable to recover it. 00:29:36.523 [2024-10-11 12:06:39.047845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.523 [2024-10-11 12:06:39.047889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.523 [2024-10-11 12:06:39.047905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.523 [2024-10-11 12:06:39.047912] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.523 [2024-10-11 12:06:39.047919] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.523 [2024-10-11 12:06:39.047933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.523 qpair failed and we were unable to recover it. 00:29:36.523 [2024-10-11 12:06:39.057870] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.523 [2024-10-11 12:06:39.057917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.523 [2024-10-11 12:06:39.057931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.523 [2024-10-11 12:06:39.057938] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.523 [2024-10-11 12:06:39.057944] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.523 [2024-10-11 12:06:39.057958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.523 qpair failed and we were unable to recover it. 
00:29:36.523 [2024-10-11 12:06:39.067772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.523 [2024-10-11 12:06:39.067822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.523 [2024-10-11 12:06:39.067837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.523 [2024-10-11 12:06:39.067844] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.523 [2024-10-11 12:06:39.067851] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.523 [2024-10-11 12:06:39.067865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.523 qpair failed and we were unable to recover it. 00:29:36.523 [2024-10-11 12:06:39.077909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.523 [2024-10-11 12:06:39.077951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.523 [2024-10-11 12:06:39.077965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.523 [2024-10-11 12:06:39.077972] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.523 [2024-10-11 12:06:39.077979] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.523 [2024-10-11 12:06:39.077993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.523 qpair failed and we were unable to recover it. 00:29:36.523 [2024-10-11 12:06:39.087916] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.523 [2024-10-11 12:06:39.087961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.523 [2024-10-11 12:06:39.087976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.523 [2024-10-11 12:06:39.087983] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.523 [2024-10-11 12:06:39.087990] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.523 [2024-10-11 12:06:39.088003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.523 qpair failed and we were unable to recover it. 
00:29:36.523 [2024-10-11 12:06:39.097960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.523 [2024-10-11 12:06:39.098008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.523 [2024-10-11 12:06:39.098021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.523 [2024-10-11 12:06:39.098029] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.523 [2024-10-11 12:06:39.098035] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.523 [2024-10-11 12:06:39.098049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.523 qpair failed and we were unable to recover it. 00:29:36.523 [2024-10-11 12:06:39.107914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.523 [2024-10-11 12:06:39.107982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.523 [2024-10-11 12:06:39.107996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.523 [2024-10-11 12:06:39.108007] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.523 [2024-10-11 12:06:39.108014] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.523 [2024-10-11 12:06:39.108028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.523 qpair failed and we were unable to recover it. 00:29:36.523 [2024-10-11 12:06:39.118022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.523 [2024-10-11 12:06:39.118067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.523 [2024-10-11 12:06:39.118082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.523 [2024-10-11 12:06:39.118089] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.523 [2024-10-11 12:06:39.118096] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.523 [2024-10-11 12:06:39.118110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.523 qpair failed and we were unable to recover it. 
00:29:36.523 [2024-10-11 12:06:39.128047] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.523 [2024-10-11 12:06:39.128100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.523 [2024-10-11 12:06:39.128114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.523 [2024-10-11 12:06:39.128121] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.523 [2024-10-11 12:06:39.128128] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.523 [2024-10-11 12:06:39.128142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.523 qpair failed and we were unable to recover it. 00:29:36.523 [2024-10-11 12:06:39.138115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.523 [2024-10-11 12:06:39.138168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.523 [2024-10-11 12:06:39.138182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.523 [2024-10-11 12:06:39.138189] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.523 [2024-10-11 12:06:39.138196] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.523 [2024-10-11 12:06:39.138210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.523 qpair failed and we were unable to recover it. 00:29:36.523 [2024-10-11 12:06:39.148119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.523 [2024-10-11 12:06:39.148168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.523 [2024-10-11 12:06:39.148183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.523 [2024-10-11 12:06:39.148191] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.523 [2024-10-11 12:06:39.148197] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.523 [2024-10-11 12:06:39.148211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.523 qpair failed and we were unable to recover it. 
00:29:36.523 [2024-10-11 12:06:39.158148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.523 [2024-10-11 12:06:39.158194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.523 [2024-10-11 12:06:39.158208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.523 [2024-10-11 12:06:39.158215] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.523 [2024-10-11 12:06:39.158222] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.523 [2024-10-11 12:06:39.158236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.523 qpair failed and we were unable to recover it. 00:29:36.523 [2024-10-11 12:06:39.168136] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.523 [2024-10-11 12:06:39.168183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.523 [2024-10-11 12:06:39.168197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.523 [2024-10-11 12:06:39.168204] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.524 [2024-10-11 12:06:39.168210] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.524 [2024-10-11 12:06:39.168224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.524 qpair failed and we were unable to recover it. 00:29:36.524 [2024-10-11 12:06:39.178085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.524 [2024-10-11 12:06:39.178132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.524 [2024-10-11 12:06:39.178146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.524 [2024-10-11 12:06:39.178153] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.524 [2024-10-11 12:06:39.178160] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.524 [2024-10-11 12:06:39.178174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.524 qpair failed and we were unable to recover it. 
00:29:36.524 [2024-10-11 12:06:39.188246] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.524 [2024-10-11 12:06:39.188298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.524 [2024-10-11 12:06:39.188312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.524 [2024-10-11 12:06:39.188319] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.524 [2024-10-11 12:06:39.188326] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.524 [2024-10-11 12:06:39.188339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.524 qpair failed and we were unable to recover it. 00:29:36.524 [2024-10-11 12:06:39.198223] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.524 [2024-10-11 12:06:39.198266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.524 [2024-10-11 12:06:39.198283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.524 [2024-10-11 12:06:39.198290] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.524 [2024-10-11 12:06:39.198297] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.524 [2024-10-11 12:06:39.198311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.524 qpair failed and we were unable to recover it. 00:29:36.524 [2024-10-11 12:06:39.208255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.524 [2024-10-11 12:06:39.208346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.524 [2024-10-11 12:06:39.208361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.524 [2024-10-11 12:06:39.208368] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.524 [2024-10-11 12:06:39.208375] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.524 [2024-10-11 12:06:39.208389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.524 qpair failed and we were unable to recover it. 
00:29:36.524 [2024-10-11 12:06:39.218306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.524 [2024-10-11 12:06:39.218394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.524 [2024-10-11 12:06:39.218407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.524 [2024-10-11 12:06:39.218415] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.524 [2024-10-11 12:06:39.218422] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.524 [2024-10-11 12:06:39.218435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.524 qpair failed and we were unable to recover it. 00:29:36.786 [2024-10-11 12:06:39.228312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.786 [2024-10-11 12:06:39.228390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.786 [2024-10-11 12:06:39.228404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.786 [2024-10-11 12:06:39.228411] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.786 [2024-10-11 12:06:39.228418] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.786 [2024-10-11 12:06:39.228431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.786 qpair failed and we were unable to recover it. 00:29:36.786 [2024-10-11 12:06:39.238364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.786 [2024-10-11 12:06:39.238407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.786 [2024-10-11 12:06:39.238421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.786 [2024-10-11 12:06:39.238429] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.786 [2024-10-11 12:06:39.238435] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.786 [2024-10-11 12:06:39.238448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.786 qpair failed and we were unable to recover it. 
00:29:36.786 [2024-10-11 12:06:39.248379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.786 [2024-10-11 12:06:39.248465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.786 [2024-10-11 12:06:39.248479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.786 [2024-10-11 12:06:39.248487] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.786 [2024-10-11 12:06:39.248494] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.786 [2024-10-11 12:06:39.248508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.786 qpair failed and we were unable to recover it. 00:29:36.786 [2024-10-11 12:06:39.258280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.786 [2024-10-11 12:06:39.258328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.786 [2024-10-11 12:06:39.258341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.786 [2024-10-11 12:06:39.258348] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.786 [2024-10-11 12:06:39.258355] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.786 [2024-10-11 12:06:39.258369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.786 qpair failed and we were unable to recover it. 00:29:36.786 [2024-10-11 12:06:39.268442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.786 [2024-10-11 12:06:39.268489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.786 [2024-10-11 12:06:39.268502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.786 [2024-10-11 12:06:39.268510] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.786 [2024-10-11 12:06:39.268516] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.786 [2024-10-11 12:06:39.268529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.786 qpair failed and we were unable to recover it. 
00:29:36.786 [2024-10-11 12:06:39.278472] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.786 [2024-10-11 12:06:39.278514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.786 [2024-10-11 12:06:39.278527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.786 [2024-10-11 12:06:39.278535] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.786 [2024-10-11 12:06:39.278542] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.786 [2024-10-11 12:06:39.278555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.786 qpair failed and we were unable to recover it. 00:29:36.786 [2024-10-11 12:06:39.288488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.786 [2024-10-11 12:06:39.288529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.786 [2024-10-11 12:06:39.288546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.787 [2024-10-11 12:06:39.288554] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.787 [2024-10-11 12:06:39.288560] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.787 [2024-10-11 12:06:39.288573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.787 qpair failed and we were unable to recover it. 00:29:36.787 [2024-10-11 12:06:39.298541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.787 [2024-10-11 12:06:39.298589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.787 [2024-10-11 12:06:39.298602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.787 [2024-10-11 12:06:39.298609] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.787 [2024-10-11 12:06:39.298616] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.787 [2024-10-11 12:06:39.298629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.787 qpair failed and we were unable to recover it. 
00:29:36.787 [2024-10-11 12:06:39.308597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.787 [2024-10-11 12:06:39.308673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.787 [2024-10-11 12:06:39.308687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.787 [2024-10-11 12:06:39.308694] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.787 [2024-10-11 12:06:39.308701] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.787 [2024-10-11 12:06:39.308715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.787 qpair failed and we were unable to recover it. 00:29:36.787 [2024-10-11 12:06:39.318564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.787 [2024-10-11 12:06:39.318610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.787 [2024-10-11 12:06:39.318623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.787 [2024-10-11 12:06:39.318631] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.787 [2024-10-11 12:06:39.318637] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.787 [2024-10-11 12:06:39.318651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.787 qpair failed and we were unable to recover it. 00:29:36.787 [2024-10-11 12:06:39.328471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.787 [2024-10-11 12:06:39.328516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.787 [2024-10-11 12:06:39.328531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.787 [2024-10-11 12:06:39.328539] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.787 [2024-10-11 12:06:39.328545] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.787 [2024-10-11 12:06:39.328558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.787 qpair failed and we were unable to recover it. 
00:29:36.787 [2024-10-11 12:06:39.338630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.787 [2024-10-11 12:06:39.338678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.787 [2024-10-11 12:06:39.338692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.787 [2024-10-11 12:06:39.338699] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.787 [2024-10-11 12:06:39.338706] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.787 [2024-10-11 12:06:39.338720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.787 qpair failed and we were unable to recover it. 00:29:36.787 [2024-10-11 12:06:39.348645] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.787 [2024-10-11 12:06:39.348688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.787 [2024-10-11 12:06:39.348701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.787 [2024-10-11 12:06:39.348709] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.787 [2024-10-11 12:06:39.348715] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.787 [2024-10-11 12:06:39.348728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.787 qpair failed and we were unable to recover it. 00:29:36.787 [2024-10-11 12:06:39.358695] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.787 [2024-10-11 12:06:39.358741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.787 [2024-10-11 12:06:39.358754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.787 [2024-10-11 12:06:39.358762] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.787 [2024-10-11 12:06:39.358768] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.787 [2024-10-11 12:06:39.358781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.787 qpair failed and we were unable to recover it. 
00:29:36.787 [2024-10-11 12:06:39.368691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.787 [2024-10-11 12:06:39.368738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.787 [2024-10-11 12:06:39.368751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.787 [2024-10-11 12:06:39.368758] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.787 [2024-10-11 12:06:39.368765] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.787 [2024-10-11 12:06:39.368778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.787 qpair failed and we were unable to recover it. 00:29:36.787 [2024-10-11 12:06:39.378745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.787 [2024-10-11 12:06:39.378799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.787 [2024-10-11 12:06:39.378829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.787 [2024-10-11 12:06:39.378838] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.787 [2024-10-11 12:06:39.378845] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.787 [2024-10-11 12:06:39.378865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.787 qpair failed and we were unable to recover it. 00:29:36.787 [2024-10-11 12:06:39.388641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.787 [2024-10-11 12:06:39.388704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.787 [2024-10-11 12:06:39.388720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.787 [2024-10-11 12:06:39.388728] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.787 [2024-10-11 12:06:39.388734] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.787 [2024-10-11 12:06:39.388749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.787 qpair failed and we were unable to recover it. 
00:29:36.787 [2024-10-11 12:06:39.398668] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.787 [2024-10-11 12:06:39.398713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.787 [2024-10-11 12:06:39.398727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.787 [2024-10-11 12:06:39.398735] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.787 [2024-10-11 12:06:39.398741] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.787 [2024-10-11 12:06:39.398755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.787 qpair failed and we were unable to recover it. 00:29:36.787 [2024-10-11 12:06:39.408814] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.787 [2024-10-11 12:06:39.408868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.787 [2024-10-11 12:06:39.408881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.787 [2024-10-11 12:06:39.408889] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.787 [2024-10-11 12:06:39.408896] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.787 [2024-10-11 12:06:39.408909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.787 qpair failed and we were unable to recover it. 00:29:36.787 [2024-10-11 12:06:39.418901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.787 [2024-10-11 12:06:39.418946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.787 [2024-10-11 12:06:39.418960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.787 [2024-10-11 12:06:39.418967] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.787 [2024-10-11 12:06:39.418974] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.787 [2024-10-11 12:06:39.418992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.787 qpair failed and we were unable to recover it. 
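While the host side cycles through the same rejection, the target's view can be checked out of band. The sketch below is read-only and assumption-laden: it presumes an nvmf_tgt is still running and that scripts/rpc.py from this job's SPDK checkout can reach its default RPC socket; the workspace path is the one used elsewhere in this log.

#!/usr/bin/env bash
# Hedged sketch: read-only inspection of the SPDK target while the CONNECT
# retries above keep failing. Assumes a running nvmf_tgt reachable through the
# default RPC socket via scripts/rpc.py in this job's SPDK checkout.
set -euo pipefail

SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}

# Dump the configured subsystems; cnode1 and its TCP listener should appear.
"$SPDK_DIR/scripts/rpc.py" nvmf_get_subsystems

# Confirm something is listening on the advertised port.
ss -ltnp | grep -w 4420 || echo "no listener on 4420"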
00:29:36.787 [2024-10-11 12:06:39.428883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.787 [2024-10-11 12:06:39.428936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.787 [2024-10-11 12:06:39.428950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.787 [2024-10-11 12:06:39.428958] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.788 [2024-10-11 12:06:39.428964] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.788 [2024-10-11 12:06:39.428978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.788 qpair failed and we were unable to recover it. 00:29:36.788 [2024-10-11 12:06:39.438895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.788 [2024-10-11 12:06:39.438937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.788 [2024-10-11 12:06:39.438951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.788 [2024-10-11 12:06:39.438958] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.788 [2024-10-11 12:06:39.438965] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.788 [2024-10-11 12:06:39.438978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.788 qpair failed and we were unable to recover it. 00:29:36.788 [2024-10-11 12:06:39.448926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.788 [2024-10-11 12:06:39.448970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.788 [2024-10-11 12:06:39.448983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.788 [2024-10-11 12:06:39.448991] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.788 [2024-10-11 12:06:39.448998] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.788 [2024-10-11 12:06:39.449011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.788 qpair failed and we were unable to recover it. 
00:29:36.788 [2024-10-11 12:06:39.458970] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.788 [2024-10-11 12:06:39.459018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.788 [2024-10-11 12:06:39.459031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.788 [2024-10-11 12:06:39.459039] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.788 [2024-10-11 12:06:39.459045] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.788 [2024-10-11 12:06:39.459059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.788 qpair failed and we were unable to recover it. 00:29:36.788 [2024-10-11 12:06:39.468865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.788 [2024-10-11 12:06:39.468957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.788 [2024-10-11 12:06:39.468974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.788 [2024-10-11 12:06:39.468982] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.788 [2024-10-11 12:06:39.468988] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.788 [2024-10-11 12:06:39.469002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.788 qpair failed and we were unable to recover it. 00:29:36.788 [2024-10-11 12:06:39.479048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.788 [2024-10-11 12:06:39.479134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.788 [2024-10-11 12:06:39.479148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.788 [2024-10-11 12:06:39.479156] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.788 [2024-10-11 12:06:39.479162] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:36.788 [2024-10-11 12:06:39.479176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.788 qpair failed and we were unable to recover it. 
00:29:37.050 [2024-10-11 12:06:39.489034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.050 [2024-10-11 12:06:39.489106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.050 [2024-10-11 12:06:39.489119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.050 [2024-10-11 12:06:39.489127] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.050 [2024-10-11 12:06:39.489134] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:37.050 [2024-10-11 12:06:39.489148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:37.050 qpair failed and we were unable to recover it. 00:29:37.050 [2024-10-11 12:06:39.499054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.050 [2024-10-11 12:06:39.499108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.050 [2024-10-11 12:06:39.499121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.050 [2024-10-11 12:06:39.499129] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.050 [2024-10-11 12:06:39.499135] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:37.050 [2024-10-11 12:06:39.499149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:37.050 qpair failed and we were unable to recover it. 00:29:37.050 [2024-10-11 12:06:39.509102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.050 [2024-10-11 12:06:39.509201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.050 [2024-10-11 12:06:39.509215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.050 [2024-10-11 12:06:39.509223] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.050 [2024-10-11 12:06:39.509229] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:37.050 [2024-10-11 12:06:39.509246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:37.050 qpair failed and we were unable to recover it. 
00:29:37.050 [2024-10-11 12:06:39.519123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.050 [2024-10-11 12:06:39.519174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.050 [2024-10-11 12:06:39.519187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.050 [2024-10-11 12:06:39.519195] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.050 [2024-10-11 12:06:39.519201] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:37.050 [2024-10-11 12:06:39.519215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:37.050 qpair failed and we were unable to recover it. 00:29:37.050 [2024-10-11 12:06:39.529132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.050 [2024-10-11 12:06:39.529181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.050 [2024-10-11 12:06:39.529195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.050 [2024-10-11 12:06:39.529202] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.050 [2024-10-11 12:06:39.529209] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:37.050 [2024-10-11 12:06:39.529222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:37.050 qpair failed and we were unable to recover it. 00:29:37.050 [2024-10-11 12:06:39.539173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.050 [2024-10-11 12:06:39.539221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.050 [2024-10-11 12:06:39.539234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.050 [2024-10-11 12:06:39.539242] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.050 [2024-10-11 12:06:39.539248] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:37.050 [2024-10-11 12:06:39.539262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:37.050 qpair failed and we were unable to recover it. 
00:29:37.050 [2024-10-11 12:06:39.549096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.050 [2024-10-11 12:06:39.549150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.050 [2024-10-11 12:06:39.549165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.050 [2024-10-11 12:06:39.549173] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.050 [2024-10-11 12:06:39.549180] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:37.050 [2024-10-11 12:06:39.549193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:37.050 qpair failed and we were unable to recover it. 00:29:37.050 [2024-10-11 12:06:39.559264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.050 [2024-10-11 12:06:39.559348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.050 [2024-10-11 12:06:39.559365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.050 [2024-10-11 12:06:39.559374] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.050 [2024-10-11 12:06:39.559380] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:37.050 [2024-10-11 12:06:39.559394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:37.050 qpair failed and we were unable to recover it. 00:29:37.050 [2024-10-11 12:06:39.569249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.050 [2024-10-11 12:06:39.569297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.050 [2024-10-11 12:06:39.569310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.050 [2024-10-11 12:06:39.569317] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.050 [2024-10-11 12:06:39.569324] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:37.050 [2024-10-11 12:06:39.569337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:37.050 qpair failed and we were unable to recover it. 
00:29:37.050 [2024-10-11 12:06:39.579303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.050 [2024-10-11 12:06:39.579349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.050 [2024-10-11 12:06:39.579362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.050 [2024-10-11 12:06:39.579370] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.050 [2024-10-11 12:06:39.579376] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:37.050 [2024-10-11 12:06:39.579390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:37.050 qpair failed and we were unable to recover it. 00:29:37.050 [2024-10-11 12:06:39.589233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.051 [2024-10-11 12:06:39.589295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.051 [2024-10-11 12:06:39.589309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.051 [2024-10-11 12:06:39.589316] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.051 [2024-10-11 12:06:39.589323] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:37.051 [2024-10-11 12:06:39.589337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:37.051 qpair failed and we were unable to recover it. 00:29:37.051 [2024-10-11 12:06:39.599335] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.051 [2024-10-11 12:06:39.599384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.051 [2024-10-11 12:06:39.599398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.051 [2024-10-11 12:06:39.599405] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.051 [2024-10-11 12:06:39.599412] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:37.051 [2024-10-11 12:06:39.599429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:37.051 qpair failed and we were unable to recover it. 
00:29:37.051 [2024-10-11 12:06:39.609364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.051 [2024-10-11 12:06:39.609403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.051 [2024-10-11 12:06:39.609417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.051 [2024-10-11 12:06:39.609424] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.051 [2024-10-11 12:06:39.609431] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:37.051 [2024-10-11 12:06:39.609444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:37.051 qpair failed and we were unable to recover it. 00:29:37.051 [2024-10-11 12:06:39.619386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.051 [2024-10-11 12:06:39.619442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.051 [2024-10-11 12:06:39.619455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.051 [2024-10-11 12:06:39.619462] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.051 [2024-10-11 12:06:39.619469] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:37.051 [2024-10-11 12:06:39.619482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:37.051 qpair failed and we were unable to recover it. 00:29:37.051 [2024-10-11 12:06:39.629431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.051 [2024-10-11 12:06:39.629479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.051 [2024-10-11 12:06:39.629493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.051 [2024-10-11 12:06:39.629500] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.051 [2024-10-11 12:06:39.629507] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:37.051 [2024-10-11 12:06:39.629521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:37.051 qpair failed and we were unable to recover it. 
00:29:37.051 [2024-10-11 12:06:39.639432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.051 [2024-10-11 12:06:39.639476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.051 [2024-10-11 12:06:39.639489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.051 [2024-10-11 12:06:39.639496] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.051 [2024-10-11 12:06:39.639502] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:37.051 [2024-10-11 12:06:39.639515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:37.051 qpair failed and we were unable to recover it. 00:29:37.051 [2024-10-11 12:06:39.649469] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.051 [2024-10-11 12:06:39.649512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.051 [2024-10-11 12:06:39.649529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.051 [2024-10-11 12:06:39.649537] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.051 [2024-10-11 12:06:39.649543] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:37.051 [2024-10-11 12:06:39.649557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:37.051 qpair failed and we were unable to recover it. 00:29:37.051 [2024-10-11 12:06:39.659511] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.051 [2024-10-11 12:06:39.659555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.051 [2024-10-11 12:06:39.659568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.051 [2024-10-11 12:06:39.659576] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.051 [2024-10-11 12:06:39.659583] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:37.051 [2024-10-11 12:06:39.659596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:37.051 qpair failed and we were unable to recover it. 
00:29:37.051 [2024-10-11 12:06:39.669541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.051 [2024-10-11 12:06:39.669609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.051 [2024-10-11 12:06:39.669623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.051 [2024-10-11 12:06:39.669630] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.051 [2024-10-11 12:06:39.669637] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:37.051 [2024-10-11 12:06:39.669651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:37.051 qpair failed and we were unable to recover it. 00:29:37.051 [2024-10-11 12:06:39.679421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.051 [2024-10-11 12:06:39.679464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.051 [2024-10-11 12:06:39.679477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.051 [2024-10-11 12:06:39.679484] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.051 [2024-10-11 12:06:39.679491] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:37.051 [2024-10-11 12:06:39.679505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:37.051 qpair failed and we were unable to recover it. 00:29:37.051 [2024-10-11 12:06:39.689576] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.051 [2024-10-11 12:06:39.689658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.051 [2024-10-11 12:06:39.689672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.051 [2024-10-11 12:06:39.689680] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.051 [2024-10-11 12:06:39.689686] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:37.051 [2024-10-11 12:06:39.689703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:37.051 qpair failed and we were unable to recover it. 
00:29:37.051 [2024-10-11 12:06:39.699610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.051 [2024-10-11 12:06:39.699656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.051 [2024-10-11 12:06:39.699670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.051 [2024-10-11 12:06:39.699677] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.051 [2024-10-11 12:06:39.699683] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc817c0 00:29:37.051 [2024-10-11 12:06:39.699697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:37.051 qpair failed and we were unable to recover it. 00:29:37.051 [2024-10-11 12:06:39.709588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.051 [2024-10-11 12:06:39.709679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.051 [2024-10-11 12:06:39.709744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.051 [2024-10-11 12:06:39.709770] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.051 [2024-10-11 12:06:39.709791] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb02c000b90 00:29:37.051 [2024-10-11 12:06:39.709863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.051 qpair failed and we were unable to recover it. 00:29:37.051 [2024-10-11 12:06:39.719666] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.051 [2024-10-11 12:06:39.719733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.051 [2024-10-11 12:06:39.719767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.051 [2024-10-11 12:06:39.719783] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.051 [2024-10-11 12:06:39.719798] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb02c000b90 00:29:37.051 [2024-10-11 12:06:39.719830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:37.051 qpair failed and we were unable to recover it. 00:29:37.051 [2024-10-11 12:06:39.719987] nvme_ctrlr.c:4505:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:29:37.051 A controller has encountered a failure and is being reset. 00:29:37.051 Controller properly reset. 
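This block ends the failure sequence: the Keep Alive submission fails, the host library resets the controller ("Controller properly reset"), and the following lines show the controllers being re-initialized and worker threads restarted. As a hedged illustration of waiting out such a reset from a separate shell, the loop below retries a kernel-initiator connect until the listener accepts again; it is not part of the SPDK test, and the 60-second cap is arbitrary.

#!/usr/bin/env bash
# Hedged sketch: bounded wait for the target at 10.0.0.2:4420 to accept a
# connection again after a disconnect/reset cycle. Uses nvme-cli from a
# separate shell, not the SPDK host under test; 60 s is an arbitrary cap.
set -u

TRADDR=10.0.0.2
TRSVCID=4420
SUBNQN=nqn.2016-06.io.spdk:cnode1
DEADLINE=$((SECONDS + 60))

until nvme connect -t tcp -a "$TRADDR" -s "$TRSVCID" -n "$SUBNQN" 2>/dev/null; do
  if (( SECONDS >= DEADLINE )); then
    echo "target did not come back within 60s" >&2
    exit 1
  fi
  sleep 1
done

echo "reconnected to $SUBNQN"
nvme disconnect -n "$SUBNQN"   # clean up the probe connection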
00:29:37.313 Initializing NVMe Controllers 00:29:37.313 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:37.313 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:37.313 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:37.313 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:37.313 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:37.313 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:37.313 Initialization complete. Launching workers. 00:29:37.313 Starting thread on core 1 00:29:37.313 Starting thread on core 2 00:29:37.313 Starting thread on core 3 00:29:37.313 Starting thread on core 0 00:29:37.313 12:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:29:37.313 00:29:37.313 real 0m11.512s 00:29:37.313 user 0m21.678s 00:29:37.313 sys 0m3.995s 00:29:37.313 12:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:37.313 12:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:37.313 ************************************ 00:29:37.313 END TEST nvmf_target_disconnect_tc2 00:29:37.313 ************************************ 00:29:37.313 12:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:29:37.313 12:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:29:37.313 12:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:29:37.313 12:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:37.313 12:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:29:37.313 12:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:37.313 12:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:29:37.313 12:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:37.313 12:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:37.313 rmmod nvme_tcp 00:29:37.313 rmmod nvme_fabrics 00:29:37.313 rmmod nvme_keyring 00:29:37.313 12:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:37.313 12:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:29:37.313 12:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:29:37.313 12:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@515 -- # '[' -n 2117482 ']' 00:29:37.314 12:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # killprocess 2117482 00:29:37.314 12:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 2117482 ']' 00:29:37.314 12:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 2117482 00:29:37.314 12:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:29:37.314 12:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:29:37.314 12:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2117482 00:29:37.314 12:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:29:37.314 12:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:29:37.314 12:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2117482' 00:29:37.314 killing process with pid 2117482 00:29:37.314 12:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 2117482 00:29:37.314 12:06:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 2117482 00:29:37.574 12:06:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:37.574 12:06:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:37.574 12:06:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:37.574 12:06:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:29:37.574 12:06:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:37.574 12:06:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:29:37.574 12:06:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:29:37.574 12:06:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:37.574 12:06:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:37.574 12:06:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:37.574 12:06:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:37.574 12:06:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:39.487 12:06:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:39.487 00:29:39.487 real 0m22.107s 00:29:39.487 user 0m49.884s 00:29:39.487 sys 0m10.334s 00:29:39.487 12:06:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:39.487 12:06:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:39.487 ************************************ 00:29:39.487 END TEST nvmf_target_disconnect 00:29:39.487 ************************************ 00:29:39.487 12:06:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:29:39.487 00:29:39.487 real 6m34.321s 00:29:39.487 user 11m21.345s 00:29:39.487 sys 2m18.280s 00:29:39.487 12:06:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:39.487 12:06:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.487 ************************************ 00:29:39.487 END TEST nvmf_host 00:29:39.487 ************************************ 00:29:39.748 12:06:42 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:29:39.748 12:06:42 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:29:39.748 12:06:42 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:29:39.748 12:06:42 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:29:39.748 12:06:42 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:39.748 12:06:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:39.748 ************************************ 00:29:39.748 START TEST nvmf_target_core_interrupt_mode 00:29:39.748 ************************************ 00:29:39.748 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:29:39.748 * Looking for test storage... 00:29:39.748 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:29:39.748 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:39.748 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lcov --version 00:29:39.748 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:40.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:40.010 --rc genhtml_branch_coverage=1 00:29:40.010 --rc genhtml_function_coverage=1 00:29:40.010 --rc genhtml_legend=1 00:29:40.010 --rc geninfo_all_blocks=1 00:29:40.010 --rc geninfo_unexecuted_blocks=1 00:29:40.010 00:29:40.010 ' 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:40.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:40.010 --rc genhtml_branch_coverage=1 00:29:40.010 --rc genhtml_function_coverage=1 00:29:40.010 --rc genhtml_legend=1 00:29:40.010 --rc geninfo_all_blocks=1 00:29:40.010 --rc geninfo_unexecuted_blocks=1 00:29:40.010 00:29:40.010 ' 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:40.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:40.010 --rc genhtml_branch_coverage=1 00:29:40.010 --rc genhtml_function_coverage=1 00:29:40.010 --rc genhtml_legend=1 00:29:40.010 --rc geninfo_all_blocks=1 00:29:40.010 --rc geninfo_unexecuted_blocks=1 00:29:40.010 00:29:40.010 ' 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:40.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:40.010 --rc genhtml_branch_coverage=1 00:29:40.010 --rc genhtml_function_coverage=1 00:29:40.010 --rc genhtml_legend=1 00:29:40.010 --rc geninfo_all_blocks=1 00:29:40.010 --rc geninfo_unexecuted_blocks=1 00:29:40.010 00:29:40.010 ' 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:40.010 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:40.011 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:40.011 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:29:40.011 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:29:40.011 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:29:40.011 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:29:40.011 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:29:40.011 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:40.011 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:40.011 ************************************ 00:29:40.011 START TEST nvmf_abort 00:29:40.011 ************************************ 00:29:40.011 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:29:40.011 * Looking for test storage... 00:29:40.011 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:40.011 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:40.011 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:29:40.011 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:40.272 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:40.272 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:40.272 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:40.272 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:40.272 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:29:40.272 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:29:40.272 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:29:40.272 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:29:40.272 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:29:40.272 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:29:40.272 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:29:40.272 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:40.272 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:29:40.272 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:29:40.272 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:40.272 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:40.272 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:29:40.272 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:29:40.272 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:40.272 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:29:40.272 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:29:40.272 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:29:40.272 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:29:40.272 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:40.272 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:29:40.272 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:29:40.272 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:40.272 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:40.272 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:29:40.272 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:40.272 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:40.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:40.272 --rc genhtml_branch_coverage=1 00:29:40.272 --rc genhtml_function_coverage=1 00:29:40.272 --rc genhtml_legend=1 00:29:40.272 --rc geninfo_all_blocks=1 00:29:40.272 --rc geninfo_unexecuted_blocks=1 00:29:40.272 00:29:40.272 ' 00:29:40.272 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:40.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:40.272 --rc genhtml_branch_coverage=1 00:29:40.272 --rc genhtml_function_coverage=1 00:29:40.272 --rc genhtml_legend=1 00:29:40.272 --rc geninfo_all_blocks=1 00:29:40.272 --rc geninfo_unexecuted_blocks=1 00:29:40.272 00:29:40.272 ' 00:29:40.272 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:40.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:40.272 --rc genhtml_branch_coverage=1 00:29:40.272 --rc genhtml_function_coverage=1 00:29:40.272 --rc genhtml_legend=1 00:29:40.272 --rc geninfo_all_blocks=1 00:29:40.272 --rc geninfo_unexecuted_blocks=1 00:29:40.272 00:29:40.272 ' 00:29:40.272 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:40.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:40.272 --rc genhtml_branch_coverage=1 00:29:40.272 --rc genhtml_function_coverage=1 00:29:40.272 --rc genhtml_legend=1 00:29:40.272 --rc geninfo_all_blocks=1 00:29:40.272 --rc geninfo_unexecuted_blocks=1 00:29:40.272 00:29:40.272 ' 00:29:40.272 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:40.272 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:29:40.272 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:40.272 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:40.272 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:40.272 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:40.272 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:40.272 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:40.272 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:40.272 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:40.272 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:40.272 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:40.272 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:40.272 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:40.272 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:40.272 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:40.272 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:40.272 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:40.272 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:40.272 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:29:40.273 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:40.273 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:40.273 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:40.273 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.273 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.273 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.273 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:29:40.273 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.273 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:29:40.273 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:40.273 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:40.273 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:40.273 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:40.273 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:40.273 12:06:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:40.273 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:40.273 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:40.273 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:40.273 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:40.273 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:40.273 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:29:40.273 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:29:40.273 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:40.273 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:40.273 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:40.273 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:40.273 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:40.273 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:40.273 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:40.273 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:40.273 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:40.273 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:40.273 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:29:40.273 12:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:48.418 12:06:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:48.418 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:48.418 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:48.418 Found net devices under 0000:31:00.0: cvl_0_0 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:48.418 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:48.418 Found net devices under 0000:31:00.1: cvl_0_1 00:29:48.419 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:48.419 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:48.419 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:29:48.419 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:48.419 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:48.419 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:48.419 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:48.419 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:48.419 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:48.419 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:48.419 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:48.419 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:48.419 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:48.419 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:48.419 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:48.419 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:48.419 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:48.419 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:48.419 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:48.419 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:48.419 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:48.419 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:48.419 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:48.419 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:48.419 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:48.419 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:48.419 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:48.419 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:48.419 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:48.419 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:48.419 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.525 ms 00:29:48.419 00:29:48.419 --- 10.0.0.2 ping statistics --- 00:29:48.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:48.419 rtt min/avg/max/mdev = 0.525/0.525/0.525/0.000 ms 00:29:48.419 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:48.419 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:48.419 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:29:48.419 00:29:48.419 --- 10.0.0.1 ping statistics --- 00:29:48.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:48.419 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:29:48.419 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:48.419 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:29:48.419 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:48.419 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:48.419 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:48.419 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:48.419 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:48.419 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:48.419 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:48.419 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:29:48.419 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:48.419 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:48.419 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:48.419 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # 
nvmfpid=2123225 00:29:48.419 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 2123225 00:29:48.419 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:29:48.419 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 2123225 ']' 00:29:48.419 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:48.419 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:48.419 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:48.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:48.419 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:48.419 12:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:48.419 [2024-10-11 12:06:50.579693] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:48.419 [2024-10-11 12:06:50.580801] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:29:48.419 [2024-10-11 12:06:50.580848] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:48.419 [2024-10-11 12:06:50.677015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:48.419 [2024-10-11 12:06:50.728383] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:48.419 [2024-10-11 12:06:50.728431] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:48.419 [2024-10-11 12:06:50.728440] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:48.419 [2024-10-11 12:06:50.728447] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:48.419 [2024-10-11 12:06:50.728453] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:48.419 [2024-10-11 12:06:50.730290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:48.419 [2024-10-11 12:06:50.730332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:48.419 [2024-10-11 12:06:50.730334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:48.419 [2024-10-11 12:06:50.806428] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:48.419 [2024-10-11 12:06:50.807148] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:48.419 [2024-10-11 12:06:50.807346] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:29:48.419 [2024-10-11 12:06:50.807566] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:48.993 12:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:48.993 12:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:29:48.993 12:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:48.993 12:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:48.993 12:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:48.993 12:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:48.993 12:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:29:48.993 12:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.993 12:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:48.993 [2024-10-11 12:06:51.447439] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:48.994 12:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.994 12:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:29:48.994 12:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.994 12:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:48.994 Malloc0 00:29:48.994 12:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.994 12:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:48.994 12:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.994 12:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:48.994 Delay0 00:29:48.994 12:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.994 12:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:48.994 12:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.994 12:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:48.994 12:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.994 12:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:29:48.994 12:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 
00:29:48.994 12:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:48.994 12:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.994 12:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:48.994 12:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.994 12:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:48.994 [2024-10-11 12:06:51.555367] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:48.994 12:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.994 12:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:48.994 12:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.994 12:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:48.994 12:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.994 12:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:29:49.256 [2024-10-11 12:06:51.727255] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:29:51.171 Initializing NVMe Controllers 00:29:51.171 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:29:51.171 controller IO queue size 128 less than required 00:29:51.171 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:29:51.171 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:29:51.171 Initialization complete. Launching workers. 
00:29:51.171 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 27657 00:29:51.171 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 27718, failed to submit 66 00:29:51.171 success 27657, unsuccessful 61, failed 0 00:29:51.171 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:51.171 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.171 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:51.171 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.171 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:29:51.171 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:29:51.171 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:51.171 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:29:51.171 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:51.171 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:29:51.171 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:51.171 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:51.171 rmmod nvme_tcp 00:29:51.171 rmmod nvme_fabrics 00:29:51.171 rmmod nvme_keyring 00:29:51.432 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:51.432 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:29:51.432 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:29:51.432 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 2123225 ']' 00:29:51.432 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 2123225 00:29:51.432 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 2123225 ']' 00:29:51.432 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 2123225 00:29:51.432 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:29:51.432 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:51.432 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2123225 00:29:51.432 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:51.432 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:51.432 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2123225' 00:29:51.432 killing process with pid 2123225 
00:29:51.432 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@969 -- # kill 2123225 00:29:51.432 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@974 -- # wait 2123225 00:29:51.432 12:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:51.432 12:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:51.432 12:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:51.432 12:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:29:51.432 12:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:29:51.432 12:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:51.432 12:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:29:51.693 12:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:51.693 12:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:51.693 12:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:51.693 12:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:51.693 12:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:53.608 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:53.608 00:29:53.608 real 0m13.660s 00:29:53.608 user 0m11.090s 00:29:53.608 sys 0m7.147s 00:29:53.608 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:53.608 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:53.608 ************************************ 00:29:53.608 END TEST nvmf_abort 00:29:53.608 ************************************ 00:29:53.608 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:29:53.608 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:29:53.608 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:53.608 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:53.608 ************************************ 00:29:53.608 START TEST nvmf_ns_hotplug_stress 00:29:53.608 ************************************ 00:29:53.608 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:29:53.869 * Looking for test storage... 
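Before the next test's output continues below, the nvmf_abort teardown traced just above (nvmftestfini) reduces to roughly the following. The PID, interface and namespace names are the ones from this run; the netns deletion step is an assumption about what _remove_spdk_ns does, since its commands are hidden by xtrace_disable_per_cmd.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
sync
modprobe -v -r nvme-tcp        # produces the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines
modprobe -v -r nvme-fabrics
kill 2123225                   # the nvmf_tgt (reactor_1) started for this test
# Keep only firewall rules the test did not add; test rules are tagged SPDK_NVMF.
iptables-save | grep -v SPDK_NVMF | iptables-restore
# Assumed body of _remove_spdk_ns: drop the target-side network namespace.
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1       # flush the initiator-side address, as traced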
00:29:53.869 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:53.869 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:53.869 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:29:53.869 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:53.869 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:53.869 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:53.869 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:53.869 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:53.869 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:29:53.869 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:29:53.869 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:29:53.869 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:29:53.869 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:29:53.869 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:29:53.869 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:29:53.869 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:53.869 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:29:53.869 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:29:53.869 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:53.869 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:53.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:53.870 --rc genhtml_branch_coverage=1 00:29:53.870 --rc genhtml_function_coverage=1 00:29:53.870 --rc genhtml_legend=1 00:29:53.870 --rc geninfo_all_blocks=1 00:29:53.870 --rc geninfo_unexecuted_blocks=1 00:29:53.870 00:29:53.870 ' 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:53.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:53.870 --rc genhtml_branch_coverage=1 00:29:53.870 --rc genhtml_function_coverage=1 00:29:53.870 --rc genhtml_legend=1 00:29:53.870 --rc geninfo_all_blocks=1 00:29:53.870 --rc geninfo_unexecuted_blocks=1 00:29:53.870 00:29:53.870 ' 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:53.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:53.870 --rc genhtml_branch_coverage=1 00:29:53.870 --rc genhtml_function_coverage=1 00:29:53.870 --rc genhtml_legend=1 00:29:53.870 --rc geninfo_all_blocks=1 00:29:53.870 --rc geninfo_unexecuted_blocks=1 00:29:53.870 00:29:53.870 ' 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:53.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:53.870 --rc genhtml_branch_coverage=1 00:29:53.870 --rc genhtml_function_coverage=1 
00:29:53.870 --rc genhtml_legend=1 00:29:53.870 --rc geninfo_all_blocks=1 00:29:53.870 --rc geninfo_unexecuted_blocks=1 00:29:53.870 00:29:53.870 ' 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:53.870 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:53.871 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:53.871 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:29:53.871 12:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:02.014 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:02.014 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:30:02.014 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:02.014 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:02.014 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:02.014 12:07:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:02.014 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:02.014 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:30:02.014 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:02.014 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:30:02.014 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:30:02.014 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:30:02.014 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:30:02.014 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:30:02.014 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:30:02.014 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:02.014 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:02.014 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:02.015 12:07:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:02.015 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:02.015 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:02.015 
12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:02.015 Found net devices under 0000:31:00.0: cvl_0_0 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:02.015 Found net devices under 0000:31:00.1: cvl_0_1 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:02.015 12:07:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:02.015 12:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:02.015 12:07:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:02.015 12:07:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:02.015 12:07:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:02.015 12:07:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:02.015 12:07:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:02.015 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:02.015 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:30:02.015 00:30:02.015 --- 10.0.0.2 ping statistics --- 00:30:02.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:02.015 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:30:02.015 12:07:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:02.015 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:02.015 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:30:02.015 00:30:02.015 --- 10.0.0.1 ping statistics --- 00:30:02.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:02.015 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:30:02.015 12:07:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:02.015 12:07:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:30:02.015 12:07:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:02.015 12:07:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:02.015 12:07:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:02.015 12:07:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:02.015 12:07:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:02.015 12:07:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:02.015 12:07:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:02.016 12:07:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:30:02.016 12:07:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:02.016 12:07:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:02.016 12:07:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:02.016 12:07:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=2128001 00:30:02.016 12:07:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 2128001 00:30:02.016 12:07:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:30:02.016 12:07:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 2128001 ']' 00:30:02.016 12:07:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:02.016 12:07:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:02.016 12:07:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:02.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:02.016 12:07:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:02.016 12:07:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:02.016 [2024-10-11 12:07:04.253068] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:02.016 [2024-10-11 12:07:04.254181] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:30:02.016 [2024-10-11 12:07:04.254232] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:02.016 [2024-10-11 12:07:04.345312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:02.016 [2024-10-11 12:07:04.397829] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:02.016 [2024-10-11 12:07:04.397879] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:02.016 [2024-10-11 12:07:04.397889] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:02.016 [2024-10-11 12:07:04.397896] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:02.016 [2024-10-11 12:07:04.397903] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:02.016 [2024-10-11 12:07:04.399734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:02.016 [2024-10-11 12:07:04.399892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:02.016 [2024-10-11 12:07:04.399893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:02.016 [2024-10-11 12:07:04.475491] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:02.016 [2024-10-11 12:07:04.476467] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:02.016 [2024-10-11 12:07:04.476869] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:02.016 [2024-10-11 12:07:04.477043] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
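The interface setup and target launch that produced the ping output and startup notices above come from nvmf_tcp_init and nvmfappstart. A condensed sketch follows, using the cvl_0_0/cvl_0_1 E810 port names and 10.0.0.x addresses from this trace; the waitforlisten step at the end is approximated with a simple RPC poll rather than the harness helper.

# Give the target NIC its own network namespace so target (10.0.0.2) and
# initiator (10.0.0.1) use separate stacks, as in the trace above.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port on the initiator side and tag the rule for later cleanup.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator
modprobe nvme-tcp
# nvmfappstart -m 0xE: interrupt-mode target on cores 1-3, inside the namespace.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
nvmfpid=$!
# Rough stand-in for waitforlisten: poll until the RPC socket answers.
until $SPDK/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 1; done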
00:30:02.587 12:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:02.587 12:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:30:02.587 12:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:02.587 12:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:02.587 12:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:02.587 12:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:02.587 12:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:30:02.587 12:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:02.587 [2024-10-11 12:07:05.276781] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:02.848 12:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:02.848 12:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:03.109 [2024-10-11 12:07:05.661437] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:03.109 12:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:03.380 12:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:30:03.380 Malloc0 00:30:03.644 12:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:03.644 Delay0 00:30:03.644 12:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:03.904 12:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:30:03.904 NULL1 00:30:04.166 12:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
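The subsystem used by the stress test is configured in the entry above: a subsystem capped at ten namespaces (-m 10) carrying a delayed malloc namespace plus a null bdev that will be resized while I/O is in flight. Condensed into plain rpc.py calls with the same names and sizes as the trace:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC=$SPDK/scripts/rpc.py
null_size=1000
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_malloc_create 32 512 -b Malloc0
$RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0    # first ns; the loop below removes nsid 1
$RPC bdev_null_create NULL1 $null_size 512                      # grown step by step later
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1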
00:30:04.166 12:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2128675 00:30:04.166 12:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2128675 00:30:04.166 12:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:30:04.166 12:07:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:05.551 Read completed with error (sct=0, sc=11) 00:30:05.551 12:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:05.551 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:05.551 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:05.551 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:05.551 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:05.551 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:05.552 12:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:30:05.552 12:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:30:05.813 true 00:30:05.813 12:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2128675 00:30:05.813 12:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:06.756 12:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:06.756 12:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:30:06.756 12:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:30:07.017 true 00:30:07.017 12:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2128675 00:30:07.017 12:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:07.277 12:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:30:07.277 12:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:30:07.277 12:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:30:07.573 true 00:30:07.573 12:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2128675 00:30:07.574 12:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:08.558 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:08.558 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:08.558 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:08.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:08.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:08.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:08.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:08.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:08.819 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:30:08.819 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:30:09.079 true 00:30:09.079 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2128675 00:30:09.079 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:10.022 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:10.022 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:10.022 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:10.022 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:30:10.022 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:30:10.283 true 00:30:10.283 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2128675 00:30:10.283 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:30:10.283 12:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:10.543 12:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:30:10.543 12:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:30:10.804 true 00:30:10.804 12:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2128675 00:30:10.804 12:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:12.190 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:12.190 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:12.190 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:12.190 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:12.190 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:12.190 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:12.190 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:12.190 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:12.190 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:30:12.190 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:30:12.190 true 00:30:12.451 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2128675 00:30:12.451 12:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:13.391 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:13.391 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:30:13.391 12:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:30:13.391 true 00:30:13.652 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2128675 00:30:13.652 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:13.652 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:13.913 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:30:13.913 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:30:14.174 true 00:30:14.174 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2128675 00:30:14.174 12:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:15.116 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:15.377 12:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:15.377 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:15.377 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:15.377 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:15.377 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:15.377 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:15.377 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:15.377 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:15.377 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:30:15.377 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:30:15.658 true 00:30:15.658 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2128675 00:30:15.658 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:16.602 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:16.602 12:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:16.602 12:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:30:16.602 12:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:30:16.863 true 00:30:16.863 12:07:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2128675 00:30:16.863 12:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:17.124 12:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:17.124 12:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:30:17.124 12:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:30:17.384 true 00:30:17.384 12:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2128675 00:30:17.385 12:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:17.645 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:17.645 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:30:17.645 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:30:17.906 true 00:30:17.906 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2128675 00:30:17.906 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:18.168 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:18.428 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:30:18.428 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:30:18.428 true 00:30:18.428 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2128675 00:30:18.428 12:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:19.811 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:19.811 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
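Every iteration in this stretch of the trace follows the same ns_hotplug_stress.sh pattern (the @44-@50 markers above): while the background I/O generator (PID 2128675) is still alive, namespace 1 is hot-removed from nqn.2016-06.io.spdk:cnode1, the Delay0 bdev is re-attached, and the NULL1 bdev is grown by one unit. A minimal bash sketch of that loop, reconstructed only from the traced commands (the rpc_py, nqn and perf_pid names and the surrounding setup are assumptions, not the script's actual text):

# Sketch reconstructed from the ns_hotplug_stress.sh@44-@50 trace lines; not the script's verbatim source.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # SPDK JSON-RPC client used throughout this log
nqn=nqn.2016-06.io.spdk:cnode1
perf_pid=$1          # assumed: PID of the background I/O generator (2128675 in this run)
null_size=1000

while kill -0 "$perf_pid"; do                       # @44: keep going until the I/O generator exits
    $rpc_py nvmf_subsystem_remove_ns "$nqn" 1       # @45: hot-remove namespace 1
    $rpc_py nvmf_subsystem_add_ns "$nqn" Delay0     # @46: re-attach the Delay0 bdev as a namespace
    null_size=$((null_size + 1))                    # @49: 1003, 1004, ... in the trace above
    $rpc_py bdev_null_resize NULL1 "$null_size"     # @50: grow NULL1 while I/O keeps running
done

The bursts of "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" are the initiator-side workload reporting failed reads, consistent with reads landing in the window where namespace 1 is detached, while the hotplug loop keeps issuing RPCs.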
00:30:19.811 12:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:19.811 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:19.811 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:19.811 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:19.811 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:19.811 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:19.811 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:19.811 12:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:30:19.811 12:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:30:20.071 true 00:30:20.071 12:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2128675 00:30:20.071 12:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:21.011 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:21.011 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:21.011 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:21.011 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:30:21.011 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:30:21.271 true 00:30:21.271 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2128675 00:30:21.271 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:21.271 12:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:21.531 12:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:30:21.531 12:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:30:21.791 true 00:30:21.791 12:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2128675 00:30:21.791 12:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:21.791 12:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:22.051 12:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:30:22.051 12:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:30:22.312 true 00:30:22.312 12:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2128675 00:30:22.312 12:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:22.572 12:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:22.572 12:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:30:22.572 12:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:30:22.833 true 00:30:22.833 12:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2128675 00:30:22.833 12:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:24.214 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:24.214 12:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:24.214 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:24.214 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:24.214 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:24.214 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:24.214 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:24.214 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:24.214 12:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:30:24.214 12:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:30:24.214 true 00:30:24.214 12:07:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2128675 00:30:24.214 12:07:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:25.154 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:25.415 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:30:25.415 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:30:25.415 true 00:30:25.415 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2128675 00:30:25.415 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:25.676 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:25.935 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:30:25.936 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:30:25.936 true 00:30:25.936 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2128675 00:30:25.936 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:27.320 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:27.320 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:27.320 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:27.320 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:27.320 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:27.320 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:27.320 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:30:27.320 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:30:27.579 true 00:30:27.579 12:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2128675 00:30:27.579 12:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:27.840 12:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:27.840 12:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:30:27.840 12:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:30:28.101 true 00:30:28.101 12:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2128675 00:30:28.101 12:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:28.362 12:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:28.622 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:30:28.622 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:30:28.622 true 00:30:28.622 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2128675 00:30:28.623 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:28.883 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:29.143 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:30:29.143 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:30:29.143 true 00:30:29.143 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2128675 00:30:29.143 12:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:30.523 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:30.523 12:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:30.523 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:30:30.523 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:30.523 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:30.523 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:30.523 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:30.523 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:30.523 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:30:30.523 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:30:30.783 true 00:30:30.783 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2128675 00:30:30.783 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:31.724 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:31.724 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:31.724 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:31.724 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:30:31.724 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:30:31.986 true 00:30:31.986 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2128675 00:30:31.986 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:32.245 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:32.245 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:30:32.245 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:30:32.505 true 00:30:32.505 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2128675 00:30:32.505 12:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:33.885 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:33.885 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:33.885 12:07:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:33.885 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:33.885 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:33.885 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:33.885 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:33.885 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:33.885 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:33.885 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:30:33.885 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:30:34.144 true 00:30:34.144 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2128675 00:30:34.144 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:35.082 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:35.082 Initializing NVMe Controllers 00:30:35.082 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:35.082 Controller IO queue size 128, less than required. 00:30:35.082 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:35.082 Controller IO queue size 128, less than required. 00:30:35.082 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:35.082 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:35.082 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:35.082 Initialization complete. Launching workers. 
00:30:35.082 ========================================================
00:30:35.082                                                                                       Latency(us)
00:30:35.082 Device Information                                                      :       IOPS      MiB/s    Average        min        max
00:30:35.082 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    2375.73       1.16   34778.21    1505.37 1012321.67
00:30:35.082 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   18840.21       9.20    6793.86    1209.83  301070.36
00:30:35.082 ========================================================
00:30:35.082 Total                                                                   :   21215.94      10.36    9927.51    1209.83 1012321.67
00:30:35.082
00:30:35.082 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031
00:30:35.082 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031
00:30:35.082 true
00:30:35.343 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2128675
00:30:35.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2128675) - No such process
00:30:35.343 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2128675
00:30:35.343 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:35.343 12:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:30:35.604 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:30:35.604 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:30:35.604 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:30:35.604 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:30:35.604 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:30:35.604 null0
00:30:35.604 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:30:35.604 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:30:35.604 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:30:35.865 null1
00:30:35.865 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:30:35.865 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:30:35.865 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:30:36.149 null2 00:30:36.149 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:36.149 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:36.149 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:30:36.149 null3 00:30:36.149 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:36.149 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:36.149 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:30:36.420 null4 00:30:36.420 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:36.420 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:36.420 12:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:30:36.680 null5 00:30:36.680 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:36.680 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:36.680 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:30:36.680 null6 00:30:36.680 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:36.680 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:36.680 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:30:36.941 null7 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # 
pids+=($!) 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
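From 12:07:38 onward the trace is the second phase of the test: eight null bdevs (null0 through null7, each created via bdev_null_create with a 100 MB size and a 4096-byte block size) and eight background workers that each repeatedly attach and detach their own namespace ID, which is why the launcher's @58-@66 markers and the worker's @14-@18 markers interleave above. A bash sketch of what those markers correspond to, reconstructed from the trace (the function and variable names follow what the expanded commands show and may not match the script's actual text):

# Sketch reconstructed from the ns_hotplug_stress.sh@14-@18 and @58-@66 trace lines; not the script's verbatim source.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

add_remove() {                                      # @14-@18: one worker per namespace ID
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; i++)); do                  # @16: ten attach/detach rounds per worker
        $rpc_py nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"   # @17
        $rpc_py nvmf_subsystem_remove_ns "$nqn" "$nsid"           # @18
    done
}

nthreads=8                                          # @58
pids=()
for ((i = 0; i < nthreads; i++)); do                # @59-@60: create null0 .. null7 (100 MB, 4096-byte blocks)
    $rpc_py bdev_null_create "null$i" 100 4096
done
for ((i = 0; i < nthreads; i++)); do                # @62-@64: start the workers in the background
    add_remove $((i + 1)) "null$i" &
    pids+=($!)
done
wait "${pids[@]}"                                   # @66: the "wait 2134786 2134789 ..." seen a little further down

Launching the workers with & and collecting their PIDs into the pids array is what produces the interleaved ordering in the trace; the single wait then blocks until all eight workers have finished their rounds.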
00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2134786 2134789 2134791 2134793 2134795 2134798 2134800 2134803 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:36.941 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:37.202 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:37.202 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:37.202 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:37.202 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:37.202 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:37.202 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:37.202 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:37.202 12:07:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.202 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.202 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:37.202 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.202 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.202 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:37.202 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.202 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.202 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:37.202 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.202 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.203 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:37.203 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.203 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.203 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:37.463 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.463 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.463 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:37.463 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.463 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.463 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:37.463 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.463 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.463 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:37.463 12:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:37.463 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:37.463 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:37.463 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:37.463 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:37.463 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:37.463 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:37.463 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:37.463 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.463 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.463 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:37.724 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.724 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.724 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:37.724 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.724 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.724 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:37.724 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.724 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.724 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:37.724 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.724 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.724 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:37.724 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.724 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.724 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.724 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:37.725 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.725 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:37.725 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.725 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.725 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:37.725 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:37.725 12:07:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:37.725 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:37.725 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:37.985 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:37.985 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:37.985 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:37.985 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.985 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.985 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:37.985 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:37.985 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.985 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.985 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:37.985 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.985 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.986 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:37.986 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.986 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.986 12:07:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:37.986 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.986 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.986 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:37.986 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.986 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.986 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:37.986 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.986 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.986 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:37.986 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:38.245 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.245 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.245 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:38.245 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:38.245 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:38.245 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:38.245 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:38.245 
12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.245 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.245 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:38.245 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:38.245 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:38.245 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.245 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.245 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:38.245 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.245 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.245 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:38.246 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:38.505 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.505 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.505 12:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:38.505 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.506 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.506 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:38.506 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.506 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.506 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:38.506 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:38.506 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:38.506 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:38.506 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.506 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.506 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:38.506 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.506 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.506 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:38.506 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:38.506 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.506 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.506 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:38.506 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:38.506 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:38.506 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.506 12:07:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.506 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:38.766 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.766 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.766 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:38.766 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:38.766 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:38.766 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.766 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.766 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:38.766 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.766 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.766 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:38.766 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:38.766 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.766 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.766 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:38.766 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:38.766 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:39.027 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:39.027 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:39.027 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:39.027 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:39.027 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:39.027 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:39.027 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:39.027 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:39.027 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:39.027 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:39.027 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:39.027 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:39.027 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:39.027 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:39.027 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:39.027 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:39.027 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:39.027 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:39.027 12:07:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:39.027 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:39.027 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:39.027 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:39.027 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:39.288 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:39.288 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:39.288 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:39.288 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:39.288 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:39.288 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:39.288 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:39.288 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:39.288 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:39.288 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:39.288 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:39.288 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:39.288 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:39.288 
12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:39.288 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:39.288 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:39.288 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:39.288 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:39.288 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:39.288 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:39.288 12:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:39.550 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:39.550 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:39.550 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:39.550 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:39.550 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:39.550 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:39.550 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:39.550 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:39.550 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:39.550 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:39.550 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:30:39.550 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:39.550 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:39.550 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:39.550 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:39.550 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:39.550 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:39.550 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:39.550 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:39.550 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:39.550 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:39.550 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:39.550 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:39.811 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:39.811 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:39.811 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:39.811 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:39.811 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:39.811 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:39.811 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:39.811 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:39.811 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:39.811 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:39.811 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:39.811 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:39.811 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:39.811 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:39.811 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:39.811 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:39.811 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:39.811 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:39.811 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:39.812 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:39.812 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:40.072 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:40.072 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:40.072 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:40.073 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:40.073 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:40.073 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:40.073 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:40.073 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:40.073 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:40.073 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:40.073 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:40.073 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:40.073 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:40.073 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:40.073 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:40.073 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:40.073 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:40.073 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:40.073 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:40.073 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:40.073 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:40.073 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:40.073 12:07:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:40.073 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:40.334 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:40.334 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:40.334 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:40.334 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:40.334 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:40.334 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:40.334 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:40.334 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:40.334 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:40.334 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:40.334 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:40.334 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:40.334 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:40.334 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:40.334 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:40.334 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:40.334 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:30:40.334 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:40.334 12:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:40.334 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:40.334 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:40.334 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:40.334 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:40.334 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:40.334 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:40.594 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:40.594 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:40.594 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:40.594 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:40.594 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:40.594 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:40.594 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:40.594 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:40.594 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:40.594 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:40.594 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:40.594 12:07:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:40.594 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:40.594 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:40.853 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:40.853 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:40.853 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:40.853 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:40.853 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:30:40.853 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:30:40.853 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:40.853 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:30:40.854 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:40.854 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:30:40.854 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:40.854 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:40.854 rmmod nvme_tcp 00:30:40.854 rmmod nvme_fabrics 00:30:40.854 rmmod nvme_keyring 00:30:40.854 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:40.854 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:30:40.854 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:30:40.854 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 2128001 ']' 00:30:40.854 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 2128001 00:30:40.854 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 2128001 ']' 00:30:40.854 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 2128001 00:30:40.854 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:30:40.854 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:40.854 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2128001 00:30:40.854 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
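The namespace add/remove churn traced above is produced by target/ns_hotplug_stress.sh: a short loop (the @16-@18 markers) that keeps attaching null bdevs to nqn.2016-06.io.spdk:cnode1 as namespaces and detaching them again while the rest of the test exercises the target. A minimal stand-alone sketch of that pattern follows; the rpc.py subcommands and the subsystem NQN are taken from the log, but the null-bdev setup, the fixed round count of 10, and the strictly sequential ordering are assumptions for illustration (the real script interleaves several loops, which is why the trace above is shuffled).

#!/usr/bin/env bash
# Illustrative sketch of an NVMe-oF namespace hotplug stress loop.
rpc=./scripts/rpc.py                      # assumed path to SPDK's rpc.py
nqn=nqn.2016-06.io.spdk:cnode1            # subsystem NQN seen in the trace

# Assumed setup: eight null bdevs (100 MiB, 4096-byte blocks) named null0..null7.
for b in $(seq 0 7); do
    $rpc bdev_null_create "null$b" 100 4096
done

# Hot-add and hot-remove namespaces 1..8 for ten rounds, mirroring the
# add_ns/remove_ns churn in the log (nsid N is backed by null$((N-1))).
i=0
while (( i < 10 )); do
    for n in $(seq 1 8); do
        $rpc nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n-1))"
    done
    for n in $(seq 1 8); do
        $rpc nvmf_subsystem_remove_ns "$nqn" "$n"
    done
    (( ++i ))
done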
common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:40.854 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:40.854 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2128001' 00:30:40.854 killing process with pid 2128001 00:30:40.854 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 2128001 00:30:40.854 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 2128001 00:30:41.113 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:41.113 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:41.113 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:41.113 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:30:41.113 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:30:41.113 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:41.113 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore 00:30:41.113 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:41.113 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:41.113 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:41.113 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:41.113 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:43.041 12:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:43.041 00:30:43.041 real 0m49.431s 00:30:43.041 user 2m56.344s 00:30:43.041 sys 0m21.239s 00:30:43.041 12:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:43.041 12:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:43.041 ************************************ 00:30:43.041 END TEST nvmf_ns_hotplug_stress 00:30:43.041 ************************************ 00:30:43.301 12:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:30:43.301 12:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:30:43.301 12:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:43.301 12:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
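The teardown traced here follows a fixed sequence: nvmftestfini clears the exit trap, unloads the initiator-side NVMe/TCP kernel modules, kills the nvmf_tgt process the test started (pid 2128001 in this run), strips the SPDK_NVMF-tagged iptables rules, and flushes the test interface before run_test launches the next test. A condensed sketch of that cleanup is below; the function name and the pid/iface parameters are placeholders, while the individual commands mirror what the log shows.

# Illustrative sketch of the per-test NVMe-oF cleanup sequence seen in the log.
cleanup_nvmf_test() {
    local pid=$1 iface=$2

    # Unload initiator-side kernel modules; tolerate "not loaded" errors.
    sync
    modprobe -v -r nvme-tcp     || true
    modprobe -v -r nvme-fabrics || true

    # Stop the SPDK target that the test started and wait for it to exit.
    if kill -0 "$pid" 2>/dev/null; then
        kill "$pid"
        while kill -0 "$pid" 2>/dev/null; do sleep 0.2; done
    fi

    # Keep all firewall rules except the SPDK_NVMF-tagged ones added by the test.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # Drop any test addresses still configured on the interface.
    ip -4 addr flush "$iface" 2>/dev/null || true
}

# Example invocation matching this run: cleanup_nvmf_test 2128001 cvl_0_1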
common/autotest_common.sh@10 -- # set +x 00:30:43.301 ************************************ 00:30:43.301 START TEST nvmf_delete_subsystem 00:30:43.301 ************************************ 00:30:43.301 12:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:30:43.301 * Looking for test storage... 00:30:43.301 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:43.301 12:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:43.301 12:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:30:43.301 12:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:43.301 12:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:43.301 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:43.301 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:43.301 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:43.301 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:30:43.301 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:30:43.301 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:30:43.301 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:30:43.301 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:30:43.301 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:30:43.301 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:30:43.301 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:43.301 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:30:43.301 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:30:43.561 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:43.561 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:43.561 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:30:43.561 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:30:43.561 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:43.561 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:30:43.561 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:30:43.561 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:30:43.561 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:30:43.561 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:43.561 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:30:43.561 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:30:43.561 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:43.561 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:43.561 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:30:43.561 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:43.561 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:43.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.561 --rc genhtml_branch_coverage=1 00:30:43.561 --rc genhtml_function_coverage=1 00:30:43.561 --rc genhtml_legend=1 00:30:43.561 --rc geninfo_all_blocks=1 00:30:43.561 --rc geninfo_unexecuted_blocks=1 00:30:43.561 00:30:43.561 ' 00:30:43.561 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:43.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.561 --rc genhtml_branch_coverage=1 00:30:43.561 --rc genhtml_function_coverage=1 00:30:43.561 --rc genhtml_legend=1 00:30:43.561 --rc geninfo_all_blocks=1 00:30:43.561 --rc geninfo_unexecuted_blocks=1 00:30:43.561 00:30:43.561 ' 00:30:43.561 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:43.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.561 --rc genhtml_branch_coverage=1 00:30:43.561 --rc genhtml_function_coverage=1 00:30:43.561 --rc genhtml_legend=1 00:30:43.561 --rc geninfo_all_blocks=1 00:30:43.561 --rc geninfo_unexecuted_blocks=1 00:30:43.561 00:30:43.561 ' 00:30:43.561 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:43.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.561 --rc genhtml_branch_coverage=1 00:30:43.561 --rc genhtml_function_coverage=1 00:30:43.561 --rc 
genhtml_legend=1 00:30:43.561 --rc geninfo_all_blocks=1 00:30:43.561 --rc geninfo_unexecuted_blocks=1 00:30:43.561 00:30:43.561 ' 00:30:43.561 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:43.561 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:30:43.561 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:43.561 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:43.561 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:43.561 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:43.561 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:43.561 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:43.561 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:43.561 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:43.561 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:43.561 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:43.561 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:43.561 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:43.561 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:43.561 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:43.561 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:43.561 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:43.561 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:43.561 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:30:43.561 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:43.561 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:43.561 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:43.562 12:07:46 
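Before the next test body runs, scripts/common.sh checks the installed lcov version (the lt / cmp_versions trace above): the version strings are split on dots, dashes, and colons and compared field by field, so 1.15 sorts below 2. A stand-alone sketch of that comparison logic follows; it reproduces the idea visible in the trace rather than the exact SPDK helper.

# Illustrative field-wise "is version A less than version B" check.
version_lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"

    local v max=${#ver1[@]}
    (( ${#ver2[@]} > max )) && max=${#ver2[@]}

    for (( v = 0; v < max; v++ )); do
        # Missing fields count as 0, so "1.15" compares like "1.15.0".
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov 1.15 predates 2"   # prints the message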
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.562 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.562 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.562 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:30:43.562 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.562 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:30:43.562 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:43.562 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:43.562 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:43.562 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:43.562 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:43.562 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:43.562 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:43.562 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:43.562 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:43.562 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:43.562 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:30:43.562 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:43.562 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:43.562 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:43.562 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:43.562 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:43.562 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:43.562 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:43.562 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:43.562 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:43.562 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:43.562 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:30:43.562 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:51.689 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:51.689 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:30:51.689 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:51.689 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:51.689 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:51.689 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:51.689 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:51.689 12:07:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:30:51.689 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:51.689 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:30:51.689 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:30:51.689 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:30:51.689 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:30:51.689 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:30:51.689 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:51.690 12:07:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:51.690 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:51.690 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:51.690 12:07:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:51.690 Found net devices under 0000:31:00.0: cvl_0_0 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:51.690 Found net devices under 0000:31:00.1: cvl_0_1 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:51.690 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:51.690 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.556 ms 00:30:51.690 00:30:51.690 --- 10.0.0.2 ping statistics --- 00:30:51.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:51.690 rtt min/avg/max/mdev = 0.556/0.556/0.556/0.000 ms 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:51.690 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:51.690 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:30:51.690 00:30:51.690 --- 10.0.0.1 ping statistics --- 00:30:51.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:51.690 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:51.690 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:51.691 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:51.691 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:30:51.691 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:51.691 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:51.691 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:51.691 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=2140006 00:30:51.691 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 2140006 00:30:51.691 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:30:51.691 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 2140006 ']' 00:30:51.691 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:51.691 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:51.691 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:51.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
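Stripped of the xtrace noise, the network prep and target start captured above amount to the following. This is a condensed sketch only, reusing the device names, addresses and flags that appear in the trace (the two ice ports came up as cvl_0_0 and cvl_0_1); absolute paths are shortened for readability.

  # Condensed sketch of nvmftestinit/nvmfappstart as traced above (nvmf/common.sh):
  # one port becomes the target inside a namespace, the other stays on the host as initiator.
  NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                            # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator IP on the host side
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP inside the namespace
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                         # host -> target reachability
  ip netns exec "$NS" ping -c 1 10.0.0.1                     # target -> host reachability
  # Start the target inside the namespace: interrupt mode, core mask 0x3 (cores 0-1).
  ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
  nvmfpid=$!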
00:30:51.691 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:51.691 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:51.691 [2024-10-11 12:07:53.704394] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:51.691 [2024-10-11 12:07:53.705490] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:30:51.691 [2024-10-11 12:07:53.705538] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:51.691 [2024-10-11 12:07:53.795142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:51.691 [2024-10-11 12:07:53.846681] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:51.691 [2024-10-11 12:07:53.846733] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:51.691 [2024-10-11 12:07:53.846742] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:51.691 [2024-10-11 12:07:53.846750] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:51.691 [2024-10-11 12:07:53.846757] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:51.691 [2024-10-11 12:07:53.848338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:51.691 [2024-10-11 12:07:53.848387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:51.691 [2024-10-11 12:07:53.925321] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:51.691 [2024-10-11 12:07:53.926090] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:51.691 [2024-10-11 12:07:53.926328] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
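The waitforlisten call traced here blocks until the freshly started target is usable. Below is a hypothetical stand-in, assuming it is enough to watch the pid and the RPC Unix socket path shown in the trace (/var/tmp/spdk.sock, 100 retries); the real helper in common/autotest_common.sh is more thorough, so treat this purely as an illustration.

  # Hypothetical stand-in for waitforlisten (name and the socket test are illustrative,
  # not the real implementation): poll until the target's RPC socket shows up, bailing
  # out if the process dies or the retry budget runs out.
  waitforlisten_sketch() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
      for ((i = 0; i < max_retries; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1   # target exited before listening
          [[ -S $rpc_addr ]] && return 0           # socket exists; RPCs can be issued
          sleep 0.1
      done
      return 1
  }
  waitforlisten_sketch "$nvmfpid"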
00:30:51.952 12:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:51.952 12:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:30:51.952 12:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:51.952 12:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:51.952 12:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:51.952 12:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:51.952 12:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:51.952 12:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.952 12:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:51.952 [2024-10-11 12:07:54.561422] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:51.952 12:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.952 12:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:51.952 12:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.952 12:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:51.952 12:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.952 12:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:51.952 12:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.952 12:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:51.952 [2024-10-11 12:07:54.594024] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:51.952 12:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.952 12:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:30:51.952 12:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.952 12:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:51.952 NULL1 00:30:51.952 12:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.952 12:07:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:51.952 12:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.952 12:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:51.952 Delay0 00:30:51.952 12:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.952 12:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:51.952 12:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.952 12:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:51.952 12:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.952 12:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2140355 00:30:51.952 12:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:30:51.952 12:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:30:52.212 [2024-10-11 12:07:54.704415] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
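Condensed, the traced RPCs above build the target configuration and kick off the first workload, which the subsystem deletion below will interrupt. The commands are taken verbatim from the trace (rpc_cmd is the test suite's wrapper for issuing JSON-RPCs to the running target; the perf binary path is shortened).

  # Subsystem setup and first workload, as traced (delete_subsystem.sh lines 15-30):
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd bdev_null_create NULL1 1000 512
  rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0    # namespace backed by the delay bdev
  # Drive I/O from the initiator side while the subsystem is still alive:
  # 5 s of 512-byte random I/O, 70 % reads, queue depth 128, on cores 2-3 (-c 0xC).
  ./build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!
  sleep 2   # let the workload ramp up before the subsystem is deleted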
00:30:54.123 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:54.123 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.123 12:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 starting I/O failed: -6 00:30:54.385 Write completed with error (sct=0, sc=8) 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 Write completed with error (sct=0, sc=8) 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 starting I/O failed: -6 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 Write completed with error (sct=0, sc=8) 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 starting I/O failed: -6 00:30:54.385 Write completed with error (sct=0, sc=8) 00:30:54.385 Write completed with error (sct=0, sc=8) 00:30:54.385 Write completed with error (sct=0, sc=8) 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 starting I/O failed: -6 00:30:54.385 Write completed with error (sct=0, sc=8) 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 starting I/O failed: -6 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 Write completed with error (sct=0, sc=8) 00:30:54.385 Write completed with error (sct=0, sc=8) 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 starting I/O failed: -6 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 Write completed with error (sct=0, sc=8) 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 starting I/O failed: -6 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 Write completed with error (sct=0, sc=8) 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 starting I/O failed: -6 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 Write completed with error (sct=0, sc=8) 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 Write completed with error (sct=0, sc=8) 00:30:54.385 starting I/O failed: -6 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 Write completed with error (sct=0, sc=8) 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 starting I/O failed: -6 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 Write completed with error (sct=0, sc=8) 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 starting I/O failed: -6 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 Write completed with error (sct=0, sc=8) 00:30:54.385 [2024-10-11 12:07:56.951374] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882570 is same with the state(6) to be set 00:30:54.385 Write completed with error (sct=0, sc=8) 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 Read 
completed with error (sct=0, sc=8) 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 Write completed with error (sct=0, sc=8) 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 Write completed with error (sct=0, sc=8) 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 Write completed with error (sct=0, sc=8) 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 Write completed with error (sct=0, sc=8) 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 Write completed with error (sct=0, sc=8) 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 Write completed with error (sct=0, sc=8) 00:30:54.385 Write completed with error (sct=0, sc=8) 00:30:54.385 Write completed with error (sct=0, sc=8) 00:30:54.385 Write completed with error (sct=0, sc=8) 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 Write completed with error (sct=0, sc=8) 00:30:54.385 Write completed with error (sct=0, sc=8) 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 Write completed with error (sct=0, sc=8) 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 Write completed with error (sct=0, sc=8) 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.385 Read completed with error (sct=0, sc=8) 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 Write completed with error (sct=0, sc=8) 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 Write completed with error (sct=0, sc=8) 00:30:54.386 Write completed with error (sct=0, sc=8) 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 starting I/O failed: -6 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 starting I/O failed: -6 00:30:54.386 Write completed with error (sct=0, sc=8) 00:30:54.386 Write completed with error (sct=0, sc=8) 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 Read completed with 
error (sct=0, sc=8) 00:30:54.386 starting I/O failed: -6 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 starting I/O failed: -6 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 Write completed with error (sct=0, sc=8) 00:30:54.386 Write completed with error (sct=0, sc=8) 00:30:54.386 starting I/O failed: -6 00:30:54.386 Write completed with error (sct=0, sc=8) 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 starting I/O failed: -6 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 Write completed with error (sct=0, sc=8) 00:30:54.386 starting I/O failed: -6 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 Write completed with error (sct=0, sc=8) 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 starting I/O failed: -6 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 Write completed with error (sct=0, sc=8) 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 starting I/O failed: -6 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 [2024-10-11 12:07:56.954875] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f44e4000c00 is same with the state(6) to be set 00:30:54.386 Write completed with error (sct=0, sc=8) 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 Write completed with error (sct=0, sc=8) 00:30:54.386 Write completed with error (sct=0, sc=8) 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 Write completed with error (sct=0, sc=8) 00:30:54.386 Write completed with error (sct=0, sc=8) 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 Write completed with error (sct=0, sc=8) 00:30:54.386 Write completed with error (sct=0, sc=8) 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 Write completed with error (sct=0, sc=8) 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 Write completed with error (sct=0, sc=8) 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 
Read completed with error (sct=0, sc=8) 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 Write completed with error (sct=0, sc=8) 00:30:54.386 Write completed with error (sct=0, sc=8) 00:30:54.386 Write completed with error (sct=0, sc=8) 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 Write completed with error (sct=0, sc=8) 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 Write completed with error (sct=0, sc=8) 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:54.386 Read completed with error (sct=0, sc=8) 00:30:55.330 [2024-10-11 12:07:57.926273] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x880e20 is same with the state(6) to be set 00:30:55.330 Write completed with error (sct=0, sc=8) 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 Write completed with error (sct=0, sc=8) 00:30:55.330 Write completed with error (sct=0, sc=8) 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 Write completed with error (sct=0, sc=8) 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 Write completed with error (sct=0, sc=8) 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 Write completed with error (sct=0, sc=8) 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 Write completed with error (sct=0, sc=8) 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 [2024-10-11 12:07:57.954948] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8828a0 is same with the state(6) to be set 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 Write completed with error (sct=0, sc=8) 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 Write completed with error (sct=0, sc=8) 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 Write completed with error (sct=0, sc=8) 00:30:55.330 Write completed with error (sct=0, sc=8) 00:30:55.330 Write completed with 
error (sct=0, sc=8) 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 Write completed with error (sct=0, sc=8) 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 [2024-10-11 12:07:57.955576] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x882390 is same with the state(6) to be set 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 Write completed with error (sct=0, sc=8) 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 Write completed with error (sct=0, sc=8) 00:30:55.330 Write completed with error (sct=0, sc=8) 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 Write completed with error (sct=0, sc=8) 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 Write completed with error (sct=0, sc=8) 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 Write completed with error (sct=0, sc=8) 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 [2024-10-11 12:07:57.956445] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f44e400cfe0 is same with the state(6) to be set 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 Write completed with error (sct=0, sc=8) 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 Write completed with error (sct=0, sc=8) 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 Write completed with error (sct=0, sc=8) 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 Write completed with error (sct=0, sc=8) 00:30:55.330 Read completed with error (sct=0, sc=8) 00:30:55.330 Write completed with error (sct=0, sc=8) 00:30:55.330 [2024-10-11 12:07:57.956564] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f44e400d780 is same with the state(6) to be set 00:30:55.330 Initializing NVMe Controllers 00:30:55.330 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:55.330 Controller IO queue size 128, less than required. 00:30:55.330 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:55.330 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:30:55.330 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:30:55.330 Initialization complete. Launching workers. 
00:30:55.330 ======================================================== 00:30:55.330 Latency(us) 00:30:55.330 Device Information : IOPS MiB/s Average min max 00:30:55.330 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 170.09 0.08 893033.45 358.52 1009254.37 00:30:55.330 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 152.68 0.07 959850.20 314.45 2001623.51 00:30:55.330 ======================================================== 00:30:55.330 Total : 322.78 0.16 924640.14 314.45 2001623.51 00:30:55.330 00:30:55.330 [2024-10-11 12:07:57.957310] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x880e20 (9): Bad file descriptor 00:30:55.330 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:30:55.330 12:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.330 12:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:30:55.330 12:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2140355 00:30:55.330 12:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:30:55.903 12:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:30:55.903 12:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2140355 00:30:55.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2140355) - No such process 00:30:55.904 12:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2140355 00:30:55.904 12:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:30:55.904 12:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2140355 00:30:55.904 12:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:30:55.904 12:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:55.904 12:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:30:55.904 12:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:55.904 12:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 2140355 00:30:55.904 12:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:30:55.904 12:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:55.904 12:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:55.904 12:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:55.904 12:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:55.904 12:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.904 12:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:55.904 12:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.904 12:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:55.904 12:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.904 12:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:55.904 [2024-10-11 12:07:58.489884] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:55.904 12:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.904 12:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:55.904 12:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.904 12:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:55.904 12:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.904 12:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2141028 00:30:55.904 12:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:30:55.904 12:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:30:55.904 12:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2141028 00:30:55.904 12:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:55.904 [2024-10-11 12:07:58.582372] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
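Both phases of the test reuse the bounded polling idiom visible above and below (kill -0 plus 0.5 s sleeps). The heart of the check is the first phase traced earlier (delete_subsystem.sh lines 32-45): delete the subsystem while spdk_nvme_perf still has I/O in flight, wait for the perf process to die, and require that it failed. A condensed sketch, using the rpc_cmd and NOT helpers from the trace (NOT is the suite's expect-failure wrapper):

  # Delete-under-load check, condensed from the traced first phase:
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do
      (( delay++ > 30 )) && exit 1   # perf never noticed the deleted subsystem: fail the test
      sleep 0.5
  done
  NOT wait "$perf_pid"               # a clean exit here would also fail the test

The second phase, started above with a 3-second run against the re-created subsystem, uses the same loop but simply waits for the workload to finish and then reaps it with a plain wait.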
00:30:56.474 12:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:56.474 12:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2141028 00:30:56.474 12:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:57.044 12:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:57.044 12:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2141028 00:30:57.044 12:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:57.617 12:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:57.617 12:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2141028 00:30:57.617 12:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:57.913 12:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:57.913 12:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2141028 00:30:57.914 12:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:58.555 12:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:58.556 12:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2141028 00:30:58.556 12:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:59.131 12:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:59.131 12:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2141028 00:30:59.131 12:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:59.131 Initializing NVMe Controllers 00:30:59.131 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:59.131 Controller IO queue size 128, less than required. 00:30:59.131 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:59.131 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:30:59.131 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:30:59.131 Initialization complete. Launching workers. 
00:30:59.131 ======================================================== 00:30:59.131 Latency(us) 00:30:59.131 Device Information : IOPS MiB/s Average min max 00:30:59.131 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002474.88 1000311.39 1041402.17 00:30:59.131 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004638.36 1000316.79 1042384.59 00:30:59.131 ======================================================== 00:30:59.131 Total : 256.00 0.12 1003556.62 1000311.39 1042384.59 00:30:59.131 00:30:59.391 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:59.391 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2141028 00:30:59.391 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2141028) - No such process 00:30:59.391 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2141028 00:30:59.392 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:30:59.392 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:30:59.392 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:59.392 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:30:59.392 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:59.392 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:30:59.392 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:59.392 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:59.392 rmmod nvme_tcp 00:30:59.392 rmmod nvme_fabrics 00:30:59.392 rmmod nvme_keyring 00:30:59.653 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:59.653 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:30:59.653 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:30:59.653 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 2140006 ']' 00:30:59.653 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 2140006 00:30:59.653 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 2140006 ']' 00:30:59.653 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 2140006 00:30:59.653 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:30:59.653 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:59.653 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2140006 00:30:59.653 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:59.653 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:59.653 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2140006' 00:30:59.653 killing process with pid 2140006 00:30:59.653 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 2140006 00:30:59.653 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 2140006 00:30:59.653 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:59.653 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:59.653 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:59.653 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:30:59.653 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:30:59.653 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:59.653 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:30:59.653 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:59.653 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:59.653 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:59.653 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:59.653 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:02.197 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:02.197 00:31:02.197 real 0m18.559s 00:31:02.197 user 0m26.731s 00:31:02.197 sys 0m7.756s 00:31:02.197 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:02.197 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:02.197 ************************************ 00:31:02.197 END TEST nvmf_delete_subsystem 00:31:02.197 ************************************ 00:31:02.197 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:31:02.197 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:31:02.197 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:31:02.197 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:02.197 ************************************ 00:31:02.197 START TEST nvmf_host_management 00:31:02.197 ************************************ 00:31:02.197 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:31:02.197 * Looking for test storage... 00:31:02.197 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:02.197 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:02.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:02.198 --rc genhtml_branch_coverage=1 00:31:02.198 --rc genhtml_function_coverage=1 00:31:02.198 --rc genhtml_legend=1 00:31:02.198 --rc geninfo_all_blocks=1 00:31:02.198 --rc geninfo_unexecuted_blocks=1 00:31:02.198 00:31:02.198 ' 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:02.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:02.198 --rc genhtml_branch_coverage=1 00:31:02.198 --rc genhtml_function_coverage=1 00:31:02.198 --rc genhtml_legend=1 00:31:02.198 --rc geninfo_all_blocks=1 00:31:02.198 --rc geninfo_unexecuted_blocks=1 00:31:02.198 00:31:02.198 ' 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:02.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:02.198 --rc genhtml_branch_coverage=1 00:31:02.198 --rc genhtml_function_coverage=1 00:31:02.198 --rc genhtml_legend=1 00:31:02.198 --rc geninfo_all_blocks=1 00:31:02.198 --rc geninfo_unexecuted_blocks=1 00:31:02.198 00:31:02.198 ' 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:02.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:02.198 --rc genhtml_branch_coverage=1 00:31:02.198 --rc genhtml_function_coverage=1 00:31:02.198 --rc genhtml_legend=1 
00:31:02.198 --rc geninfo_all_blocks=1 00:31:02.198 --rc geninfo_unexecuted_blocks=1 00:31:02.198 00:31:02.198 ' 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:02.198 12:08:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:31:02.198 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:10.338 12:08:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:10.338 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:10.338 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 
00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:10.338 Found net devices under 0000:31:00.0: cvl_0_0 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:10.338 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:10.339 Found net devices under 0000:31:00.1: cvl_0_1 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:10.339 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:10.339 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms 00:31:10.339 00:31:10.339 --- 10.0.0.2 ping statistics --- 00:31:10.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:10.339 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:10.339 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:10.339 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:31:10.339 00:31:10.339 --- 10.0.0.1 ping statistics --- 00:31:10.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:10.339 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=2146040 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 2146040 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2146040 ']' 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:10.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:10.339 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:10.339 [2024-10-11 12:08:12.462311] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:10.339 [2024-10-11 12:08:12.463436] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:31:10.339 [2024-10-11 12:08:12.463490] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:10.339 [2024-10-11 12:08:12.552226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:10.339 [2024-10-11 12:08:12.605906] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:10.339 [2024-10-11 12:08:12.605957] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:10.339 [2024-10-11 12:08:12.605965] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:10.339 [2024-10-11 12:08:12.605973] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:10.339 [2024-10-11 12:08:12.605980] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:10.339 [2024-10-11 12:08:12.608373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:10.339 [2024-10-11 12:08:12.608532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:10.339 [2024-10-11 12:08:12.608693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:10.339 [2024-10-11 12:08:12.608693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:10.339 [2024-10-11 12:08:12.684688] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:10.339 [2024-10-11 12:08:12.685551] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:10.339 [2024-10-11 12:08:12.685873] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:10.339 [2024-10-11 12:08:12.686301] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:10.339 [2024-10-11 12:08:12.686360] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
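Condensed from the nvmf/common.sh trace above, the target-side plumbing for this phy run is roughly the following. It is a sketch, not the script itself: it assumes the cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addresses this particular host happens to use, and that commands are issued as root from the SPDK repository root.

    # Move the target-facing port into its own network namespace and address both ends
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Let NVMe/TCP traffic in on the initiator-side interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Start the target inside the namespace in interrupt mode, as host_management.sh does
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E
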
00:31:10.600 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:10.600 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:31:10.600 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:10.600 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:10.600 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:10.861 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:10.861 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:10.861 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.861 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:10.862 [2024-10-11 12:08:13.325563] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:10.862 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.862 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:31:10.862 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:10.862 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:10.862 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:10.862 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:31:10.862 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:31:10.862 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.862 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:10.862 Malloc0 00:31:10.862 [2024-10-11 12:08:13.425738] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:10.862 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.862 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:31:10.862 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:10.862 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:10.862 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2146150 00:31:10.862 12:08:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2146150 /var/tmp/bdevperf.sock 00:31:10.862 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2146150 ']' 00:31:10.862 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:10.862 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:10.862 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:10.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:10.862 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:31:10.862 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:10.862 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:31:10.862 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:10.862 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:31:10.862 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:31:10.862 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:31:10.862 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:31:10.862 { 00:31:10.862 "params": { 00:31:10.862 "name": "Nvme$subsystem", 00:31:10.862 "trtype": "$TEST_TRANSPORT", 00:31:10.862 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:10.862 "adrfam": "ipv4", 00:31:10.862 "trsvcid": "$NVMF_PORT", 00:31:10.862 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:10.862 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:10.862 "hdgst": ${hdgst:-false}, 00:31:10.862 "ddgst": ${ddgst:-false} 00:31:10.862 }, 00:31:10.862 "method": "bdev_nvme_attach_controller" 00:31:10.862 } 00:31:10.862 EOF 00:31:10.862 )") 00:31:10.862 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:31:10.862 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 
00:31:10.862 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:31:10.862 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:31:10.862 "params": { 00:31:10.862 "name": "Nvme0", 00:31:10.862 "trtype": "tcp", 00:31:10.862 "traddr": "10.0.0.2", 00:31:10.862 "adrfam": "ipv4", 00:31:10.862 "trsvcid": "4420", 00:31:10.862 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:10.862 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:10.862 "hdgst": false, 00:31:10.862 "ddgst": false 00:31:10.862 }, 00:31:10.862 "method": "bdev_nvme_attach_controller" 00:31:10.862 }' 00:31:10.862 [2024-10-11 12:08:13.534284] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:31:10.862 [2024-10-11 12:08:13.534354] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2146150 ] 00:31:11.123 [2024-10-11 12:08:13.618590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:11.123 [2024-10-11 12:08:13.672431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:11.384 Running I/O for 10 seconds... 00:31:11.958 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:11.958 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:31:11.958 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:31:11.958 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.958 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:11.958 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.958 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:11.958 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:31:11.958 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:31:11.958 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:31:11.958 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:31:11.958 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:31:11.958 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:31:11.958 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:31:11.958 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:31:11.958 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:31:11.958 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.958 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:11.958 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.958 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=803 00:31:11.958 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 803 -ge 100 ']' 00:31:11.958 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:31:11.958 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:31:11.958 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:31:11.958 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:31:11.958 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.958 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:11.958 [2024-10-11 12:08:14.437230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x817d00 is same with the state(6) to be set 00:31:11.958 [2024-10-11 12:08:14.437288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x817d00 is same with the state(6) to be set 00:31:11.958 [2024-10-11 12:08:14.437298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x817d00 is same with the state(6) to be set 00:31:11.958 [2024-10-11 12:08:14.437306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x817d00 is same with the state(6) to be set 00:31:11.958 [2024-10-11 12:08:14.437314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x817d00 is same with the state(6) to be set 00:31:11.958 [2024-10-11 12:08:14.437321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x817d00 is same with the state(6) to be set 00:31:11.958 [2024-10-11 12:08:14.437329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x817d00 is same with the state(6) to be set 00:31:11.958 [2024-10-11 12:08:14.437337] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x817d00 is same with the state(6) to be set 00:31:11.958 [2024-10-11 12:08:14.437344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x817d00 is same with the state(6) to be set 00:31:11.958 [2024-10-11 12:08:14.437351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x817d00 is same with the state(6) to be set 00:31:11.958 [2024-10-11 12:08:14.437359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x817d00 is same with the state(6) to be set 00:31:11.958 
[2024-10-11 12:08:14.437366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x817d00 is same with the state(6) to be set 00:31:11.958 [2024-10-11 12:08:14.437374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x817d00 is same with the state(6) to be set 00:31:11.958 [2024-10-11 12:08:14.437381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x817d00 is same with the state(6) to be set 00:31:11.958 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.958 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:31:11.958 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.958 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:11.958 [2024-10-11 12:08:14.448737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:11.958 [2024-10-11 12:08:14.448793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.958 [2024-10-11 12:08:14.448805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:11.958 [2024-10-11 12:08:14.448813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.958 [2024-10-11 12:08:14.448822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:11.958 [2024-10-11 12:08:14.448830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.958 [2024-10-11 12:08:14.448839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:11.958 [2024-10-11 12:08:14.448855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.958 [2024-10-11 12:08:14.448864] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x196e540 is same with the state(6) to be set 00:31:11.958 [2024-10-11 12:08:14.449161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.958 [2024-10-11 12:08:14.449177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.958 [2024-10-11 12:08:14.449195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:114816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.958 [2024-10-11 12:08:14.449204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.958 [2024-10-11 12:08:14.449215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:114944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.958 [2024-10-11 
12:08:14.449224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.958 [2024-10-11 12:08:14.449235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:115072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.958 [2024-10-11 12:08:14.449242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.958 [2024-10-11 12:08:14.449252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:115200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.958 [2024-10-11 12:08:14.449260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.958 [2024-10-11 12:08:14.449272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:115328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.958 [2024-10-11 12:08:14.449280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.958 [2024-10-11 12:08:14.449291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:115456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.958 [2024-10-11 12:08:14.449298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.958 [2024-10-11 12:08:14.449308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:115584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.958 [2024-10-11 12:08:14.449316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.958 [2024-10-11 12:08:14.449326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:115712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.958 [2024-10-11 12:08:14.449333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.958 [2024-10-11 12:08:14.449344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:115840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.958 [2024-10-11 12:08:14.449352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.958 [2024-10-11 12:08:14.449363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:115968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.958 [2024-10-11 12:08:14.449370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.958 [2024-10-11 12:08:14.449380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:116096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.958 [2024-10-11 12:08:14.449394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.958 [2024-10-11 12:08:14.449404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:116224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.958 [2024-10-11 
12:08:14.449412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.958 [2024-10-11 12:08:14.449422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:116352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.958 [2024-10-11 12:08:14.449429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.958 [2024-10-11 12:08:14.449439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:116480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.958 [2024-10-11 12:08:14.449448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.959 [2024-10-11 12:08:14.449458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:116608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.959 [2024-10-11 12:08:14.449467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.959 [2024-10-11 12:08:14.449477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:116736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.959 [2024-10-11 12:08:14.449484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.959 [2024-10-11 12:08:14.449495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:116864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.959 [2024-10-11 12:08:14.449503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.959 [2024-10-11 12:08:14.449514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:116992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.959 [2024-10-11 12:08:14.449523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.959 [2024-10-11 12:08:14.449533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:117120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.959 [2024-10-11 12:08:14.449540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.959 [2024-10-11 12:08:14.449550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:117248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.959 [2024-10-11 12:08:14.449558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.959 [2024-10-11 12:08:14.449569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:117376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.959 [2024-10-11 12:08:14.449578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.959 [2024-10-11 12:08:14.449587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:117504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.959 [2024-10-11 
12:08:14.449595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.959 [2024-10-11 12:08:14.449604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:117632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.959 [2024-10-11 12:08:14.449612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.959 [2024-10-11 12:08:14.449625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:117760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.959 [2024-10-11 12:08:14.449633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.959 [2024-10-11 12:08:14.449643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:117888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.959 [2024-10-11 12:08:14.449651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.959 [2024-10-11 12:08:14.449660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:118016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.959 [2024-10-11 12:08:14.449670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.959 [2024-10-11 12:08:14.449682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:118144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.959 [2024-10-11 12:08:14.449690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.959 [2024-10-11 12:08:14.449701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:118272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.959 [2024-10-11 12:08:14.449708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.959 [2024-10-11 12:08:14.449718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:118400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.959 [2024-10-11 12:08:14.449731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.959 [2024-10-11 12:08:14.449743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:118528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.959 [2024-10-11 12:08:14.449751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.959 [2024-10-11 12:08:14.449760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:118656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.959 [2024-10-11 12:08:14.449768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.959 [2024-10-11 12:08:14.449778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:118784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.959 [2024-10-11 
12:08:14.449786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.959 [2024-10-11 12:08:14.449797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:118912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.959 [2024-10-11 12:08:14.449805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.959 [2024-10-11 12:08:14.449816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:119040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.959 [2024-10-11 12:08:14.449824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.959 [2024-10-11 12:08:14.449835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:119168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.959 [2024-10-11 12:08:14.449844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.959 [2024-10-11 12:08:14.449855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:119296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.959 [2024-10-11 12:08:14.449864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.959 [2024-10-11 12:08:14.449875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:119424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.959 [2024-10-11 12:08:14.449882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.959 [2024-10-11 12:08:14.449892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:119552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.959 [2024-10-11 12:08:14.449901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.959 [2024-10-11 12:08:14.449911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:119680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.959 [2024-10-11 12:08:14.449919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.959 [2024-10-11 12:08:14.449929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:119808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.959 [2024-10-11 12:08:14.449936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.959 [2024-10-11 12:08:14.449945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:119936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.959 [2024-10-11 12:08:14.449955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.959 [2024-10-11 12:08:14.449965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:120064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.959 [2024-10-11 
12:08:14.449974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.959 [2024-10-11 12:08:14.449983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:120192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.959 [2024-10-11 12:08:14.449991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.959 [2024-10-11 12:08:14.450000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:120320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.959 [2024-10-11 12:08:14.450009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.959 [2024-10-11 12:08:14.450020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:120448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.959 [2024-10-11 12:08:14.450028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.959 [2024-10-11 12:08:14.450037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:120576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.959 [2024-10-11 12:08:14.450044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.959 [2024-10-11 12:08:14.450053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:120704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.959 [2024-10-11 12:08:14.450061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.959 [2024-10-11 12:08:14.450078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:120832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.959 [2024-10-11 12:08:14.450087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.959 [2024-10-11 12:08:14.450098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:120960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.959 [2024-10-11 12:08:14.450106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.959 [2024-10-11 12:08:14.450115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:121088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.959 [2024-10-11 12:08:14.450123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.959 [2024-10-11 12:08:14.450135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:121216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.959 [2024-10-11 12:08:14.450143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.959 [2024-10-11 12:08:14.450153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:121344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.959 [2024-10-11 
12:08:14.450160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.959 [2024-10-11 12:08:14.450169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:121472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.959 [2024-10-11 12:08:14.450176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.959 [2024-10-11 12:08:14.450185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:121600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.959 [2024-10-11 12:08:14.450194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.959 [2024-10-11 12:08:14.450204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:121728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.959 [2024-10-11 12:08:14.450212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.959 [2024-10-11 12:08:14.450221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:121856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.960 [2024-10-11 12:08:14.450228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.960 [2024-10-11 12:08:14.450238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:121984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.960 [2024-10-11 12:08:14.450246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.960 [2024-10-11 12:08:14.450256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:122112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.960 [2024-10-11 12:08:14.450264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.960 [2024-10-11 12:08:14.450274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:122240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.960 [2024-10-11 12:08:14.450281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.960 [2024-10-11 12:08:14.450290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:122368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.960 [2024-10-11 12:08:14.450297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.960 [2024-10-11 12:08:14.450306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:122496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.960 [2024-10-11 12:08:14.450321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.960 [2024-10-11 12:08:14.450331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:122624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:11.960 [2024-10-11 
12:08:14.450339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:11.960 [2024-10-11 12:08:14.450348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:122752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:11.960 [2024-10-11 12:08:14.450355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:11.960 [2024-10-11 12:08:14.450440] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b876b0 was disconnected and freed. reset controller.
00:31:11.960 [2024-10-11 12:08:14.451641] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:31:11.960 task offset: 114688 on job bdev=Nvme0n1 fails
00:31:11.960
00:31:11.960 Latency(us)
00:31:11.960 [2024-10-11T10:08:14.663Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:11.960 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:11.960 Job: Nvme0n1 ended in about 0.61 seconds with error
00:31:11.960 Verification LBA range: start 0x0 length 0x400
00:31:11.960 Nvme0n1 : 0.61 1463.62 91.48 104.54 0.00 39848.81 2088.96 33860.27
00:31:11.960 [2024-10-11T10:08:14.663Z] ===================================================================================================================
00:31:11.960 [2024-10-11T10:08:14.663Z] Total : 1463.62 91.48 104.54 0.00 39848.81 2088.96 33860.27
00:31:11.960 [2024-10-11 12:08:14.453843] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:31:11.960 [2024-10-11 12:08:14.453881] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x196e540 (9): Bad file descriptor
00:31:11.960 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:11.960 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:31:11.960 [2024-10-11 12:08:14.546887] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
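The trace above is the core of the host-management check: bdevperf drives I/O to Nvme0n1, the test removes the host from the subsystem's allow list so the queued WRITEs complete as ABORTED - SQ DELETION, then re-adds the host and lets the controller reset reconnect. The lines below are a minimal, illustrative sketch of the same flow using only RPCs that appear in this log (bdev_get_iostat, nvmf_subsystem_remove_host, nvmf_subsystem_add_host); it assumes a target on the default RPC socket and bdevperf on /var/tmp/bdevperf.sock, and it is not the exact waitforio helper from host_management.sh.

# Illustrative sketch only, not output from this run.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subsys=nqn.2016-06.io.spdk:cnode0
host=nqn.2016-06.io.spdk:host0

# Let bdevperf issue some I/O first (the test checks for >= 100 read ops).
while true; do
    reads=$("$rpc" -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
    [ "$reads" -ge 100 ] && break
    sleep 0.25
done

# Removing the host ACL tears down the connection; the in-flight queue is
# aborted (the SQ DELETION notices above). Re-adding the host lets the
# initiator's controller reset succeed.
"$rpc" nvmf_subsystem_remove_host "$subsys" "$host"
"$rpc" nvmf_subsystem_add_host "$subsys" "$host"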
00:31:12.902 12:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2146150 00:31:12.902 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2146150) - No such process 00:31:12.902 12:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:31:12.902 12:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:31:12.902 12:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:31:12.902 12:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:31:12.902 12:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:31:12.902 12:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:31:12.902 12:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:31:12.902 12:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:31:12.902 { 00:31:12.902 "params": { 00:31:12.902 "name": "Nvme$subsystem", 00:31:12.902 "trtype": "$TEST_TRANSPORT", 00:31:12.902 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:12.902 "adrfam": "ipv4", 00:31:12.902 "trsvcid": "$NVMF_PORT", 00:31:12.902 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:12.902 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:12.902 "hdgst": ${hdgst:-false}, 00:31:12.902 "ddgst": ${ddgst:-false} 00:31:12.902 }, 00:31:12.902 "method": "bdev_nvme_attach_controller" 00:31:12.902 } 00:31:12.902 EOF 00:31:12.902 )") 00:31:12.902 12:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:31:12.902 12:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:31:12.902 12:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:31:12.902 12:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:31:12.902 "params": { 00:31:12.902 "name": "Nvme0", 00:31:12.902 "trtype": "tcp", 00:31:12.902 "traddr": "10.0.0.2", 00:31:12.902 "adrfam": "ipv4", 00:31:12.902 "trsvcid": "4420", 00:31:12.902 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:12.902 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:12.902 "hdgst": false, 00:31:12.902 "ddgst": false 00:31:12.902 }, 00:31:12.902 "method": "bdev_nvme_attach_controller" 00:31:12.902 }' 00:31:12.902 [2024-10-11 12:08:15.514726] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
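The heredoc template and the expanded JSON printed above come from gen_nvmf_target_json: a bdev_nvme_attach_controller entry is rendered per subsystem and handed to bdevperf through a file descriptor (--json /dev/fd/62) instead of a config file on disk. A rough equivalent is sketched below; the outer "subsystems"/"config" envelope is the standard SPDK JSON-config layout rather than text shown verbatim in this excerpt, so treat it as an illustration, assuming the target address and NQNs used in this run and a working directory at the spdk repository root.

# Illustrative sketch, not part of the captured run.
gen_json() {
    cat <<EOF
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
}

# Process substitution hands the config to bdevperf on a /dev/fd/NN path,
# mirroring the --json /dev/fd/62 invocation above (same -q/-o/-w/-t flags).
./build/examples/bdevperf --json <(gen_json) -q 64 -o 65536 -w verify -t 1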
00:31:12.902 [2024-10-11 12:08:15.514783] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2146520 ]
00:31:12.902 [2024-10-11 12:08:15.594540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:13.163 [2024-10-11 12:08:15.629799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:31:13.163 Running I/O for 1 seconds...
00:31:14.549 1472.00 IOPS, 92.00 MiB/s
00:31:14.549
00:31:14.549 Latency(us)
00:31:14.549 [2024-10-11T10:08:17.252Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:14.549 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:14.549 Verification LBA range: start 0x0 length 0x400
00:31:14.549 Nvme0n1 : 1.01 1514.99 94.69 0.00 0.00 41504.42 6744.75 33860.27
00:31:14.549 [2024-10-11T10:08:17.252Z] ===================================================================================================================
00:31:14.549 [2024-10-11T10:08:17.252Z] Total : 1514.99 94.69 0.00 0.00 41504.42 6744.75 33860.27
00:31:14.549 12:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:31:14.549 12:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:31:14.549 12:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:31:14.549 12:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:31:14.549 12:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:31:14.549 12:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup
00:31:14.549 12:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:31:14.549 12:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:14.549 12:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:31:14.549 12:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:14.549 12:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:14.549 rmmod nvme_tcp
00:31:14.549 rmmod nvme_fabrics
00:31:14.549 rmmod nvme_keyring
00:31:14.549 12:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:14.549 12:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:31:14.549 12:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:31:14.549 12:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 2146040 ']'
00:31:14.549 12:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 2146040
00:31:14.549 12:08:17
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 2146040 ']' 00:31:14.549 12:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 2146040 00:31:14.549 12:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:31:14.549 12:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:14.549 12:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2146040 00:31:14.549 12:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:14.549 12:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:14.549 12:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2146040' 00:31:14.549 killing process with pid 2146040 00:31:14.549 12:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 2146040 00:31:14.549 12:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 2146040 00:31:14.549 [2024-10-11 12:08:17.189795] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:31:14.549 12:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:14.549 12:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:14.549 12:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:14.549 12:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:31:14.549 12:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:31:14.549 12:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:14.549 12:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:31:14.549 12:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:14.549 12:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:14.549 12:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:14.549 12:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:14.549 12:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:17.097 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:31:17.098 00:31:17.098 real 0m14.848s 00:31:17.098 user 
0m19.180s 00:31:17.098 sys 0m7.643s 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:17.098 ************************************ 00:31:17.098 END TEST nvmf_host_management 00:31:17.098 ************************************ 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:17.098 ************************************ 00:31:17.098 START TEST nvmf_lvol 00:31:17.098 ************************************ 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:31:17.098 * Looking for test storage... 00:31:17.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 
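The scripts/common.sh trace around this point is only lcov version detection: lt 1.15 2 splits both version strings on ".-:" and compares them field by field to decide which coverage options to export. A condensed, illustrative sketch of that comparison idea (not the verbatim cmp_versions implementation, and assuming purely numeric fields) looks like this:

# Illustrative sketch of a field-wise version "less than" test.
version_lt() {
    local IFS=.-:
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # missing fields count as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal, so not "less than"
}

version_lt 1.15 2 && echo "1.15 sorts before 2"   # true: 1 < 2 in the first field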
00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:17.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:17.098 --rc genhtml_branch_coverage=1 00:31:17.098 --rc genhtml_function_coverage=1 00:31:17.098 --rc genhtml_legend=1 00:31:17.098 --rc geninfo_all_blocks=1 00:31:17.098 --rc geninfo_unexecuted_blocks=1 00:31:17.098 00:31:17.098 ' 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:17.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:17.098 --rc genhtml_branch_coverage=1 00:31:17.098 --rc genhtml_function_coverage=1 00:31:17.098 --rc genhtml_legend=1 00:31:17.098 --rc geninfo_all_blocks=1 00:31:17.098 --rc geninfo_unexecuted_blocks=1 00:31:17.098 00:31:17.098 ' 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:17.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:17.098 --rc genhtml_branch_coverage=1 00:31:17.098 --rc genhtml_function_coverage=1 00:31:17.098 --rc genhtml_legend=1 00:31:17.098 --rc geninfo_all_blocks=1 00:31:17.098 --rc geninfo_unexecuted_blocks=1 00:31:17.098 00:31:17.098 ' 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:17.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:17.098 --rc genhtml_branch_coverage=1 00:31:17.098 --rc genhtml_function_coverage=1 
00:31:17.098 --rc genhtml_legend=1 00:31:17.098 --rc geninfo_all_blocks=1 00:31:17.098 --rc geninfo_unexecuted_blocks=1 00:31:17.098 00:31:17.098 ' 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.098 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:31:17.099 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.099 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:31:17.099 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:17.099 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:17.099 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:17.099 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:17.099 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:17.099 12:08:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:17.099 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:17.099 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:17.099 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:17.099 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:17.099 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:17.099 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:17.099 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:31:17.099 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:31:17.099 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:17.099 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:31:17.099 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:17.099 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:17.099 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:17.099 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:17.099 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:17.099 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:17.099 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:17.099 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:17.099 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:17.099 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:17.099 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:31:17.099 12:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:25.241 12:08:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:25.241 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:25.241 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:25.241 Found net devices under 0000:31:00.0: cvl_0_0 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for 
pci in "${pci_devs[@]}" 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:25.241 Found net devices under 0000:31:00.1: cvl_0_1 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:25.241 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:25.242 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:25.242 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:25.242 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:25.242 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:25.242 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:25.242 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:25.242 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:25.242 12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:25.242 
12:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:25.242 12:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:25.242 12:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:25.242 12:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:25.242 12:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:25.242 12:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:25.242 12:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:25.242 12:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:25.242 12:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:25.242 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:25.242 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms 00:31:25.242 00:31:25.242 --- 10.0.0.2 ping statistics --- 00:31:25.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:25.242 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:31:25.242 12:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:25.242 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:25.242 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:31:25.242 00:31:25.242 --- 10.0.0.1 ping statistics --- 00:31:25.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:25.242 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:31:25.242 12:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:25.242 12:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:31:25.242 12:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:25.242 12:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:25.242 12:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:25.242 12:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:25.242 12:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:25.242 12:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:25.242 12:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:25.242 12:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:31:25.242 12:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:25.242 12:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:25.242 12:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:25.242 12:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=2151225 00:31:25.242 12:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 2151225 00:31:25.242 12:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:31:25.242 12:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 2151225 ']' 00:31:25.242 12:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:25.242 12:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:25.242 12:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:25.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:25.242 12:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:25.242 12:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:25.242 [2024-10-11 12:08:27.360919] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
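At this point nvmftestinit for the nvmf_lvol test is complete: both Intel E810 ports (0x8086:0x159b, driver ice) were found and exposed as cvl_0_0/cvl_0_1, the target-side port was moved into its own network namespace, connectivity was verified with one ping in each direction, and nvme-tcp was loaded on the initiator side. Condensed from the trace above, the topology setup amounts to roughly the following sketch (the cvl_0_* interface names and 10.0.0.0/24 addresses are specific to this host, and the address flushes plus the iptables comment tag the harness adds are omitted):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator
    modprobe nvme-tcp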
00:31:25.242 [2024-10-11 12:08:27.362715] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:31:25.242 [2024-10-11 12:08:27.362793] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:25.242 [2024-10-11 12:08:27.454621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:25.242 [2024-10-11 12:08:27.507198] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:25.242 [2024-10-11 12:08:27.507248] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:25.242 [2024-10-11 12:08:27.507256] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:25.242 [2024-10-11 12:08:27.507263] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:25.242 [2024-10-11 12:08:27.507270] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:25.242 [2024-10-11 12:08:27.509058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:25.242 [2024-10-11 12:08:27.509224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:25.242 [2024-10-11 12:08:27.509329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:25.242 [2024-10-11 12:08:27.585761] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:25.242 [2024-10-11 12:08:27.586693] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:25.242 [2024-10-11 12:08:27.586986] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:25.242 [2024-10-11 12:08:27.587167] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
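The target itself is launched inside that namespace with a three-core mask in interrupt mode, which is why the EAL reports three cores, three reactors come up, and app_thread plus three nvmf_tgt poll-group threads are switched to intr mode above. The remainder of the nvmf_lvol run traced below then builds a RAID0 of two malloc bdevs, carves an lvol out of it, exports it over NVMe/TCP, runs spdk_nvme_perf against it while taking a snapshot, resizing, cloning and inflating, and finally tears everything down. A condensed sketch of that flow, not a verbatim reproduction of the script: rpc.py, nvmf_tgt and spdk_nvme_perf stand for scripts/rpc.py and the build/bin binaries under the workspace's spdk checkout, and the harness's sleeps, waitforlisten and error handling are dropped:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512                      # Malloc0
    rpc.py bdev_malloc_create 64 512                      # Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)      # lvstore UUID captured from stdout
    lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)     # lvol UUID captured from stdout
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
    perf_pid=$!
    snapshot=$(rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    rpc.py bdev_lvol_resize "$lvol" 30
    clone=$(rpc.py bdev_lvol_clone "$snapshot" MY_CLONE)
    rpc.py bdev_lvol_inflate "$clone"
    wait "$perf_pid"
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    rpc.py bdev_lvol_delete "$lvol"
    rpc.py bdev_lvol_delete_lvstore -u "$lvs"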
00:31:25.503 12:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:25.503 12:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:31:25.503 12:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:25.503 12:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:25.503 12:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:25.764 12:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:25.764 12:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:25.764 [2024-10-11 12:08:28.378309] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:25.764 12:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:26.025 12:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:31:26.025 12:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:26.286 12:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:31:26.286 12:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:31:26.548 12:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:31:26.548 12:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=968ccb4c-53ca-4a23-b606-6877abc13e2a 00:31:26.548 12:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 968ccb4c-53ca-4a23-b606-6877abc13e2a lvol 20 00:31:26.808 12:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=b21aed56-be2a-46cd-af1b-99411985c731 00:31:26.808 12:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:27.069 12:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b21aed56-be2a-46cd-af1b-99411985c731 00:31:27.331 12:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:27.331 [2024-10-11 12:08:29.958177] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:31:27.331 12:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:27.592 12:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2151710 00:31:27.592 12:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:31:27.592 12:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:31:28.535 12:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot b21aed56-be2a-46cd-af1b-99411985c731 MY_SNAPSHOT 00:31:28.797 12:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=2c9c9a6a-597b-49d5-a0a9-17b94e7100d5 00:31:28.797 12:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize b21aed56-be2a-46cd-af1b-99411985c731 30 00:31:29.057 12:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 2c9c9a6a-597b-49d5-a0a9-17b94e7100d5 MY_CLONE 00:31:29.318 12:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=8f9f90ca-3642-440e-819b-56b5a38a0d5c 00:31:29.318 12:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 8f9f90ca-3642-440e-819b-56b5a38a0d5c 00:31:29.889 12:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2151710 00:31:38.032 Initializing NVMe Controllers 00:31:38.032 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:31:38.032 Controller IO queue size 128, less than required. 00:31:38.032 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:38.032 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:31:38.032 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:31:38.032 Initialization complete. Launching workers. 
00:31:38.032 ======================================================== 00:31:38.032 Latency(us) 00:31:38.032 Device Information : IOPS MiB/s Average min max 00:31:38.032 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15906.50 62.13 8047.51 771.23 62483.58 00:31:38.032 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15468.40 60.42 8277.11 3031.65 65067.50 00:31:38.032 ======================================================== 00:31:38.032 Total : 31374.90 122.56 8160.71 771.23 65067.50 00:31:38.032 00:31:38.032 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:38.032 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b21aed56-be2a-46cd-af1b-99411985c731 00:31:38.294 12:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 968ccb4c-53ca-4a23-b606-6877abc13e2a 00:31:38.555 12:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:31:38.555 12:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:31:38.555 12:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:31:38.555 12:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:38.555 12:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:31:38.555 12:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:38.555 12:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:31:38.555 12:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:38.555 12:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:38.555 rmmod nvme_tcp 00:31:38.555 rmmod nvme_fabrics 00:31:38.555 rmmod nvme_keyring 00:31:38.555 12:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:38.555 12:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:31:38.555 12:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:31:38.555 12:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 2151225 ']' 00:31:38.555 12:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 2151225 00:31:38.555 12:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 2151225 ']' 00:31:38.555 12:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 2151225 00:31:38.555 12:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:31:38.555 12:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:38.555 12:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2151225 00:31:38.555 12:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:38.555 12:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:38.555 12:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2151225' 00:31:38.555 killing process with pid 2151225 00:31:38.555 12:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 2151225 00:31:38.555 12:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 2151225 00:31:38.816 12:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:38.816 12:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:38.816 12:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:38.816 12:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:31:38.816 12:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:31:38.816 12:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:38.816 12:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:31:38.816 12:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:38.816 12:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:38.816 12:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:38.816 12:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:38.816 12:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:40.731 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:40.731 00:31:40.731 real 0m23.962s 00:31:40.731 user 0m55.583s 00:31:40.731 sys 0m10.973s 00:31:40.731 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:40.731 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:40.731 ************************************ 00:31:40.731 END TEST nvmf_lvol 00:31:40.731 ************************************ 00:31:40.731 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:31:40.731 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:31:40.731 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:40.731 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:40.731 ************************************ 00:31:40.731 START TEST nvmf_lvs_grow 00:31:40.731 
************************************ 00:31:40.731 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:31:40.993 * Looking for test storage... 00:31:40.993 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:40.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.993 --rc genhtml_branch_coverage=1 00:31:40.993 --rc genhtml_function_coverage=1 00:31:40.993 --rc genhtml_legend=1 00:31:40.993 --rc geninfo_all_blocks=1 00:31:40.993 --rc geninfo_unexecuted_blocks=1 00:31:40.993 00:31:40.993 ' 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:40.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.993 --rc genhtml_branch_coverage=1 00:31:40.993 --rc genhtml_function_coverage=1 00:31:40.993 --rc genhtml_legend=1 00:31:40.993 --rc geninfo_all_blocks=1 00:31:40.993 --rc geninfo_unexecuted_blocks=1 00:31:40.993 00:31:40.993 ' 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:40.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.993 --rc genhtml_branch_coverage=1 00:31:40.993 --rc genhtml_function_coverage=1 00:31:40.993 --rc genhtml_legend=1 00:31:40.993 --rc geninfo_all_blocks=1 00:31:40.993 --rc geninfo_unexecuted_blocks=1 00:31:40.993 00:31:40.993 ' 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:40.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.993 --rc genhtml_branch_coverage=1 00:31:40.993 --rc genhtml_function_coverage=1 00:31:40.993 --rc genhtml_legend=1 00:31:40.993 --rc geninfo_all_blocks=1 00:31:40.993 --rc geninfo_unexecuted_blocks=1 00:31:40.993 00:31:40.993 ' 00:31:40.993 12:08:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:40.993 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:40.994 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:40.994 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:40.994 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:40.994 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:31:40.994 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:40.994 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:40.994 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:40.994 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:40.994 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:40.994 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:40.994 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:40.994 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:40.994 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:40.994 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:40.994 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:31:40.994 12:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:49.137 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:49.137 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:31:49.137 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:49.137 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:49.137 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:49.137 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:49.137 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:49.137 12:08:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:31:49.137 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:49.137 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:31:49.137 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:31:49.137 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:31:49.137 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:31:49.137 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:31:49.137 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:31:49.137 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:49.137 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:49.137 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:49.138 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:49.138 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:49.138 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:49.138 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:49.138 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:49.138 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:49.138 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:49.138 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:49.138 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:49.138 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:49.138 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:49.138 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:49.138 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:49.138 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:49.138 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:49.138 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:31:49.138 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:49.138 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:49.138 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:49.138 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:49.138 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:49.138 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:49.138 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:49.138 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:49.138 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:49.138 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:49.138 12:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:49.138 Found net devices under 0000:31:00.0: cvl_0_0 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for 
pci in "${pci_devs[@]}" 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:49.138 Found net devices under 0000:31:00.1: cvl_0_1 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:49.138 12:08:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:49.138 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:49.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.607 ms 00:31:49.138 00:31:49.138 --- 10.0.0.2 ping statistics --- 00:31:49.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:49.138 rtt min/avg/max/mdev = 0.607/0.607/0.607/0.000 ms 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:49.138 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:49.138 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:31:49.138 00:31:49.138 --- 10.0.0.1 ping statistics --- 00:31:49.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:49.138 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:49.138 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:49.139 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:31:49.139 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:49.139 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:49.139 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:49.139 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=2158006 00:31:49.139 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 2158006 00:31:49.139 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:31:49.139 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 2158006 ']' 00:31:49.139 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:49.139 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:49.139 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:49.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:49.139 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:49.139 12:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:49.139 [2024-10-11 12:08:51.419870] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
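For reference, the nvmf_tcp_init portion traced above reduces to the short sequence below. This is only a condensed, annotated reading of this run's trace, not the harness itself; the interface names (cvl_0_0 / cvl_0_1), the namespace name and the 10.0.0.x addresses are simply the values recorded in the log.

# flush any stale addresses, then isolate the target-side port in its own namespace
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# address both ends: initiator stays in the root namespace, target lives in the netns
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP port on the initiator-facing interface and check reachability both ways
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1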
00:31:49.139 [2024-10-11 12:08:51.421001] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:31:49.139 [2024-10-11 12:08:51.421050] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:49.139 [2024-10-11 12:08:51.511234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:49.139 [2024-10-11 12:08:51.562350] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:49.139 [2024-10-11 12:08:51.562401] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:49.139 [2024-10-11 12:08:51.562409] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:49.139 [2024-10-11 12:08:51.562417] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:49.139 [2024-10-11 12:08:51.562423] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:49.139 [2024-10-11 12:08:51.563238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:49.139 [2024-10-11 12:08:51.639960] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:49.139 [2024-10-11 12:08:51.640253] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:49.711 12:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:49.711 12:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:31:49.711 12:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:49.711 12:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:49.711 12:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:49.711 12:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:49.711 12:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:49.972 [2024-10-11 12:08:52.444150] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:49.972 12:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:31:49.972 12:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:49.972 12:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:49.972 12:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:49.972 ************************************ 00:31:49.972 START TEST lvs_grow_clean 00:31:49.972 ************************************ 00:31:49.972 12:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # 
lvs_grow 00:31:49.972 12:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:31:49.972 12:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:31:49.972 12:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:31:49.972 12:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:31:49.972 12:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:31:49.972 12:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:31:49.972 12:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:49.972 12:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:49.972 12:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:50.234 12:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:31:50.234 12:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:31:50.234 12:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=9a3d4bc7-de20-42c8-aaf6-96ac3b79d621 00:31:50.234 12:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9a3d4bc7-de20-42c8-aaf6-96ac3b79d621 00:31:50.234 12:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:31:50.495 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:31:50.495 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:31:50.495 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9a3d4bc7-de20-42c8-aaf6-96ac3b79d621 lvol 150 00:31:50.756 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=d199bf98-1f25-4c2f-9ab6-45b23ec5049d 00:31:50.756 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:50.756 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:31:50.756 [2024-10-11 12:08:53.427775] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:31:50.756 [2024-10-11 12:08:53.427935] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:31:50.756 true 00:31:50.756 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9a3d4bc7-de20-42c8-aaf6-96ac3b79d621 00:31:50.756 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:31:51.016 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:31:51.016 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:51.277 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d199bf98-1f25-4c2f-9ab6-45b23ec5049d 00:31:51.538 12:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:51.538 [2024-10-11 12:08:54.156463] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:51.538 12:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:51.799 12:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2158712 00:31:51.799 12:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:31:51.799 12:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:51.799 12:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2158712 /var/tmp/bdevperf.sock 00:31:51.799 12:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 2158712 ']' 00:31:51.799 12:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:31:51.799 12:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:51.799 12:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:51.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:51.799 12:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:51.799 12:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:31:51.799 [2024-10-11 12:08:54.415670] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:31:51.799 [2024-10-11 12:08:54.415740] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2158712 ] 00:31:51.799 [2024-10-11 12:08:54.499314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:52.060 [2024-10-11 12:08:54.551368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:52.632 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:52.632 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:31:52.632 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:31:52.893 Nvme0n1 00:31:52.893 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:31:53.154 [ 00:31:53.154 { 00:31:53.154 "name": "Nvme0n1", 00:31:53.154 "aliases": [ 00:31:53.154 "d199bf98-1f25-4c2f-9ab6-45b23ec5049d" 00:31:53.154 ], 00:31:53.154 "product_name": "NVMe disk", 00:31:53.154 "block_size": 4096, 00:31:53.154 "num_blocks": 38912, 00:31:53.154 "uuid": "d199bf98-1f25-4c2f-9ab6-45b23ec5049d", 00:31:53.154 "numa_id": 0, 00:31:53.154 "assigned_rate_limits": { 00:31:53.154 "rw_ios_per_sec": 0, 00:31:53.154 "rw_mbytes_per_sec": 0, 00:31:53.154 "r_mbytes_per_sec": 0, 00:31:53.154 "w_mbytes_per_sec": 0 00:31:53.154 }, 00:31:53.154 "claimed": false, 00:31:53.154 "zoned": false, 00:31:53.154 "supported_io_types": { 00:31:53.154 "read": true, 00:31:53.154 "write": true, 00:31:53.154 "unmap": true, 00:31:53.154 "flush": true, 00:31:53.154 "reset": true, 00:31:53.154 "nvme_admin": true, 00:31:53.154 "nvme_io": true, 00:31:53.154 "nvme_io_md": false, 00:31:53.154 "write_zeroes": true, 00:31:53.154 "zcopy": false, 00:31:53.154 "get_zone_info": false, 00:31:53.154 "zone_management": false, 00:31:53.154 "zone_append": false, 00:31:53.154 "compare": true, 00:31:53.154 "compare_and_write": true, 00:31:53.154 "abort": true, 00:31:53.154 "seek_hole": false, 00:31:53.154 "seek_data": false, 00:31:53.154 "copy": true, 
00:31:53.154 "nvme_iov_md": false 00:31:53.154 }, 00:31:53.154 "memory_domains": [ 00:31:53.154 { 00:31:53.154 "dma_device_id": "system", 00:31:53.154 "dma_device_type": 1 00:31:53.154 } 00:31:53.154 ], 00:31:53.154 "driver_specific": { 00:31:53.154 "nvme": [ 00:31:53.154 { 00:31:53.154 "trid": { 00:31:53.154 "trtype": "TCP", 00:31:53.154 "adrfam": "IPv4", 00:31:53.154 "traddr": "10.0.0.2", 00:31:53.154 "trsvcid": "4420", 00:31:53.154 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:53.154 }, 00:31:53.154 "ctrlr_data": { 00:31:53.154 "cntlid": 1, 00:31:53.154 "vendor_id": "0x8086", 00:31:53.154 "model_number": "SPDK bdev Controller", 00:31:53.154 "serial_number": "SPDK0", 00:31:53.154 "firmware_revision": "25.01", 00:31:53.154 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:53.154 "oacs": { 00:31:53.154 "security": 0, 00:31:53.154 "format": 0, 00:31:53.154 "firmware": 0, 00:31:53.154 "ns_manage": 0 00:31:53.154 }, 00:31:53.154 "multi_ctrlr": true, 00:31:53.154 "ana_reporting": false 00:31:53.154 }, 00:31:53.155 "vs": { 00:31:53.155 "nvme_version": "1.3" 00:31:53.155 }, 00:31:53.155 "ns_data": { 00:31:53.155 "id": 1, 00:31:53.155 "can_share": true 00:31:53.155 } 00:31:53.155 } 00:31:53.155 ], 00:31:53.155 "mp_policy": "active_passive" 00:31:53.155 } 00:31:53.155 } 00:31:53.155 ] 00:31:53.155 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2158975 00:31:53.155 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:31:53.155 12:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:53.155 Running I/O for 10 seconds... 
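Each ten-second randwrite run in this log is driven the same way: bdevperf is started idle against its own RPC socket, the exported lvol is attached over the TCP listener created above, and perform_tests is triggered. A condensed sketch of that sequence, using only the binaries, flags and arguments recorded in this run (bdevperf is started in the background and waited on, as the waitforlisten trace shows):

# one core (0x2), 4 KiB I/O, queue depth 128, 10 s randwrite, -z = wait for perform_tests
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &

# attach the NVMe-oF namespace as Nvme0n1 through the bdevperf RPC socket
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0

# kick off the queued workload; the per-second latency table below is its output
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests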
00:31:54.095 Latency(us) 00:31:54.095 [2024-10-11T10:08:56.798Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:54.095 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:54.095 Nvme0n1 : 1.00 16930.00 66.13 0.00 0.00 0.00 0.00 0.00 00:31:54.095 [2024-10-11T10:08:56.798Z] =================================================================================================================== 00:31:54.095 [2024-10-11T10:08:56.798Z] Total : 16930.00 66.13 0.00 0.00 0.00 0.00 0.00 00:31:54.095 00:31:55.038 12:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 9a3d4bc7-de20-42c8-aaf6-96ac3b79d621 00:31:55.300 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:55.300 Nvme0n1 : 2.00 17109.50 66.83 0.00 0.00 0.00 0.00 0.00 00:31:55.300 [2024-10-11T10:08:58.003Z] =================================================================================================================== 00:31:55.300 [2024-10-11T10:08:58.003Z] Total : 17109.50 66.83 0.00 0.00 0.00 0.00 0.00 00:31:55.300 00:31:55.300 true 00:31:55.300 12:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9a3d4bc7-de20-42c8-aaf6-96ac3b79d621 00:31:55.300 12:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:31:55.560 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:31:55.560 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:31:55.560 12:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2158975 00:31:56.131 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:56.131 Nvme0n1 : 3.00 17335.67 67.72 0.00 0.00 0.00 0.00 0.00 00:31:56.131 [2024-10-11T10:08:58.834Z] =================================================================================================================== 00:31:56.131 [2024-10-11T10:08:58.834Z] Total : 17335.67 67.72 0.00 0.00 0.00 0.00 0.00 00:31:56.131 00:31:57.514 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:57.514 Nvme0n1 : 4.00 17705.75 69.16 0.00 0.00 0.00 0.00 0.00 00:31:57.514 [2024-10-11T10:09:00.217Z] =================================================================================================================== 00:31:57.514 [2024-10-11T10:09:00.217Z] Total : 17705.75 69.16 0.00 0.00 0.00 0.00 0.00 00:31:57.514 00:31:58.456 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:58.456 Nvme0n1 : 5.00 19233.20 75.13 0.00 0.00 0.00 0.00 0.00 00:31:58.456 [2024-10-11T10:09:01.159Z] =================================================================================================================== 00:31:58.456 [2024-10-11T10:09:01.159Z] Total : 19233.20 75.13 0.00 0.00 0.00 0.00 0.00 00:31:58.456 00:31:59.467 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:59.467 Nvme0n1 : 6.00 20251.50 79.11 0.00 0.00 0.00 0.00 0.00 00:31:59.467 [2024-10-11T10:09:02.170Z] 
=================================================================================================================== 00:31:59.467 [2024-10-11T10:09:02.170Z] Total : 20251.50 79.11 0.00 0.00 0.00 0.00 0.00 00:31:59.467 00:32:00.446 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:00.446 Nvme0n1 : 7.00 20997.14 82.02 0.00 0.00 0.00 0.00 0.00 00:32:00.446 [2024-10-11T10:09:03.149Z] =================================================================================================================== 00:32:00.446 [2024-10-11T10:09:03.149Z] Total : 20997.14 82.02 0.00 0.00 0.00 0.00 0.00 00:32:00.446 00:32:01.387 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:01.387 Nvme0n1 : 8.00 21556.50 84.21 0.00 0.00 0.00 0.00 0.00 00:32:01.387 [2024-10-11T10:09:04.090Z] =================================================================================================================== 00:32:01.387 [2024-10-11T10:09:04.090Z] Total : 21556.50 84.21 0.00 0.00 0.00 0.00 0.00 00:32:01.387 00:32:02.328 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:02.328 Nvme0n1 : 9.00 21991.56 85.90 0.00 0.00 0.00 0.00 0.00 00:32:02.328 [2024-10-11T10:09:05.031Z] =================================================================================================================== 00:32:02.328 [2024-10-11T10:09:05.031Z] Total : 21991.56 85.90 0.00 0.00 0.00 0.00 0.00 00:32:02.328 00:32:03.270 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:03.270 Nvme0n1 : 10.00 22341.20 87.27 0.00 0.00 0.00 0.00 0.00 00:32:03.270 [2024-10-11T10:09:05.973Z] =================================================================================================================== 00:32:03.270 [2024-10-11T10:09:05.973Z] Total : 22341.20 87.27 0.00 0.00 0.00 0.00 0.00 00:32:03.270 00:32:03.270 00:32:03.270 Latency(us) 00:32:03.270 [2024-10-11T10:09:05.973Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:03.270 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:03.270 Nvme0n1 : 10.00 22344.84 87.28 0.00 0.00 5724.92 3850.24 31894.19 00:32:03.270 [2024-10-11T10:09:05.973Z] =================================================================================================================== 00:32:03.270 [2024-10-11T10:09:05.973Z] Total : 22344.84 87.28 0.00 0.00 5724.92 3850.24 31894.19 00:32:03.270 { 00:32:03.270 "results": [ 00:32:03.270 { 00:32:03.270 "job": "Nvme0n1", 00:32:03.270 "core_mask": "0x2", 00:32:03.270 "workload": "randwrite", 00:32:03.270 "status": "finished", 00:32:03.270 "queue_depth": 128, 00:32:03.270 "io_size": 4096, 00:32:03.270 "runtime": 10.003337, 00:32:03.270 "iops": 22344.843525715467, 00:32:03.270 "mibps": 87.28454502232604, 00:32:03.270 "io_failed": 0, 00:32:03.270 "io_timeout": 0, 00:32:03.270 "avg_latency_us": 5724.921285893026, 00:32:03.270 "min_latency_us": 3850.24, 00:32:03.270 "max_latency_us": 31894.18666666667 00:32:03.270 } 00:32:03.270 ], 00:32:03.270 "core_count": 1 00:32:03.270 } 00:32:03.270 12:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2158712 00:32:03.270 12:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 2158712 ']' 00:32:03.270 12:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 2158712 00:32:03.270 
12:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:32:03.270 12:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:03.270 12:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2158712 00:32:03.270 12:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:03.270 12:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:03.270 12:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2158712' 00:32:03.270 killing process with pid 2158712 00:32:03.270 12:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 2158712 00:32:03.270 Received shutdown signal, test time was about 10.000000 seconds 00:32:03.270 00:32:03.270 Latency(us) 00:32:03.270 [2024-10-11T10:09:05.973Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:03.270 [2024-10-11T10:09:05.973Z] =================================================================================================================== 00:32:03.270 [2024-10-11T10:09:05.973Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:03.270 12:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 2158712 00:32:03.531 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:03.531 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:03.791 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9a3d4bc7-de20-42c8-aaf6-96ac3b79d621 00:32:03.791 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:04.053 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:04.053 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:32:04.053 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:04.053 [2024-10-11 12:09:06.703844] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:04.053 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9a3d4bc7-de20-42c8-aaf6-96ac3b79d621 00:32:04.053 
12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:32:04.053 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9a3d4bc7-de20-42c8-aaf6-96ac3b79d621 00:32:04.053 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:04.053 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:04.053 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:04.053 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:04.053 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:04.053 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:04.053 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:04.053 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:32:04.053 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9a3d4bc7-de20-42c8-aaf6-96ac3b79d621 00:32:04.314 request: 00:32:04.314 { 00:32:04.314 "uuid": "9a3d4bc7-de20-42c8-aaf6-96ac3b79d621", 00:32:04.314 "method": "bdev_lvol_get_lvstores", 00:32:04.314 "req_id": 1 00:32:04.314 } 00:32:04.314 Got JSON-RPC error response 00:32:04.314 response: 00:32:04.314 { 00:32:04.314 "code": -19, 00:32:04.314 "message": "No such device" 00:32:04.314 } 00:32:04.314 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:32:04.314 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:04.314 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:04.314 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:04.314 12:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:04.575 aio_bdev 00:32:04.575 12:09:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
d199bf98-1f25-4c2f-9ab6-45b23ec5049d 00:32:04.575 12:09:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=d199bf98-1f25-4c2f-9ab6-45b23ec5049d 00:32:04.575 12:09:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:04.575 12:09:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:32:04.575 12:09:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:04.575 12:09:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:04.575 12:09:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:04.837 12:09:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d199bf98-1f25-4c2f-9ab6-45b23ec5049d -t 2000 00:32:04.837 [ 00:32:04.837 { 00:32:04.837 "name": "d199bf98-1f25-4c2f-9ab6-45b23ec5049d", 00:32:04.837 "aliases": [ 00:32:04.837 "lvs/lvol" 00:32:04.837 ], 00:32:04.837 "product_name": "Logical Volume", 00:32:04.837 "block_size": 4096, 00:32:04.837 "num_blocks": 38912, 00:32:04.837 "uuid": "d199bf98-1f25-4c2f-9ab6-45b23ec5049d", 00:32:04.837 "assigned_rate_limits": { 00:32:04.837 "rw_ios_per_sec": 0, 00:32:04.837 "rw_mbytes_per_sec": 0, 00:32:04.837 "r_mbytes_per_sec": 0, 00:32:04.837 "w_mbytes_per_sec": 0 00:32:04.837 }, 00:32:04.837 "claimed": false, 00:32:04.837 "zoned": false, 00:32:04.837 "supported_io_types": { 00:32:04.837 "read": true, 00:32:04.837 "write": true, 00:32:04.837 "unmap": true, 00:32:04.837 "flush": false, 00:32:04.837 "reset": true, 00:32:04.837 "nvme_admin": false, 00:32:04.837 "nvme_io": false, 00:32:04.837 "nvme_io_md": false, 00:32:04.837 "write_zeroes": true, 00:32:04.837 "zcopy": false, 00:32:04.837 "get_zone_info": false, 00:32:04.837 "zone_management": false, 00:32:04.837 "zone_append": false, 00:32:04.837 "compare": false, 00:32:04.837 "compare_and_write": false, 00:32:04.837 "abort": false, 00:32:04.837 "seek_hole": true, 00:32:04.837 "seek_data": true, 00:32:04.837 "copy": false, 00:32:04.837 "nvme_iov_md": false 00:32:04.837 }, 00:32:04.837 "driver_specific": { 00:32:04.837 "lvol": { 00:32:04.837 "lvol_store_uuid": "9a3d4bc7-de20-42c8-aaf6-96ac3b79d621", 00:32:04.837 "base_bdev": "aio_bdev", 00:32:04.837 "thin_provision": false, 00:32:04.837 "num_allocated_clusters": 38, 00:32:04.837 "snapshot": false, 00:32:04.837 "clone": false, 00:32:04.837 "esnap_clone": false 00:32:04.837 } 00:32:04.837 } 00:32:04.837 } 00:32:04.837 ] 00:32:04.837 12:09:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:32:04.837 12:09:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9a3d4bc7-de20-42c8-aaf6-96ac3b79d621 00:32:04.837 12:09:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:05.098 12:09:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:05.098 12:09:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:05.098 12:09:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9a3d4bc7-de20-42c8-aaf6-96ac3b79d621 00:32:05.359 12:09:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:05.359 12:09:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d199bf98-1f25-4c2f-9ab6-45b23ec5049d 00:32:05.359 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9a3d4bc7-de20-42c8-aaf6-96ac3b79d621 00:32:05.620 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:05.882 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:05.882 00:32:05.882 real 0m15.969s 00:32:05.882 user 0m15.576s 00:32:05.882 sys 0m1.519s 00:32:05.882 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:05.882 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:05.882 ************************************ 00:32:05.882 END TEST lvs_grow_clean 00:32:05.882 ************************************ 00:32:05.882 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:32:05.882 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:05.882 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:05.882 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:05.882 ************************************ 00:32:05.882 START TEST lvs_grow_dirty 00:32:05.882 ************************************ 00:32:05.882 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:32:05.882 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:05.882 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:05.882 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:05.882 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:05.882 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:05.882 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:05.882 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:05.882 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:05.882 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:06.143 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:06.143 12:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:06.404 12:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=292331d5-a3fd-4dbd-8b43-9e0196b8f74e 00:32:06.404 12:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 292331d5-a3fd-4dbd-8b43-9e0196b8f74e 00:32:06.404 12:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:06.665 12:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:06.665 12:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:06.665 12:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 292331d5-a3fd-4dbd-8b43-9e0196b8f74e lvol 150 00:32:06.926 12:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=b6600899-ef9a-4962-beec-58457ba117b4 00:32:06.926 12:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:06.926 12:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:06.926 [2024-10-11 12:09:09.543781] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:06.926 [2024-10-11 12:09:09.543947] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:06.926 true 00:32:06.926 12:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 292331d5-a3fd-4dbd-8b43-9e0196b8f74e 00:32:06.926 12:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:07.187 12:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:07.187 12:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:07.449 12:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b6600899-ef9a-4962-beec-58457ba117b4 00:32:07.449 12:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:07.710 [2024-10-11 12:09:10.308393] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:07.710 12:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:07.970 12:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2162296 00:32:07.970 12:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:07.970 12:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:07.970 12:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2162296 /var/tmp/bdevperf.sock 00:32:07.970 12:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2162296 ']' 00:32:07.970 12:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:07.970 12:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:07.970 12:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:07.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
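Both the clean and the dirty variants provision the backing stack the same way before bdevperf attaches. The sketch below condenses the RPC sequence recorded in this run; the $spdk/$aio/$rpc shorthands are only for readability here, and <lvs-uuid>/<lvol-uuid> stand for the UUIDs the create calls return (9a3d4bc7-... and d199bf98-... in the clean run, 292331d5-... and b6600899-... in the dirty run).

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
aio=$spdk/test/nvmf/target/aio_bdev
rpc=$spdk/scripts/rpc.py

# 200 MiB file-backed AIO bdev, then an lvstore with 4 MiB clusters on top of it
rm -f $aio && truncate -s 200M $aio
$rpc bdev_aio_create $aio aio_bdev 4096
$rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
$rpc bdev_lvol_create -u <lvs-uuid> lvol 150        # 150 MiB logical volume

# export the lvol over NVMe/TCP on the listener set up earlier
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420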
00:32:07.970 12:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:07.970 12:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:07.970 [2024-10-11 12:09:10.558913] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:32:07.970 [2024-10-11 12:09:10.558970] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2162296 ] 00:32:07.970 [2024-10-11 12:09:10.635846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:07.970 [2024-10-11 12:09:10.665988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:08.911 12:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:08.911 12:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:32:08.911 12:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:08.911 Nvme0n1 00:32:09.171 12:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:09.171 [ 00:32:09.171 { 00:32:09.171 "name": "Nvme0n1", 00:32:09.171 "aliases": [ 00:32:09.171 "b6600899-ef9a-4962-beec-58457ba117b4" 00:32:09.171 ], 00:32:09.171 "product_name": "NVMe disk", 00:32:09.171 "block_size": 4096, 00:32:09.171 "num_blocks": 38912, 00:32:09.171 "uuid": "b6600899-ef9a-4962-beec-58457ba117b4", 00:32:09.171 "numa_id": 0, 00:32:09.171 "assigned_rate_limits": { 00:32:09.171 "rw_ios_per_sec": 0, 00:32:09.171 "rw_mbytes_per_sec": 0, 00:32:09.171 "r_mbytes_per_sec": 0, 00:32:09.171 "w_mbytes_per_sec": 0 00:32:09.171 }, 00:32:09.171 "claimed": false, 00:32:09.171 "zoned": false, 00:32:09.171 "supported_io_types": { 00:32:09.171 "read": true, 00:32:09.171 "write": true, 00:32:09.171 "unmap": true, 00:32:09.171 "flush": true, 00:32:09.171 "reset": true, 00:32:09.171 "nvme_admin": true, 00:32:09.171 "nvme_io": true, 00:32:09.171 "nvme_io_md": false, 00:32:09.171 "write_zeroes": true, 00:32:09.171 "zcopy": false, 00:32:09.171 "get_zone_info": false, 00:32:09.171 "zone_management": false, 00:32:09.171 "zone_append": false, 00:32:09.171 "compare": true, 00:32:09.171 "compare_and_write": true, 00:32:09.171 "abort": true, 00:32:09.171 "seek_hole": false, 00:32:09.171 "seek_data": false, 00:32:09.171 "copy": true, 00:32:09.171 "nvme_iov_md": false 00:32:09.172 }, 00:32:09.172 "memory_domains": [ 00:32:09.172 { 00:32:09.172 "dma_device_id": "system", 00:32:09.172 "dma_device_type": 1 00:32:09.172 } 00:32:09.172 ], 00:32:09.172 "driver_specific": { 00:32:09.172 "nvme": [ 00:32:09.172 { 00:32:09.172 "trid": { 00:32:09.172 "trtype": "TCP", 00:32:09.172 "adrfam": "IPv4", 00:32:09.172 "traddr": "10.0.0.2", 00:32:09.172 "trsvcid": "4420", 00:32:09.172 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:09.172 }, 00:32:09.172 "ctrlr_data": 
{ 00:32:09.172 "cntlid": 1, 00:32:09.172 "vendor_id": "0x8086", 00:32:09.172 "model_number": "SPDK bdev Controller", 00:32:09.172 "serial_number": "SPDK0", 00:32:09.172 "firmware_revision": "25.01", 00:32:09.172 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:09.172 "oacs": { 00:32:09.172 "security": 0, 00:32:09.172 "format": 0, 00:32:09.172 "firmware": 0, 00:32:09.172 "ns_manage": 0 00:32:09.172 }, 00:32:09.172 "multi_ctrlr": true, 00:32:09.172 "ana_reporting": false 00:32:09.172 }, 00:32:09.172 "vs": { 00:32:09.172 "nvme_version": "1.3" 00:32:09.172 }, 00:32:09.172 "ns_data": { 00:32:09.172 "id": 1, 00:32:09.172 "can_share": true 00:32:09.172 } 00:32:09.172 } 00:32:09.172 ], 00:32:09.172 "mp_policy": "active_passive" 00:32:09.172 } 00:32:09.172 } 00:32:09.172 ] 00:32:09.172 12:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2162585 00:32:09.172 12:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:09.172 12:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:09.432 Running I/O for 10 seconds... 00:32:10.375 Latency(us) 00:32:10.375 [2024-10-11T10:09:13.078Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:10.375 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:10.375 Nvme0n1 : 1.00 17487.00 68.31 0.00 0.00 0.00 0.00 0.00 00:32:10.375 [2024-10-11T10:09:13.078Z] =================================================================================================================== 00:32:10.375 [2024-10-11T10:09:13.078Z] Total : 17487.00 68.31 0.00 0.00 0.00 0.00 0.00 00:32:10.375 00:32:11.318 12:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 292331d5-a3fd-4dbd-8b43-9e0196b8f74e 00:32:11.318 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:11.318 Nvme0n1 : 2.00 17734.00 69.27 0.00 0.00 0.00 0.00 0.00 00:32:11.318 [2024-10-11T10:09:14.021Z] =================================================================================================================== 00:32:11.318 [2024-10-11T10:09:14.021Z] Total : 17734.00 69.27 0.00 0.00 0.00 0.00 0.00 00:32:11.318 00:32:11.318 true 00:32:11.318 12:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 292331d5-a3fd-4dbd-8b43-9e0196b8f74e 00:32:11.318 12:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:11.579 12:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:11.579 12:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:11.579 12:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2162585 00:32:12.520 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:12.520 Nvme0n1 : 
3.00 17803.33 69.54 0.00 0.00 0.00 0.00 0.00 00:32:12.520 [2024-10-11T10:09:15.223Z] =================================================================================================================== 00:32:12.520 [2024-10-11T10:09:15.223Z] Total : 17803.33 69.54 0.00 0.00 0.00 0.00 0.00 00:32:12.520 00:32:13.461 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:13.461 Nvme0n1 : 4.00 17877.00 69.83 0.00 0.00 0.00 0.00 0.00 00:32:13.461 [2024-10-11T10:09:16.164Z] =================================================================================================================== 00:32:13.461 [2024-10-11T10:09:16.164Z] Total : 17877.00 69.83 0.00 0.00 0.00 0.00 0.00 00:32:13.461 00:32:14.403 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:14.403 Nvme0n1 : 5.00 18884.00 73.77 0.00 0.00 0.00 0.00 0.00 00:32:14.403 [2024-10-11T10:09:17.106Z] =================================================================================================================== 00:32:14.403 [2024-10-11T10:09:17.106Z] Total : 18884.00 73.77 0.00 0.00 0.00 0.00 0.00 00:32:14.403 00:32:15.344 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:15.344 Nvme0n1 : 6.00 19974.00 78.02 0.00 0.00 0.00 0.00 0.00 00:32:15.344 [2024-10-11T10:09:18.047Z] =================================================================================================================== 00:32:15.344 [2024-10-11T10:09:18.047Z] Total : 19974.00 78.02 0.00 0.00 0.00 0.00 0.00 00:32:15.344 00:32:16.287 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:16.287 Nvme0n1 : 7.00 20756.86 81.08 0.00 0.00 0.00 0.00 0.00 00:32:16.287 [2024-10-11T10:09:18.990Z] =================================================================================================================== 00:32:16.287 [2024-10-11T10:09:18.990Z] Total : 20756.86 81.08 0.00 0.00 0.00 0.00 0.00 00:32:16.287 00:32:17.227 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:17.227 Nvme0n1 : 8.00 21346.25 83.38 0.00 0.00 0.00 0.00 0.00 00:32:17.227 [2024-10-11T10:09:19.930Z] =================================================================================================================== 00:32:17.227 [2024-10-11T10:09:19.930Z] Total : 21346.25 83.38 0.00 0.00 0.00 0.00 0.00 00:32:17.227 00:32:18.611 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:18.611 Nvme0n1 : 9.00 21797.89 85.15 0.00 0.00 0.00 0.00 0.00 00:32:18.611 [2024-10-11T10:09:21.314Z] =================================================================================================================== 00:32:18.611 [2024-10-11T10:09:21.314Z] Total : 21797.89 85.15 0.00 0.00 0.00 0.00 0.00 00:32:18.611 00:32:19.552 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:19.552 Nvme0n1 : 10.00 22170.10 86.60 0.00 0.00 0.00 0.00 0.00 00:32:19.552 [2024-10-11T10:09:22.255Z] =================================================================================================================== 00:32:19.552 [2024-10-11T10:09:22.255Z] Total : 22170.10 86.60 0.00 0.00 0.00 0.00 0.00 00:32:19.552 00:32:19.552 00:32:19.552 Latency(us) 00:32:19.552 [2024-10-11T10:09:22.255Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:19.552 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:19.552 Nvme0n1 : 10.00 22172.72 86.61 0.00 0.00 5769.22 3153.92 31238.83 00:32:19.552 
[2024-10-11T10:09:22.255Z] =================================================================================================================== 00:32:19.552 [2024-10-11T10:09:22.255Z] Total : 22172.72 86.61 0.00 0.00 5769.22 3153.92 31238.83 00:32:19.552 { 00:32:19.552 "results": [ 00:32:19.552 { 00:32:19.552 "job": "Nvme0n1", 00:32:19.552 "core_mask": "0x2", 00:32:19.552 "workload": "randwrite", 00:32:19.552 "status": "finished", 00:32:19.552 "queue_depth": 128, 00:32:19.552 "io_size": 4096, 00:32:19.552 "runtime": 10.004592, 00:32:19.552 "iops": 22172.718287762258, 00:32:19.552 "mibps": 86.61218081157132, 00:32:19.552 "io_failed": 0, 00:32:19.552 "io_timeout": 0, 00:32:19.552 "avg_latency_us": 5769.215941528534, 00:32:19.552 "min_latency_us": 3153.92, 00:32:19.552 "max_latency_us": 31238.826666666668 00:32:19.552 } 00:32:19.552 ], 00:32:19.552 "core_count": 1 00:32:19.552 } 00:32:19.552 12:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2162296 00:32:19.552 12:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 2162296 ']' 00:32:19.552 12:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 2162296 00:32:19.552 12:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:32:19.552 12:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:19.552 12:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2162296 00:32:19.552 12:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:19.552 12:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:19.552 12:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2162296' 00:32:19.552 killing process with pid 2162296 00:32:19.552 12:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 2162296 00:32:19.552 Received shutdown signal, test time was about 10.000000 seconds 00:32:19.552 00:32:19.552 Latency(us) 00:32:19.552 [2024-10-11T10:09:22.255Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:19.552 [2024-10-11T10:09:22.255Z] =================================================================================================================== 00:32:19.552 [2024-10-11T10:09:22.255Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:19.552 12:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 2162296 00:32:19.552 12:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:19.812 12:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:32:19.813 12:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 292331d5-a3fd-4dbd-8b43-9e0196b8f74e 00:32:19.813 12:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:20.073 12:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:20.073 12:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:32:20.073 12:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2158006 00:32:20.073 12:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2158006 00:32:20.073 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2158006 Killed "${NVMF_APP[@]}" "$@" 00:32:20.073 12:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:32:20.073 12:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:32:20.073 12:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:20.073 12:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:20.073 12:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:20.073 12:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=2164688 00:32:20.073 12:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 2164688 00:32:20.073 12:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:32:20.073 12:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2164688 ']' 00:32:20.073 12:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:20.073 12:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:20.073 12:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:20.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:20.073 12:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:20.073 12:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:20.073 [2024-10-11 12:09:22.742487] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:20.073 [2024-10-11 12:09:22.743515] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:32:20.073 [2024-10-11 12:09:22.743562] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:20.334 [2024-10-11 12:09:22.829674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:20.334 [2024-10-11 12:09:22.862990] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:20.334 [2024-10-11 12:09:22.863022] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:20.334 [2024-10-11 12:09:22.863028] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:20.334 [2024-10-11 12:09:22.863033] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:20.334 [2024-10-11 12:09:22.863037] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:20.334 [2024-10-11 12:09:22.863506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:20.334 [2024-10-11 12:09:22.915083] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:20.334 [2024-10-11 12:09:22.915276] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
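For readability, the dirty-lvstore recovery check that the trace performs next can be condensed into the short shell sketch below. It only restates commands and expected values already visible in this run (the AIO file path, bdev UUID b6600899-ef9a-4962-beec-58457ba117b4, lvstore UUID 292331d5-a3fd-4dbd-8b43-9e0196b8f74e, free_clusters == 61 and total_data_clusters == 99); the $rpc shorthand is illustrative only and is not part of the captured output.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Re-create the AIO bdev over the file that still holds the dirty lvstore;
    # blobstore recovery replays the metadata and re-exposes lvs/lvol.
    $rpc bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
    $rpc bdev_get_bdevs -b b6600899-ef9a-4962-beec-58457ba117b4 -t 2000
    # The grown geometry must survive the dirty shutdown.
    free=$($rpc bdev_lvol_get_lvstores -u 292331d5-a3fd-4dbd-8b43-9e0196b8f74e | jq -r '.[0].free_clusters')
    total=$($rpc bdev_lvol_get_lvstores -u 292331d5-a3fd-4dbd-8b43-9e0196b8f74e | jq -r '.[0].total_data_clusters')
    (( free == 61 && total == 99 ))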
00:32:20.904 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:20.904 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:32:20.904 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:20.904 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:20.904 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:20.904 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:20.904 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:21.164 [2024-10-11 12:09:23.733564] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:32:21.164 [2024-10-11 12:09:23.733780] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:32:21.164 [2024-10-11 12:09:23.733868] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:32:21.164 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:32:21.164 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev b6600899-ef9a-4962-beec-58457ba117b4 00:32:21.164 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=b6600899-ef9a-4962-beec-58457ba117b4 00:32:21.164 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:21.164 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:32:21.164 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:21.164 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:21.164 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:21.424 12:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b6600899-ef9a-4962-beec-58457ba117b4 -t 2000 00:32:21.424 [ 00:32:21.424 { 00:32:21.424 "name": "b6600899-ef9a-4962-beec-58457ba117b4", 00:32:21.424 "aliases": [ 00:32:21.424 "lvs/lvol" 00:32:21.424 ], 00:32:21.424 "product_name": "Logical Volume", 00:32:21.425 "block_size": 4096, 00:32:21.425 "num_blocks": 38912, 00:32:21.425 "uuid": "b6600899-ef9a-4962-beec-58457ba117b4", 00:32:21.425 "assigned_rate_limits": { 00:32:21.425 "rw_ios_per_sec": 0, 00:32:21.425 "rw_mbytes_per_sec": 0, 00:32:21.425 
"r_mbytes_per_sec": 0, 00:32:21.425 "w_mbytes_per_sec": 0 00:32:21.425 }, 00:32:21.425 "claimed": false, 00:32:21.425 "zoned": false, 00:32:21.425 "supported_io_types": { 00:32:21.425 "read": true, 00:32:21.425 "write": true, 00:32:21.425 "unmap": true, 00:32:21.425 "flush": false, 00:32:21.425 "reset": true, 00:32:21.425 "nvme_admin": false, 00:32:21.425 "nvme_io": false, 00:32:21.425 "nvme_io_md": false, 00:32:21.425 "write_zeroes": true, 00:32:21.425 "zcopy": false, 00:32:21.425 "get_zone_info": false, 00:32:21.425 "zone_management": false, 00:32:21.425 "zone_append": false, 00:32:21.425 "compare": false, 00:32:21.425 "compare_and_write": false, 00:32:21.425 "abort": false, 00:32:21.425 "seek_hole": true, 00:32:21.425 "seek_data": true, 00:32:21.425 "copy": false, 00:32:21.425 "nvme_iov_md": false 00:32:21.425 }, 00:32:21.425 "driver_specific": { 00:32:21.425 "lvol": { 00:32:21.425 "lvol_store_uuid": "292331d5-a3fd-4dbd-8b43-9e0196b8f74e", 00:32:21.425 "base_bdev": "aio_bdev", 00:32:21.425 "thin_provision": false, 00:32:21.425 "num_allocated_clusters": 38, 00:32:21.425 "snapshot": false, 00:32:21.425 "clone": false, 00:32:21.425 "esnap_clone": false 00:32:21.425 } 00:32:21.425 } 00:32:21.425 } 00:32:21.425 ] 00:32:21.425 12:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:32:21.425 12:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 292331d5-a3fd-4dbd-8b43-9e0196b8f74e 00:32:21.425 12:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:32:21.685 12:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:32:21.685 12:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 292331d5-a3fd-4dbd-8b43-9e0196b8f74e 00:32:21.685 12:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:32:21.945 12:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:32:21.945 12:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:21.945 [2024-10-11 12:09:24.619974] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:21.945 12:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 292331d5-a3fd-4dbd-8b43-9e0196b8f74e 00:32:21.945 12:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:32:21.945 12:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 292331d5-a3fd-4dbd-8b43-9e0196b8f74e 00:32:21.945 12:09:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:22.205 12:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:22.205 12:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:22.205 12:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:22.205 12:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:22.205 12:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:22.205 12:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:22.205 12:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:32:22.205 12:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 292331d5-a3fd-4dbd-8b43-9e0196b8f74e 00:32:22.205 request: 00:32:22.205 { 00:32:22.205 "uuid": "292331d5-a3fd-4dbd-8b43-9e0196b8f74e", 00:32:22.205 "method": "bdev_lvol_get_lvstores", 00:32:22.205 "req_id": 1 00:32:22.205 } 00:32:22.205 Got JSON-RPC error response 00:32:22.205 response: 00:32:22.205 { 00:32:22.205 "code": -19, 00:32:22.205 "message": "No such device" 00:32:22.205 } 00:32:22.205 12:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:32:22.205 12:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:22.205 12:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:22.205 12:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:22.205 12:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:22.466 aio_bdev 00:32:22.466 12:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b6600899-ef9a-4962-beec-58457ba117b4 00:32:22.466 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=b6600899-ef9a-4962-beec-58457ba117b4 00:32:22.466 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:22.466 12:09:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:32:22.466 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:22.466 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:22.466 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:22.726 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b6600899-ef9a-4962-beec-58457ba117b4 -t 2000 00:32:22.726 [ 00:32:22.726 { 00:32:22.726 "name": "b6600899-ef9a-4962-beec-58457ba117b4", 00:32:22.726 "aliases": [ 00:32:22.726 "lvs/lvol" 00:32:22.726 ], 00:32:22.726 "product_name": "Logical Volume", 00:32:22.726 "block_size": 4096, 00:32:22.726 "num_blocks": 38912, 00:32:22.726 "uuid": "b6600899-ef9a-4962-beec-58457ba117b4", 00:32:22.726 "assigned_rate_limits": { 00:32:22.726 "rw_ios_per_sec": 0, 00:32:22.726 "rw_mbytes_per_sec": 0, 00:32:22.726 "r_mbytes_per_sec": 0, 00:32:22.726 "w_mbytes_per_sec": 0 00:32:22.726 }, 00:32:22.726 "claimed": false, 00:32:22.726 "zoned": false, 00:32:22.726 "supported_io_types": { 00:32:22.726 "read": true, 00:32:22.726 "write": true, 00:32:22.726 "unmap": true, 00:32:22.726 "flush": false, 00:32:22.726 "reset": true, 00:32:22.726 "nvme_admin": false, 00:32:22.726 "nvme_io": false, 00:32:22.726 "nvme_io_md": false, 00:32:22.726 "write_zeroes": true, 00:32:22.726 "zcopy": false, 00:32:22.726 "get_zone_info": false, 00:32:22.726 "zone_management": false, 00:32:22.726 "zone_append": false, 00:32:22.726 "compare": false, 00:32:22.726 "compare_and_write": false, 00:32:22.726 "abort": false, 00:32:22.726 "seek_hole": true, 00:32:22.726 "seek_data": true, 00:32:22.726 "copy": false, 00:32:22.726 "nvme_iov_md": false 00:32:22.726 }, 00:32:22.726 "driver_specific": { 00:32:22.726 "lvol": { 00:32:22.726 "lvol_store_uuid": "292331d5-a3fd-4dbd-8b43-9e0196b8f74e", 00:32:22.726 "base_bdev": "aio_bdev", 00:32:22.726 "thin_provision": false, 00:32:22.726 "num_allocated_clusters": 38, 00:32:22.726 "snapshot": false, 00:32:22.726 "clone": false, 00:32:22.726 "esnap_clone": false 00:32:22.726 } 00:32:22.727 } 00:32:22.727 } 00:32:22.727 ] 00:32:22.727 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:32:22.727 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 292331d5-a3fd-4dbd-8b43-9e0196b8f74e 00:32:22.727 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:22.987 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:22.987 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 292331d5-a3fd-4dbd-8b43-9e0196b8f74e 00:32:22.987 12:09:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:23.249 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:23.249 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b6600899-ef9a-4962-beec-58457ba117b4 00:32:23.249 12:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 292331d5-a3fd-4dbd-8b43-9e0196b8f74e 00:32:23.509 12:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:23.769 12:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:23.769 00:32:23.769 real 0m17.685s 00:32:23.769 user 0m35.476s 00:32:23.769 sys 0m3.175s 00:32:23.769 12:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:23.769 12:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:23.769 ************************************ 00:32:23.769 END TEST lvs_grow_dirty 00:32:23.769 ************************************ 00:32:23.770 12:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:32:23.770 12:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:32:23.770 12:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:32:23.770 12:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:32:23.770 12:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:32:23.770 12:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:32:23.770 12:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:32:23.770 12:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:32:23.770 12:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:32:23.770 nvmf_trace.0 00:32:23.770 12:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:32:23.770 12:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:32:23.770 12:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:23.770 12:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
00:32:23.770 12:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:23.770 12:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:32:23.770 12:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:23.770 12:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:23.770 rmmod nvme_tcp 00:32:23.770 rmmod nvme_fabrics 00:32:23.770 rmmod nvme_keyring 00:32:23.770 12:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:23.770 12:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:32:23.770 12:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:32:23.770 12:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 2164688 ']' 00:32:23.770 12:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 2164688 00:32:23.770 12:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 2164688 ']' 00:32:23.770 12:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 2164688 00:32:23.770 12:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:32:23.770 12:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:23.770 12:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2164688 00:32:24.032 12:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:24.032 12:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:24.032 12:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2164688' 00:32:24.032 killing process with pid 2164688 00:32:24.032 12:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 2164688 00:32:24.032 12:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 2164688 00:32:24.032 12:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:24.032 12:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:24.032 12:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:24.032 12:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:32:24.032 12:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:32:24.032 12:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:24.032 12:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:32:24.032 12:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:24.032 12:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:24.032 12:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:24.032 12:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:24.032 12:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:26.578 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:26.578 00:32:26.578 real 0m45.266s 00:32:26.578 user 0m54.034s 00:32:26.578 sys 0m11.057s 00:32:26.578 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:26.578 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:26.578 ************************************ 00:32:26.578 END TEST nvmf_lvs_grow 00:32:26.578 ************************************ 00:32:26.578 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:26.578 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:26.578 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:26.578 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:26.578 ************************************ 00:32:26.578 START TEST nvmf_bdev_io_wait 00:32:26.578 ************************************ 00:32:26.578 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:26.578 * Looking for test storage... 
00:32:26.578 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:26.578 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:26.578 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:32:26.578 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:26.578 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:26.578 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:26.578 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:26.578 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:26.578 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:32:26.578 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:32:26.578 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:32:26.578 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:32:26.578 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:32:26.578 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:32:26.578 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:32:26.578 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:26.578 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:32:26.578 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:32:26.578 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:26.578 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:26.578 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:32:26.578 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:32:26.578 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:26.578 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:32:26.578 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:32:26.578 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:32:26.578 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:32:26.578 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:26.578 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:32:26.578 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:32:26.578 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:26.578 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:26.578 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:32:26.578 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:26.578 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:26.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:26.578 --rc genhtml_branch_coverage=1 00:32:26.578 --rc genhtml_function_coverage=1 00:32:26.578 --rc genhtml_legend=1 00:32:26.578 --rc geninfo_all_blocks=1 00:32:26.578 --rc geninfo_unexecuted_blocks=1 00:32:26.578 00:32:26.578 ' 00:32:26.578 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:26.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:26.579 --rc genhtml_branch_coverage=1 00:32:26.579 --rc genhtml_function_coverage=1 00:32:26.579 --rc genhtml_legend=1 00:32:26.579 --rc geninfo_all_blocks=1 00:32:26.579 --rc geninfo_unexecuted_blocks=1 00:32:26.579 00:32:26.579 ' 00:32:26.579 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:26.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:26.579 --rc genhtml_branch_coverage=1 00:32:26.579 --rc genhtml_function_coverage=1 00:32:26.579 --rc genhtml_legend=1 00:32:26.579 --rc geninfo_all_blocks=1 00:32:26.579 --rc geninfo_unexecuted_blocks=1 00:32:26.579 00:32:26.579 ' 00:32:26.579 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:26.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:26.579 --rc genhtml_branch_coverage=1 00:32:26.579 --rc genhtml_function_coverage=1 00:32:26.579 --rc genhtml_legend=1 00:32:26.579 --rc geninfo_all_blocks=1 00:32:26.579 --rc 
geninfo_unexecuted_blocks=1 00:32:26.579 00:32:26.579 ' 00:32:26.579 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:26.579 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:32:26.579 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:26.579 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:26.579 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:26.579 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:26.579 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:26.579 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:26.579 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:26.579 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:26.579 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:26.579 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:26.579 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:26.579 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:26.579 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:26.579 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:26.579 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:26.579 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:26.579 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:26.579 12:09:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:32:26.579 12:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:26.579 12:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:26.579 12:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:26.579 12:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.579 12:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.579 12:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.579 12:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:32:26.579 12:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.579 12:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:32:26.579 12:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:26.579 12:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:26.579 12:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:26.579 12:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:26.579 12:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:32:26.579 12:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:26.579 12:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:26.579 12:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:26.579 12:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:26.579 12:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:26.579 12:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:26.579 12:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:26.579 12:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:32:26.579 12:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:26.579 12:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:26.579 12:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:26.579 12:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:26.579 12:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:26.579 12:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:26.579 12:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:26.579 12:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:26.579 12:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:26.579 12:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:26.579 12:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:32:26.579 12:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:34.728 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:34.728 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:32:34.728 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:34.728 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:34.728 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:34.728 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:34.728 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:32:34.728 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:32:34.728 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:34.728 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:32:34.728 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:32:34.728 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:32:34.728 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:32:34.728 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:32:34.728 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:32:34.728 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:34.728 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:34.728 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:34.728 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:34.728 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:34.728 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:34.729 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:34.729 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:34.729 Found net devices under 0000:31:00.0: cvl_0_0 00:32:34.729 
12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:34.729 Found net devices under 0000:31:00.1: cvl_0_1 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:34.729 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:34.729 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.583 ms 00:32:34.729 00:32:34.729 --- 10.0.0.2 ping statistics --- 00:32:34.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:34.729 rtt min/avg/max/mdev = 0.583/0.583/0.583/0.000 ms 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:34.729 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:34.729 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:32:34.729 00:32:34.729 --- 10.0.0.1 ping statistics --- 00:32:34.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:34.729 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=2169636 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 2169636 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:32:34.729 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 2169636 ']' 00:32:34.730 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:34.730 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:34.730 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:34.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
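The nvmf_tcp_init plumbing traced above builds a small two-port loop: one port stays in the root namespace as the initiator side, the other is pushed into a fresh network namespace and becomes the target side. Condensed to plain commands (the cvl_0_* interface names and the namespace name are simply what this run picked up; on other hardware they will differ), the setup is roughly:

  ip -4 addr flush cvl_0_0                       # start both ports from a clean slate
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                   # target side gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator IP in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                             # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # and back

The comment tag on the iptables rule is what lets the teardown later strip only the rules this test added.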
00:32:34.730 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:34.730 12:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:34.730 [2024-10-11 12:09:36.786013] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:34.730 [2024-10-11 12:09:36.787125] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:32:34.730 [2024-10-11 12:09:36.787175] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:34.730 [2024-10-11 12:09:36.877884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:34.730 [2024-10-11 12:09:36.932923] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:34.730 [2024-10-11 12:09:36.932974] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:34.730 [2024-10-11 12:09:36.932983] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:34.730 [2024-10-11 12:09:36.932993] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:34.730 [2024-10-11 12:09:36.932999] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:34.730 [2024-10-11 12:09:36.935494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:34.730 [2024-10-11 12:09:36.935655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:34.730 [2024-10-11 12:09:36.935816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:34.730 [2024-10-11 12:09:36.935817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:34.730 [2024-10-11 12:09:36.936175] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
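With the interfaces in place, the target application is started inside the namespace with --wait-for-rpc, so it comes up with its reactors running but its subsystems not yet initialized. A minimal way to reproduce that step by hand, assuming an SPDK build tree as the working directory and using rpc_get_methods purely as a cheap liveness probe (the harness's waitforlisten helper does something equivalent), is:

  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
  nvmfpid=$!
  # Block until the UNIX-domain RPC socket answers.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done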
00:32:34.991 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:34.991 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:32:34.991 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:34.991 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:34.991 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:34.991 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:34.991 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:32:34.991 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.991 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:34.991 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.991 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:32:34.991 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.991 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:35.252 [2024-10-11 12:09:37.716345] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:35.252 [2024-10-11 12:09:37.717082] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:35.252 [2024-10-11 12:09:37.717113] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:35.252 [2024-10-11 12:09:37.717275] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
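The reason for --wait-for-rpc shows up immediately: bdev_set_options has to land before framework_start_init, because the bdev_io pool is sized while the subsystems initialize. The -p 5 -c 1 pair deliberately shrinks that pool to a handful of bdev_io structures so that the queued-IO path this test is named after actually triggers under a 128-deep workload. Outside the rpc_cmd wrapper, the same two calls look roughly like:

  ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_set_options -p 5 -c 1   # tiny bdev_io pool/cache, applied pre-init
  ./scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init         # now finish bringing the subsystems up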
00:32:35.252 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.252 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:35.252 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.252 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:35.252 [2024-10-11 12:09:37.728686] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:35.252 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.252 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:35.252 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.252 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:35.252 Malloc0 00:32:35.252 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.252 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:35.252 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.252 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:35.252 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.252 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:35.252 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.252 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:35.252 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.252 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:35.252 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.252 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:35.252 [2024-10-11 12:09:37.800912] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:35.252 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.252 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2169840 00:32:35.252 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2169842 00:32:35.252 12:09:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:32:35.252 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:32:35.252 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:32:35.252 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:32:35.252 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:35.252 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:35.252 { 00:32:35.252 "params": { 00:32:35.252 "name": "Nvme$subsystem", 00:32:35.252 "trtype": "$TEST_TRANSPORT", 00:32:35.252 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:35.253 "adrfam": "ipv4", 00:32:35.253 "trsvcid": "$NVMF_PORT", 00:32:35.253 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:35.253 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:35.253 "hdgst": ${hdgst:-false}, 00:32:35.253 "ddgst": ${ddgst:-false} 00:32:35.253 }, 00:32:35.253 "method": "bdev_nvme_attach_controller" 00:32:35.253 } 00:32:35.253 EOF 00:32:35.253 )") 00:32:35.253 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2169844 00:32:35.253 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:32:35.253 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:32:35.253 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:32:35.253 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:32:35.253 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:35.253 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2169847 00:32:35.253 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:35.253 { 00:32:35.253 "params": { 00:32:35.253 "name": "Nvme$subsystem", 00:32:35.253 "trtype": "$TEST_TRANSPORT", 00:32:35.253 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:35.253 "adrfam": "ipv4", 00:32:35.253 "trsvcid": "$NVMF_PORT", 00:32:35.253 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:35.253 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:35.253 "hdgst": ${hdgst:-false}, 00:32:35.253 "ddgst": ${ddgst:-false} 00:32:35.253 }, 00:32:35.253 "method": "bdev_nvme_attach_controller" 00:32:35.253 } 00:32:35.253 EOF 00:32:35.253 )") 00:32:35.253 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:32:35.253 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 
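The bdevperf clients being assembled here attach to a target that was provisioned entirely over RPC a moment earlier. Strung together (values exactly as used in this run), that provisioning is equivalent to:

  ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0      # 64 MiB bdev, 512-byte blocks
  ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener is up on 10.0.0.2:4420, the four bdevperf instances (write, read, flush, unmap on core masks 0x10/0x20/0x40/0x80) connect to that subsystem as NVMe-oF initiators from the root namespace.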
00:32:35.253 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:32:35.253 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:32:35.253 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:32:35.253 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:32:35.253 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:35.253 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:35.253 { 00:32:35.253 "params": { 00:32:35.253 "name": "Nvme$subsystem", 00:32:35.253 "trtype": "$TEST_TRANSPORT", 00:32:35.253 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:35.253 "adrfam": "ipv4", 00:32:35.253 "trsvcid": "$NVMF_PORT", 00:32:35.253 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:35.253 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:35.253 "hdgst": ${hdgst:-false}, 00:32:35.253 "ddgst": ${ddgst:-false} 00:32:35.253 }, 00:32:35.253 "method": "bdev_nvme_attach_controller" 00:32:35.253 } 00:32:35.253 EOF 00:32:35.253 )") 00:32:35.253 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:32:35.253 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:32:35.253 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:32:35.253 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:32:35.253 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:32:35.253 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:35.253 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:35.253 { 00:32:35.253 "params": { 00:32:35.253 "name": "Nvme$subsystem", 00:32:35.253 "trtype": "$TEST_TRANSPORT", 00:32:35.253 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:35.253 "adrfam": "ipv4", 00:32:35.253 "trsvcid": "$NVMF_PORT", 00:32:35.253 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:35.253 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:35.253 "hdgst": ${hdgst:-false}, 00:32:35.253 "ddgst": ${ddgst:-false} 00:32:35.253 }, 00:32:35.253 "method": "bdev_nvme_attach_controller" 00:32:35.253 } 00:32:35.253 EOF 00:32:35.253 )") 00:32:35.253 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:32:35.253 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2169840 00:32:35.253 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:32:35.253 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:32:35.253 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
00:32:35.253 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:32:35.253 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:32:35.253 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:32:35.253 "params": { 00:32:35.253 "name": "Nvme1", 00:32:35.253 "trtype": "tcp", 00:32:35.253 "traddr": "10.0.0.2", 00:32:35.253 "adrfam": "ipv4", 00:32:35.253 "trsvcid": "4420", 00:32:35.253 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:35.253 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:35.253 "hdgst": false, 00:32:35.253 "ddgst": false 00:32:35.253 }, 00:32:35.253 "method": "bdev_nvme_attach_controller" 00:32:35.253 }' 00:32:35.253 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:32:35.253 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:32:35.253 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:32:35.253 "params": { 00:32:35.253 "name": "Nvme1", 00:32:35.253 "trtype": "tcp", 00:32:35.253 "traddr": "10.0.0.2", 00:32:35.253 "adrfam": "ipv4", 00:32:35.253 "trsvcid": "4420", 00:32:35.253 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:35.253 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:35.253 "hdgst": false, 00:32:35.253 "ddgst": false 00:32:35.253 }, 00:32:35.253 "method": "bdev_nvme_attach_controller" 00:32:35.253 }' 00:32:35.253 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:32:35.253 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:32:35.253 "params": { 00:32:35.253 "name": "Nvme1", 00:32:35.253 "trtype": "tcp", 00:32:35.253 "traddr": "10.0.0.2", 00:32:35.253 "adrfam": "ipv4", 00:32:35.253 "trsvcid": "4420", 00:32:35.253 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:35.253 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:35.253 "hdgst": false, 00:32:35.253 "ddgst": false 00:32:35.253 }, 00:32:35.253 "method": "bdev_nvme_attach_controller" 00:32:35.253 }' 00:32:35.253 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:32:35.253 12:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:32:35.253 "params": { 00:32:35.253 "name": "Nvme1", 00:32:35.253 "trtype": "tcp", 00:32:35.253 "traddr": "10.0.0.2", 00:32:35.253 "adrfam": "ipv4", 00:32:35.253 "trsvcid": "4420", 00:32:35.253 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:35.253 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:35.253 "hdgst": false, 00:32:35.253 "ddgst": false 00:32:35.253 }, 00:32:35.253 "method": "bdev_nvme_attach_controller" 00:32:35.253 }' 00:32:35.253 [2024-10-11 12:09:37.860622] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:32:35.253 [2024-10-11 12:09:37.860684] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:32:35.253 [2024-10-11 12:09:37.863151] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:32:35.253 [2024-10-11 12:09:37.863211] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:32:35.253 [2024-10-11 12:09:37.869033] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:32:35.253 [2024-10-11 12:09:37.869106] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:32:35.253 [2024-10-11 12:09:37.871069] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:32:35.253 [2024-10-11 12:09:37.871169] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:32:35.514 [2024-10-11 12:09:38.040790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:35.514 [2024-10-11 12:09:38.075037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:35.514 [2024-10-11 12:09:38.103193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:35.514 [2024-10-11 12:09:38.138402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:32:35.514 [2024-10-11 12:09:38.160424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:35.514 [2024-10-11 12:09:38.198662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:35.775 [2024-10-11 12:09:38.253321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:35.775 [2024-10-11 12:09:38.295892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:35.775 Running I/O for 1 seconds... 00:32:35.775 Running I/O for 1 seconds... 00:32:35.775 Running I/O for 1 seconds... 00:32:36.036 Running I/O for 1 seconds... 
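Each bdevperf instance receives its NVMe-oF attachment as a JSON config handed over on /dev/fd/63 via process substitution; the params blocks printed above are the interesting part, and gen_nvmf_target_json wraps them in a bdev-subsystem config. A standalone sketch of the write-workload invocation follows; the outer subsystems/bdev wrapper is an assumed shape, not copied from that helper:

  json='{
    "subsystems": [ {
      "subsystem": "bdev",
      "config": [ {
        "method": "bdev_nvme_attach_controller",
        "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                    "adrfam": "ipv4", "trsvcid": "4420",
                    "subnqn": "nqn.2016-06.io.spdk:cnode1",
                    "hostnqn": "nqn.2016-06.io.spdk:host1",
                    "hdgst": false, "ddgst": false } } ]
    } ]
  }'
  # 128-deep, 4 KiB writes for 1 second on core 4 (mask 0x10), 256 MiB of memory
  ./build/examples/bdevperf -m 0x10 -i 1 --json <(echo "$json") -q 128 -o 4096 -w write -t 1 -s 256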
00:32:36.978 188608.00 IOPS, 736.75 MiB/s 00:32:36.978 Latency(us) 00:32:36.978 [2024-10-11T10:09:39.681Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:36.978 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:32:36.978 Nvme1n1 : 1.00 188227.87 735.27 0.00 0.00 676.00 298.67 1979.73 00:32:36.978 [2024-10-11T10:09:39.681Z] =================================================================================================================== 00:32:36.978 [2024-10-11T10:09:39.681Z] Total : 188227.87 735.27 0.00 0.00 676.00 298.67 1979.73 00:32:36.978 6957.00 IOPS, 27.18 MiB/s 00:32:36.978 Latency(us) 00:32:36.978 [2024-10-11T10:09:39.681Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:36.978 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:32:36.978 Nvme1n1 : 1.02 6967.70 27.22 0.00 0.00 18226.73 2211.84 30801.92 00:32:36.978 [2024-10-11T10:09:39.681Z] =================================================================================================================== 00:32:36.978 [2024-10-11T10:09:39.681Z] Total : 6967.70 27.22 0.00 0.00 18226.73 2211.84 30801.92 00:32:36.978 12018.00 IOPS, 46.95 MiB/s 00:32:36.978 Latency(us) 00:32:36.978 [2024-10-11T10:09:39.681Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:36.978 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:32:36.978 Nvme1n1 : 1.01 12076.33 47.17 0.00 0.00 10561.72 5406.72 16165.55 00:32:36.978 [2024-10-11T10:09:39.681Z] =================================================================================================================== 00:32:36.978 [2024-10-11T10:09:39.681Z] Total : 12076.33 47.17 0.00 0.00 10561.72 5406.72 16165.55 00:32:36.978 6723.00 IOPS, 26.26 MiB/s 00:32:36.978 Latency(us) 00:32:36.978 [2024-10-11T10:09:39.681Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:36.978 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:32:36.978 Nvme1n1 : 1.01 6814.06 26.62 0.00 0.00 18730.99 4205.23 36700.16 00:32:36.978 [2024-10-11T10:09:39.681Z] =================================================================================================================== 00:32:36.978 [2024-10-11T10:09:39.681Z] Total : 6814.06 26.62 0.00 0.00 18730.99 4205.23 36700.16 00:32:36.978 12:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2169842 00:32:36.978 12:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2169844 00:32:36.978 12:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2169847 00:32:36.978 12:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:36.978 12:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.978 12:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:36.978 12:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.978 12:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:32:36.978 12:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:32:36.978 12:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:36.978 12:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:32:36.978 12:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:36.978 12:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:32:36.978 12:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:36.978 12:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:36.978 rmmod nvme_tcp 00:32:37.239 rmmod nvme_fabrics 00:32:37.239 rmmod nvme_keyring 00:32:37.239 12:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:37.239 12:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:32:37.239 12:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:32:37.239 12:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 2169636 ']' 00:32:37.239 12:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 2169636 00:32:37.239 12:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 2169636 ']' 00:32:37.239 12:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 2169636 00:32:37.239 12:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:32:37.239 12:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:37.239 12:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2169636 00:32:37.239 12:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:37.239 12:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:37.239 12:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2169636' 00:32:37.239 killing process with pid 2169636 00:32:37.239 12:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 2169636 00:32:37.239 12:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 2169636 00:32:37.239 12:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:37.239 12:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:37.239 12:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:37.239 12:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:32:37.500 12:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 
00:32:37.500 12:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:37.500 12:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:32:37.500 12:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:37.500 12:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:37.500 12:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:37.500 12:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:37.500 12:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:39.413 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:39.413 00:32:39.413 real 0m13.253s 00:32:39.413 user 0m15.622s 00:32:39.413 sys 0m7.805s 00:32:39.413 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:39.413 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:39.413 ************************************ 00:32:39.413 END TEST nvmf_bdev_io_wait 00:32:39.413 ************************************ 00:32:39.413 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:32:39.413 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:39.413 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:39.413 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:39.675 ************************************ 00:32:39.675 START TEST nvmf_queue_depth 00:32:39.675 ************************************ 00:32:39.675 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:32:39.675 * Looking for test storage... 
00:32:39.675 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:39.675 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:39.675 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:32:39.675 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:39.675 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:39.675 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:39.675 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:39.675 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:39.675 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:32:39.675 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:32:39.675 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:32:39.675 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:32:39.675 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:32:39.675 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:32:39.675 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:32:39.675 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:39.675 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:32:39.675 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:32:39.675 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:39.675 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:39.675 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:32:39.675 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:32:39.675 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:39.675 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:32:39.675 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:32:39.675 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:32:39.675 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:32:39.675 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:39.675 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:32:39.675 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:32:39.675 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:39.675 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:39.675 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:32:39.675 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:39.675 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:39.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.675 --rc genhtml_branch_coverage=1 00:32:39.675 --rc genhtml_function_coverage=1 00:32:39.675 --rc genhtml_legend=1 00:32:39.675 --rc geninfo_all_blocks=1 00:32:39.675 --rc geninfo_unexecuted_blocks=1 00:32:39.675 00:32:39.675 ' 00:32:39.675 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:39.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.675 --rc genhtml_branch_coverage=1 00:32:39.675 --rc genhtml_function_coverage=1 00:32:39.675 --rc genhtml_legend=1 00:32:39.675 --rc geninfo_all_blocks=1 00:32:39.675 --rc geninfo_unexecuted_blocks=1 00:32:39.675 00:32:39.675 ' 00:32:39.675 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:39.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.675 --rc genhtml_branch_coverage=1 00:32:39.675 --rc genhtml_function_coverage=1 00:32:39.675 --rc genhtml_legend=1 00:32:39.675 --rc geninfo_all_blocks=1 00:32:39.675 --rc geninfo_unexecuted_blocks=1 00:32:39.675 00:32:39.675 ' 00:32:39.675 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:39.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.675 --rc genhtml_branch_coverage=1 00:32:39.675 --rc genhtml_function_coverage=1 00:32:39.675 --rc genhtml_legend=1 00:32:39.675 --rc geninfo_all_blocks=1 00:32:39.675 --rc 
geninfo_unexecuted_blocks=1 00:32:39.675 00:32:39.675 ' 00:32:39.675 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:39.675 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:32:39.675 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:39.675 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:39.675 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:39.675 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:39.675 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:39.675 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:39.675 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:39.675 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:39.676 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:39.676 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:39.676 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:39.676 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:39.676 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:39.676 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:39.676 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:39.676 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:39.676 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:39.676 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:32:39.676 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:39.676 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:39.676 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:39.676 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.676 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.676 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.676 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:32:39.676 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.676 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:32:39.676 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:39.676 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:39.676 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:39.676 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:39.676 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:32:39.676 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:39.676 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:39.676 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:39.676 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:39.676 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:39.676 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:32:39.676 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:32:39.676 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:39.676 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:32:39.676 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:39.676 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:39.676 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:39.676 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:39.676 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:39.676 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:39.676 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:39.676 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:39.676 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:39.676 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:39.676 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:32:39.676 12:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:47.814 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:47.814 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:32:47.814 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:47.814 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:47.814 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:47.814 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
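From here the queue_depth test repeats the NIC discovery already seen at the top of the bdev_io_wait run: gather_supported_nvmf_pci_devs matches the known Intel/Mellanox device IDs out of the PCI bus cache, then resolves each function to its kernel interface through sysfs. Stripped of harness plumbing, the per-device resolution visible in the trace is essentially the loop below (a sketch; the real loop also filters on transport type and link state):

  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # netdev directory the kernel created for this function
      pci_net_devs=("${pci_net_devs[@]##*/}")            # keep just the interface names, e.g. cvl_0_0
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
  done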
00:32:47.814 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:47.814 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:32:47.814 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:47.814 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:32:47.814 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:32:47.814 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:32:47.814 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:32:47.814 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:32:47.814 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:32:47.814 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:47.814 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:47.814 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:47.814 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:47.814 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:47.814 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:47.814 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:47.814 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:47.814 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:47.814 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:47.814 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:47.814 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:47.814 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:47.814 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:47.814 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:47.814 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:47.814 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:47.814 12:09:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:47.814 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:47.814 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:47.814 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:47.814 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:47.814 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:47.814 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:47.814 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:47.815 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 
00:32:47.815 Found net devices under 0000:31:00.0: cvl_0_0 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:47.815 Found net devices under 0000:31:00.1: cvl_0_1 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:47.815 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:47.815 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.710 ms 00:32:47.815 00:32:47.815 --- 10.0.0.2 ping statistics --- 00:32:47.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:47.815 rtt min/avg/max/mdev = 0.710/0.710/0.710/0.000 ms 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:47.815 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:47.815 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:32:47.815 00:32:47.815 --- 10.0.0.1 ping statistics --- 00:32:47.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:47.815 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:47.815 12:09:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:47.815 12:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:32:47.815 12:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:47.815 12:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:47.815 12:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:47.815 12:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=2174589 00:32:47.815 12:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 2174589 00:32:47.815 12:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:32:47.815 12:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2174589 ']' 00:32:47.815 12:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:47.815 12:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:47.815 12:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:47.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
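The trace above is nvmf_tcp_init building a loopback topology out of the two E810 ports it just discovered: cvl_0_0 is moved into a private network namespace and becomes the target side (10.0.0.2/24), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1/24), an iptables rule opens TCP port 4420, and a ping in each direction confirms the path. Condensed into plain shell (interface names and addresses are the ones this run reports, not universal defaults):

NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                   # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                # root namespace -> target namespace
ip netns exec "$NS" ping -c 1 10.0.0.1            # target namespace -> root namespace

The SPDK_NVMF comment tag on the rule is what lets the teardown later restore iptables by filtering on that tag (iptables-save | grep -v SPDK_NVMF | iptables-restore) instead of flushing the whole table.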
00:32:47.815 12:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:47.815 12:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:47.815 [2024-10-11 12:09:50.089401] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:47.815 [2024-10-11 12:09:50.090569] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:32:47.815 [2024-10-11 12:09:50.090620] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:47.815 [2024-10-11 12:09:50.186016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:47.815 [2024-10-11 12:09:50.236884] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:47.815 [2024-10-11 12:09:50.236943] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:47.815 [2024-10-11 12:09:50.236951] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:47.815 [2024-10-11 12:09:50.236958] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:47.815 [2024-10-11 12:09:50.236964] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:47.815 [2024-10-11 12:09:50.237789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:47.815 [2024-10-11 12:09:50.312951] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:47.815 [2024-10-11 12:09:50.313242] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
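With the namespace in place, nvmfappstart launches the target inside it in interrupt mode on core 1 (-m 0x2) with the full tracepoint mask, then waits for the RPC socket; the notices above confirm interrupt mode and the reactor coming up. A minimal equivalent sketch, with paths relative to the SPDK tree and a polling loop standing in for the autotest waitforlisten helper (an assumption, not the helper itself):

ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
nvmfpid=$!
# poll until the app answers on its default RPC socket before configuring it
# (stand-in for waitforlisten; the real helper is in autotest_common.sh)
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
done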
00:32:48.387 12:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:48.387 12:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:32:48.387 12:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:48.387 12:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:48.387 12:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:48.387 12:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:48.387 12:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:48.387 12:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.387 12:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:48.387 [2024-10-11 12:09:50.954667] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:48.387 12:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.387 12:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:48.387 12:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.387 12:09:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:48.387 Malloc0 00:32:48.387 12:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.387 12:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:48.387 12:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.387 12:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:48.387 12:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.387 12:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:48.387 12:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.387 12:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:48.387 12:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.387 12:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:48.387 12:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
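The rpc_cmd calls at queue_depth.sh@23-27 above then configure the target over that socket: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, a subsystem, its namespace, and a listener on the namespace-side address. rpc_cmd here wraps SPDK's JSON-RPC client, so the same sequence issued directly with scripts/rpc.py looks roughly like this (the transport options are reproduced verbatim from the trace):

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0          # 64 MiB backing bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420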
00:32:48.387 12:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:48.387 [2024-10-11 12:09:51.034781] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:48.387 12:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.387 12:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2174635 00:32:48.387 12:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:48.387 12:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:32:48.387 12:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2174635 /var/tmp/bdevperf.sock 00:32:48.387 12:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2174635 ']' 00:32:48.387 12:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:48.387 12:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:48.387 12:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:48.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:48.387 12:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:48.387 12:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:48.648 [2024-10-11 12:09:51.100616] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
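The queue-depth exercise itself follows: bdevperf is started in wait mode (-z) with its own RPC socket, told to attach an NVMe-oF controller to the listener just created, and then driven for 10 seconds of 4096-byte verify I/O at queue depth 1024 via perform_tests; the per-second IOPS samples and the latency summary appear further down. The initiator side, condensed from the trace that follows (paths relative to the SPDK tree):

./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
bdevperf_pid=$!
# once bdevperf answers on its socket, point a bdev_nvme controller at the target ...
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# ... and run the workload that was configured on the command line above
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests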
00:32:48.648 [2024-10-11 12:09:51.100685] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2174635 ] 00:32:48.648 [2024-10-11 12:09:51.185708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:48.648 [2024-10-11 12:09:51.239197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:49.221 12:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:49.221 12:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:32:49.221 12:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:49.221 12:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.221 12:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:49.481 NVMe0n1 00:32:49.481 12:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.481 12:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:49.481 Running I/O for 10 seconds... 00:32:51.806 9216.00 IOPS, 36.00 MiB/s [2024-10-11T10:09:55.549Z] 9227.00 IOPS, 36.04 MiB/s [2024-10-11T10:09:56.503Z] 9902.33 IOPS, 38.68 MiB/s [2024-10-11T10:09:57.445Z] 10752.25 IOPS, 42.00 MiB/s [2024-10-11T10:09:58.387Z] 11306.40 IOPS, 44.17 MiB/s [2024-10-11T10:09:59.328Z] 11777.17 IOPS, 46.00 MiB/s [2024-10-11T10:10:00.269Z] 12081.00 IOPS, 47.19 MiB/s [2024-10-11T10:10:01.211Z] 12333.12 IOPS, 48.18 MiB/s [2024-10-11T10:10:02.595Z] 12538.22 IOPS, 48.98 MiB/s [2024-10-11T10:10:02.595Z] 12714.70 IOPS, 49.67 MiB/s 00:32:59.892 Latency(us) 00:32:59.892 [2024-10-11T10:10:02.595Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:59.892 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:32:59.892 Verification LBA range: start 0x0 length 0x4000 00:32:59.892 NVMe0n1 : 10.06 12742.00 49.77 0.00 0.00 80107.71 24248.32 64225.28 00:32:59.892 [2024-10-11T10:10:02.595Z] =================================================================================================================== 00:32:59.892 [2024-10-11T10:10:02.595Z] Total : 12742.00 49.77 0.00 0.00 80107.71 24248.32 64225.28 00:32:59.892 { 00:32:59.892 "results": [ 00:32:59.892 { 00:32:59.892 "job": "NVMe0n1", 00:32:59.892 "core_mask": "0x1", 00:32:59.892 "workload": "verify", 00:32:59.892 "status": "finished", 00:32:59.892 "verify_range": { 00:32:59.892 "start": 0, 00:32:59.892 "length": 16384 00:32:59.892 }, 00:32:59.892 "queue_depth": 1024, 00:32:59.892 "io_size": 4096, 00:32:59.892 "runtime": 10.058938, 00:32:59.892 "iops": 12742.001193366536, 00:32:59.892 "mibps": 49.77344216158803, 00:32:59.892 "io_failed": 0, 00:32:59.892 "io_timeout": 0, 00:32:59.892 "avg_latency_us": 80107.70921388874, 00:32:59.892 "min_latency_us": 24248.32, 00:32:59.892 "max_latency_us": 64225.28 00:32:59.892 } 00:32:59.892 ], 
00:32:59.892 "core_count": 1 00:32:59.892 } 00:32:59.892 12:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2174635 00:32:59.892 12:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2174635 ']' 00:32:59.892 12:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2174635 00:32:59.892 12:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:32:59.892 12:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:59.892 12:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2174635 00:32:59.892 12:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:59.892 12:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:59.892 12:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2174635' 00:32:59.892 killing process with pid 2174635 00:32:59.892 12:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2174635 00:32:59.892 Received shutdown signal, test time was about 10.000000 seconds 00:32:59.892 00:32:59.892 Latency(us) 00:32:59.892 [2024-10-11T10:10:02.595Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:59.892 [2024-10-11T10:10:02.595Z] =================================================================================================================== 00:32:59.892 [2024-10-11T10:10:02.595Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:59.892 12:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2174635 00:32:59.892 12:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:32:59.892 12:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:32:59.892 12:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:59.892 12:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:32:59.892 12:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:59.892 12:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:32:59.892 12:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:59.892 12:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:59.892 rmmod nvme_tcp 00:32:59.892 rmmod nvme_fabrics 00:32:59.892 rmmod nvme_keyring 00:32:59.892 12:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:59.892 12:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:32:59.892 12:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:32:59.892 12:10:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 2174589 ']' 00:32:59.892 12:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 2174589 00:32:59.892 12:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2174589 ']' 00:32:59.892 12:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2174589 00:32:59.892 12:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:32:59.892 12:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:59.892 12:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2174589 00:32:59.892 12:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:59.892 12:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:59.892 12:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2174589' 00:32:59.892 killing process with pid 2174589 00:32:59.892 12:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2174589 00:32:59.892 12:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2174589 00:33:00.152 12:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:00.152 12:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:00.152 12:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:00.152 12:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:33:00.152 12:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:33:00.152 12:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:00.152 12:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:33:00.152 12:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:00.152 12:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:00.152 12:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:00.153 12:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:00.153 12:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:02.062 12:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:02.062 00:33:02.062 real 0m22.641s 00:33:02.062 user 0m24.904s 00:33:02.062 sys 0m7.361s 00:33:02.062 12:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:33:02.062 12:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:02.062 ************************************ 00:33:02.062 END TEST nvmf_queue_depth 00:33:02.062 ************************************ 00:33:02.322 12:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:33:02.322 12:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:33:02.322 12:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:02.322 12:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:02.322 ************************************ 00:33:02.322 START TEST nvmf_target_multipath 00:33:02.322 ************************************ 00:33:02.322 12:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:33:02.322 * Looking for test storage... 00:33:02.322 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:02.322 12:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:02.322 12:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:33:02.322 12:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:02.322 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:02.322 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:02.322 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:02.322 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:02.323 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:33:02.323 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:33:02.323 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:33:02.323 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:33:02.323 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:33:02.323 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:33:02.323 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:33:02.323 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:02.323 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:33:02.323 12:10:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:33:02.323 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:02.323 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:02.323 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:33:02.584 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:33:02.584 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:02.584 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:33:02.584 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:33:02.584 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:33:02.584 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:33:02.584 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:02.584 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:33:02.584 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:33:02.584 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:02.584 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:02.584 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:33:02.584 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:02.584 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:02.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.584 --rc genhtml_branch_coverage=1 00:33:02.584 --rc genhtml_function_coverage=1 00:33:02.584 --rc genhtml_legend=1 00:33:02.584 --rc geninfo_all_blocks=1 00:33:02.584 --rc geninfo_unexecuted_blocks=1 00:33:02.584 00:33:02.584 ' 00:33:02.584 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:02.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.584 --rc genhtml_branch_coverage=1 00:33:02.584 --rc genhtml_function_coverage=1 00:33:02.584 --rc genhtml_legend=1 00:33:02.584 --rc geninfo_all_blocks=1 00:33:02.585 --rc geninfo_unexecuted_blocks=1 00:33:02.585 00:33:02.585 ' 00:33:02.585 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:02.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.585 --rc genhtml_branch_coverage=1 00:33:02.585 --rc genhtml_function_coverage=1 00:33:02.585 --rc genhtml_legend=1 00:33:02.585 --rc geninfo_all_blocks=1 00:33:02.585 --rc 
geninfo_unexecuted_blocks=1 00:33:02.585 00:33:02.585 ' 00:33:02.585 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:02.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.585 --rc genhtml_branch_coverage=1 00:33:02.585 --rc genhtml_function_coverage=1 00:33:02.585 --rc genhtml_legend=1 00:33:02.585 --rc geninfo_all_blocks=1 00:33:02.585 --rc geninfo_unexecuted_blocks=1 00:33:02.585 00:33:02.585 ' 00:33:02.585 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:02.585 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:33:02.585 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:02.585 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:02.585 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:02.585 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:02.585 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:02.585 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:02.585 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:02.585 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:02.585 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:02.585 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:02.585 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:02.585 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:02.585 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:02.585 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:02.585 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:02.585 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:02.585 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:02.585 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:33:02.585 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
00:33:02.585 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:02.585 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:02.585 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.585 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.585 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.585 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:33:02.585 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.585 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:33:02.585 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:02.585 12:10:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:02.585 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:02.585 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:02.585 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:02.585 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:02.585 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:02.585 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:02.585 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:02.585 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:02.585 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:02.585 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:02.585 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:02.585 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:02.585 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:33:02.585 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:02.585 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:02.585 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:02.585 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:02.585 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:02.585 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:02.585 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:02.585 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:02.585 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:02.585 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:02.585 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:33:02.585 12:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
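Besides the transport settings, the common.sh prologue above exports connection helpers built from nvme gen-hostnqn: NVME_CONNECT='nvme connect' and NVME_HOST carrying --hostnqn/--hostid. This particular multipath run never consumes them (it stops at the single-NIC check further below), but a test that did would combine them roughly as in this hypothetical sketch, reusing the subsystem and listener values from the queue-depth run above purely for illustration:

# hypothetical usage; not executed anywhere in this log
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"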
00:33:10.724 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:10.724 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:33:10.724 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:10.724 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:10.724 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:10.724 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:10.724 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:10.724 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:10.725 12:10:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:10.725 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:10.725 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:10.725 12:10:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:10.725 Found net devices under 0000:31:00.0: cvl_0_0 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:10.725 Found net devices under 0000:31:00.1: cvl_0_1 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:10.725 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:10.726 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:10.726 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:10.726 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:10.726 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.752 ms 00:33:10.726 00:33:10.726 --- 10.0.0.2 ping statistics --- 00:33:10.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:10.726 rtt min/avg/max/mdev = 0.752/0.752/0.752/0.000 ms 00:33:10.726 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:10.726 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:10.726 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:33:10.726 00:33:10.726 --- 10.0.0.1 ping statistics --- 00:33:10.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:10.726 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:33:10.726 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:10.726 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:33:10.726 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:10.726 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:10.726 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:10.726 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:10.726 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:10.726 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:10.726 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:10.726 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:33:10.726 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:33:10.726 only one NIC for nvmf test 00:33:10.726 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:33:10.726 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:10.726 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:33:10.726 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:10.726 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:33:10.726 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:10.726 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:10.726 rmmod nvme_tcp 00:33:10.726 rmmod nvme_fabrics 00:33:10.726 rmmod nvme_keyring 00:33:10.726 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:10.726 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:33:10.726 12:10:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:33:10.726 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:33:10.726 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:10.726 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:10.726 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:10.726 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:33:10.726 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:33:10.726 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:10.726 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:33:10.726 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:10.726 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:10.726 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:10.726 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:10.726 12:10:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:12.639 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:12.639 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:33:12.639 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:33:12.639 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:12.639 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:33:12.639 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:12.639 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:33:12.639 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:12.639 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:12.639 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:12.639 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:33:12.639 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:33:12.639 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:33:12.639 12:10:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:12.639 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:12.640 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:12.640 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:33:12.640 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:33:12.640 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:12.640 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:33:12.640 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:12.640 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:12.640 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:12.640 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:12.640 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:12.640 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:12.640 00:33:12.640 real 0m10.139s 00:33:12.640 user 0m2.257s 00:33:12.640 sys 0m5.823s 00:33:12.640 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:12.640 12:10:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:12.640 ************************************ 00:33:12.640 END TEST nvmf_target_multipath 00:33:12.640 ************************************ 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:12.640 ************************************ 00:33:12.640 START TEST nvmf_zcopy 00:33:12.640 ************************************ 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:12.640 * Looking for test storage... 
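Both the early-exit path of the multipath test ("only one NIC for nvmf test", so target/multipath.sh exits 0) and its EXIT trap run nvmftestfini, whose teardown is traced above: unload the NVMe host modules, strip only the iptables rules carrying the SPDK_NVMF comment, remove the test namespace, and flush the remaining interface. A condensed sketch of that teardown follows; the three iptables lines at nvmf/common.sh@789 are consistent with the single pipeline shown here, and the namespace removal itself is hidden behind the xtrace-disabled _remove_spdk_ns helper, so the `ip netns delete` line is an assumption:

  # condensed sketch of nvmftestfini / nvmf_tcp_fini as traced above
  sync
  modprobe -v -r nvme-tcp                                # also drops nvme_fabrics / nvme_keyring, per the rmmod output
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep only the rules the test did not add
  ip netns delete cvl_0_0_ns_spdk                        # assumed: performed inside _remove_spdk_ns
  ip -4 addr flush cvl_0_1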
00:33:12.640 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:12.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:12.640 --rc genhtml_branch_coverage=1 00:33:12.640 --rc genhtml_function_coverage=1 00:33:12.640 --rc genhtml_legend=1 00:33:12.640 --rc geninfo_all_blocks=1 00:33:12.640 --rc geninfo_unexecuted_blocks=1 00:33:12.640 00:33:12.640 ' 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:12.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:12.640 --rc genhtml_branch_coverage=1 00:33:12.640 --rc genhtml_function_coverage=1 00:33:12.640 --rc genhtml_legend=1 00:33:12.640 --rc geninfo_all_blocks=1 00:33:12.640 --rc geninfo_unexecuted_blocks=1 00:33:12.640 00:33:12.640 ' 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:12.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:12.640 --rc genhtml_branch_coverage=1 00:33:12.640 --rc genhtml_function_coverage=1 00:33:12.640 --rc genhtml_legend=1 00:33:12.640 --rc geninfo_all_blocks=1 00:33:12.640 --rc geninfo_unexecuted_blocks=1 00:33:12.640 00:33:12.640 ' 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:12.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:12.640 --rc genhtml_branch_coverage=1 00:33:12.640 --rc genhtml_function_coverage=1 00:33:12.640 --rc genhtml_legend=1 00:33:12.640 --rc geninfo_all_blocks=1 00:33:12.640 --rc geninfo_unexecuted_blocks=1 00:33:12.640 00:33:12.640 ' 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:12.640 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.641 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.641 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.641 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:33:12.641 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.641 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:33:12.641 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:12.641 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:12.641 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:12.641 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:12.641 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:12.641 12:10:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:12.641 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:12.641 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:12.641 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:12.641 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:12.641 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:33:12.641 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:12.641 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:12.641 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:12.641 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:12.641 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:12.641 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:12.641 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:12.641 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:12.641 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:12.641 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:12.641 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:33:12.641 12:10:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:20.780 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:20.780 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:33:20.780 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:20.780 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:20.780 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:20.780 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:20.780 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:20.780 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:33:20.780 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:20.780 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:33:20.780 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:33:20.780 12:10:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:33:20.780 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:33:20.780 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:33:20.780 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:33:20.780 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:20.780 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:20.780 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:20.780 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:20.780 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:20.780 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:20.780 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:20.780 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:20.780 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:20.780 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:20.780 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:20.780 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:20.780 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:20.780 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:20.780 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:20.780 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:20.780 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:20.780 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:20.780 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:20.780 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:20.780 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:20.780 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:20.780 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:20.781 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:20.781 Found net devices under 0000:31:00.0: cvl_0_0 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:20.781 Found net devices under 0000:31:00.1: cvl_0_1 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:20.781 12:10:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:20.781 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:20.781 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms 00:33:20.781 00:33:20.781 --- 10.0.0.2 ping statistics --- 00:33:20.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:20.781 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:20.781 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:20.781 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:33:20.781 00:33:20.781 --- 10.0.0.1 ping statistics --- 00:33:20.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:20.781 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=2185426 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 2185426 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 2185426 ']' 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:20.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:20.781 12:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:20.781 [2024-10-11 12:10:23.051895] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:20.781 [2024-10-11 12:10:23.052987] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:33:20.781 [2024-10-11 12:10:23.053034] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:20.781 [2024-10-11 12:10:23.143470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:20.781 [2024-10-11 12:10:23.193350] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:20.781 [2024-10-11 12:10:23.193398] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:20.781 [2024-10-11 12:10:23.193406] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:20.781 [2024-10-11 12:10:23.193413] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:20.781 [2024-10-11 12:10:23.193419] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:20.781 [2024-10-11 12:10:23.194181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:20.781 [2024-10-11 12:10:23.269021] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:20.781 [2024-10-11 12:10:23.269319] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
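With networking in place, zcopy.sh starts the target inside the namespace in interrupt mode (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2, nvmfpid 2185426 above) and then provisions it over RPC on the trace lines that follow. Collected in one place, and written against scripts/rpc.py (rpc_cmd is the autotest wrapper around it, so this is an approximate equivalent rather than the script's literal text), the sequence is roughly:

  # the RPC provisioning performed on the following trace lines, one call per line
  scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy          # TCP transport with zero-copy enabled
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0                 # 32 MiB backing bdev, 4 KiB blocks
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1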
00:33:21.353 12:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:21.353 12:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:33:21.353 12:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:21.353 12:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:21.353 12:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:21.353 12:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:21.353 12:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:33:21.353 12:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:33:21.354 12:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.354 12:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:21.354 [2024-10-11 12:10:23.931025] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:21.354 12:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.354 12:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:21.354 12:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.354 12:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:21.354 12:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.354 12:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:21.354 12:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.354 12:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:21.354 [2024-10-11 12:10:23.959354] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:21.354 12:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.354 12:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:21.354 12:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.354 12:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:21.354 12:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.354 12:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:33:21.354 12:10:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.354 12:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:21.354 malloc0 00:33:21.354 12:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.354 12:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:33:21.354 12:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.354 12:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:21.354 12:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.354 12:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:33:21.354 12:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:33:21.354 12:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:33:21.354 12:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:33:21.354 12:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:33:21.354 12:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:33:21.354 { 00:33:21.354 "params": { 00:33:21.354 "name": "Nvme$subsystem", 00:33:21.354 "trtype": "$TEST_TRANSPORT", 00:33:21.354 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:21.354 "adrfam": "ipv4", 00:33:21.354 "trsvcid": "$NVMF_PORT", 00:33:21.354 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:21.354 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:21.354 "hdgst": ${hdgst:-false}, 00:33:21.354 "ddgst": ${ddgst:-false} 00:33:21.354 }, 00:33:21.354 "method": "bdev_nvme_attach_controller" 00:33:21.354 } 00:33:21.354 EOF 00:33:21.354 )") 00:33:21.354 12:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:33:21.354 12:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:33:21.354 12:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:33:21.354 12:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:33:21.354 "params": { 00:33:21.354 "name": "Nvme1", 00:33:21.354 "trtype": "tcp", 00:33:21.354 "traddr": "10.0.0.2", 00:33:21.354 "adrfam": "ipv4", 00:33:21.354 "trsvcid": "4420", 00:33:21.354 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:21.354 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:21.354 "hdgst": false, 00:33:21.354 "ddgst": false 00:33:21.354 }, 00:33:21.354 "method": "bdev_nvme_attach_controller" 00:33:21.354 }' 00:33:21.615 [2024-10-11 12:10:24.065133] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
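The first data-path pass then starts: bdevperf runs a 10-second verify workload at queue depth 128 with 8 KiB I/Os, reading its bdev configuration from a process substitution (/dev/fd/62). An equivalent invocation is sketched below, with the generated config summarized for readability; only the attach entry appears verbatim in the trace above, and the surrounding bdev-subsystem wrapper produced by gen_nvmf_target_json is not shown in this log:

  # equivalent of the target/zcopy.sh@33 launch traced above (run from the spdk repo root)
  ./build/examples/bdevperf -t 10 -q 128 -w verify -o 8192 \
      --json <(gen_nvmf_target_json)
  # for this run the generated config carries a single attach entry:
  #   { "method": "bdev_nvme_attach_controller",
  #     "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
  #                 "adrfam": "ipv4", "trsvcid": "4420",
  #                 "subnqn": "nqn.2016-06.io.spdk:cnode1",
  #                 "hostnqn": "nqn.2016-06.io.spdk:host1",
  #                 "hdgst": false, "ddgst": false } }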
00:33:21.615 [2024-10-11 12:10:24.065198] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2185644 ] 00:33:21.615 [2024-10-11 12:10:24.148403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:21.615 [2024-10-11 12:10:24.201489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:21.875 Running I/O for 10 seconds... 00:33:24.201 6325.00 IOPS, 49.41 MiB/s [2024-10-11T10:10:27.846Z] 6365.50 IOPS, 49.73 MiB/s [2024-10-11T10:10:28.802Z] 6385.67 IOPS, 49.89 MiB/s [2024-10-11T10:10:29.744Z] 6398.00 IOPS, 49.98 MiB/s [2024-10-11T10:10:30.686Z] 6870.60 IOPS, 53.68 MiB/s [2024-10-11T10:10:31.628Z] 7298.00 IOPS, 57.02 MiB/s [2024-10-11T10:10:32.568Z] 7616.29 IOPS, 59.50 MiB/s [2024-10-11T10:10:33.951Z] 7856.00 IOPS, 61.38 MiB/s [2024-10-11T10:10:34.520Z] 8043.44 IOPS, 62.84 MiB/s [2024-10-11T10:10:34.781Z] 8192.30 IOPS, 64.00 MiB/s 00:33:32.078 Latency(us) 00:33:32.078 [2024-10-11T10:10:34.781Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:32.078 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:33:32.078 Verification LBA range: start 0x0 length 0x1000 00:33:32.078 Nvme1n1 : 10.01 8197.11 64.04 0.00 0.00 15568.50 1993.39 28398.93 00:33:32.078 [2024-10-11T10:10:34.781Z] =================================================================================================================== 00:33:32.078 [2024-10-11T10:10:34.781Z] Total : 8197.11 64.04 0.00 0.00 15568.50 1993.39 28398.93 00:33:32.078 12:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2187597 00:33:32.078 12:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:33:32.078 12:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:32.078 12:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:33:32.078 12:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:33:32.078 12:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:33:32.078 12:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:33:32.078 12:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:33:32.078 12:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:33:32.078 { 00:33:32.078 "params": { 00:33:32.078 "name": "Nvme$subsystem", 00:33:32.078 "trtype": "$TEST_TRANSPORT", 00:33:32.078 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:32.078 "adrfam": "ipv4", 00:33:32.078 "trsvcid": "$NVMF_PORT", 00:33:32.078 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:32.079 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:32.079 "hdgst": ${hdgst:-false}, 00:33:32.079 "ddgst": ${ddgst:-false} 00:33:32.079 }, 00:33:32.079 "method": "bdev_nvme_attach_controller" 00:33:32.079 } 00:33:32.079 EOF 00:33:32.079 )") 00:33:32.079 [2024-10-11 12:10:34.626581] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:33:32.079 [2024-10-11 12:10:34.626610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.079 12:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:33:32.079 12:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:33:32.079 12:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:33:32.079 12:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:33:32.079 "params": { 00:33:32.079 "name": "Nvme1", 00:33:32.079 "trtype": "tcp", 00:33:32.079 "traddr": "10.0.0.2", 00:33:32.079 "adrfam": "ipv4", 00:33:32.079 "trsvcid": "4420", 00:33:32.079 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:32.079 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:32.079 "hdgst": false, 00:33:32.079 "ddgst": false 00:33:32.079 }, 00:33:32.079 "method": "bdev_nvme_attach_controller" 00:33:32.079 }' 00:33:32.079 [2024-10-11 12:10:34.638549] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.079 [2024-10-11 12:10:34.638564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.079 [2024-10-11 12:10:34.650547] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.079 [2024-10-11 12:10:34.650557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.079 [2024-10-11 12:10:34.662547] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.079 [2024-10-11 12:10:34.662556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.079 [2024-10-11 12:10:34.671801] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:33:32.079 [2024-10-11 12:10:34.671850] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2187597 ] 00:33:32.079 [2024-10-11 12:10:34.674547] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.079 [2024-10-11 12:10:34.674557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.079 [2024-10-11 12:10:34.686546] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.079 [2024-10-11 12:10:34.686556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.079 [2024-10-11 12:10:34.698547] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.079 [2024-10-11 12:10:34.698555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.079 [2024-10-11 12:10:34.710546] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.079 [2024-10-11 12:10:34.710554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.079 [2024-10-11 12:10:34.722546] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.079 [2024-10-11 12:10:34.722554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.079 [2024-10-11 12:10:34.734546] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.079 [2024-10-11 12:10:34.734554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.079 [2024-10-11 12:10:34.746546] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.079 [2024-10-11 12:10:34.746555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.079 [2024-10-11 12:10:34.747188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:32.079 [2024-10-11 12:10:34.758548] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.079 [2024-10-11 12:10:34.758558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.079 [2024-10-11 12:10:34.770547] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.079 [2024-10-11 12:10:34.770558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.079 [2024-10-11 12:10:34.776704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:32.339 [2024-10-11 12:10:34.782548] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.339 [2024-10-11 12:10:34.782558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.339 [2024-10-11 12:10:34.794552] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.339 [2024-10-11 12:10:34.794565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.339 [2024-10-11 12:10:34.806551] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.339 [2024-10-11 12:10:34.806563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.339 [2024-10-11 12:10:34.818549] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:33:32.339 [2024-10-11 12:10:34.818559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.339 [2024-10-11 12:10:34.830547] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.339 [2024-10-11 12:10:34.830561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.339 [2024-10-11 12:10:34.842554] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.339 [2024-10-11 12:10:34.842568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.339 [2024-10-11 12:10:34.854551] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.339 [2024-10-11 12:10:34.854562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.339 [2024-10-11 12:10:34.866550] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.339 [2024-10-11 12:10:34.866560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.339 [2024-10-11 12:10:34.878549] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.339 [2024-10-11 12:10:34.878560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.339 [2024-10-11 12:10:34.890549] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.340 [2024-10-11 12:10:34.890560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.340 [2024-10-11 12:10:34.939361] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.340 [2024-10-11 12:10:34.939376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.340 Running I/O for 5 seconds... 
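The interleaved *ERROR* pairs ("Requested NSID 1 already in use" followed by "Unable to add namespace") continue while bdevperf runs its I/O: the target side is repeatedly asked to add namespace 1 to a subsystem where that NSID is already present, so each attempt is rejected and the RPC layer logs that it could not add the namespace. A minimal sketch of the kind of loop that would produce this pattern, assuming a target with a Malloc0 bdev already exposed (the bdev name and timing are illustrative, not taken from the log):

  # Hypothetical reproduction of the repeated add-namespace failures seen above:
  # NSID 1 is already in use, so every attempt is rejected by the target.
  while true; do
      scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 Malloc0 || true
      sleep 0.01
  done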
00:33:32.340 [2024-10-11 12:10:34.950550] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.340 [2024-10-11 12:10:34.950564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.340 [2024-10-11 12:10:34.965211] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.340 [2024-10-11 12:10:34.965231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.340 [2024-10-11 12:10:34.979667] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.340 [2024-10-11 12:10:34.979686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.340 [2024-10-11 12:10:34.993901] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.340 [2024-10-11 12:10:34.993919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.340 [2024-10-11 12:10:35.007030] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.340 [2024-10-11 12:10:35.007047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.340 [2024-10-11 12:10:35.022230] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.340 [2024-10-11 12:10:35.022247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.340 [2024-10-11 12:10:35.034868] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.340 [2024-10-11 12:10:35.034884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.600 [2024-10-11 12:10:35.050689] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.600 [2024-10-11 12:10:35.050709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.600 [2024-10-11 12:10:35.061384] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.600 [2024-10-11 12:10:35.061401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.600 [2024-10-11 12:10:35.075073] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.600 [2024-10-11 12:10:35.075089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.600 [2024-10-11 12:10:35.090015] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.600 [2024-10-11 12:10:35.090032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.600 [2024-10-11 12:10:35.103254] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.600 [2024-10-11 12:10:35.103272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.600 [2024-10-11 12:10:35.118007] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.600 [2024-10-11 12:10:35.118033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.600 [2024-10-11 12:10:35.130254] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.600 [2024-10-11 12:10:35.130270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.600 [2024-10-11 12:10:35.142978] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.600 
[2024-10-11 12:10:35.142994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.600 [2024-10-11 12:10:35.157665] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.600 [2024-10-11 12:10:35.157682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.600 [2024-10-11 12:10:35.170946] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.600 [2024-10-11 12:10:35.170963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.600 [2024-10-11 12:10:35.186186] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.600 [2024-10-11 12:10:35.186203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.600 [2024-10-11 12:10:35.199673] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.600 [2024-10-11 12:10:35.199689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.600 [2024-10-11 12:10:35.214663] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.600 [2024-10-11 12:10:35.214679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.600 [2024-10-11 12:10:35.226219] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.600 [2024-10-11 12:10:35.226235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.600 [2024-10-11 12:10:35.239609] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.600 [2024-10-11 12:10:35.239624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.600 [2024-10-11 12:10:35.254113] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.600 [2024-10-11 12:10:35.254129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.600 [2024-10-11 12:10:35.267596] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.600 [2024-10-11 12:10:35.267612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.600 [2024-10-11 12:10:35.281878] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.600 [2024-10-11 12:10:35.281894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.600 [2024-10-11 12:10:35.294991] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.600 [2024-10-11 12:10:35.295007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.860 [2024-10-11 12:10:35.310158] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.860 [2024-10-11 12:10:35.310174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.860 [2024-10-11 12:10:35.322617] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.860 [2024-10-11 12:10:35.322633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.860 [2024-10-11 12:10:35.333320] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.860 [2024-10-11 12:10:35.333335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.860 [2024-10-11 12:10:35.347145] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.860 [2024-10-11 12:10:35.347161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.860 [2024-10-11 12:10:35.362626] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.860 [2024-10-11 12:10:35.362643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.860 [2024-10-11 12:10:35.374967] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.860 [2024-10-11 12:10:35.374982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.860 [2024-10-11 12:10:35.390171] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.860 [2024-10-11 12:10:35.390188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.860 [2024-10-11 12:10:35.403433] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.860 [2024-10-11 12:10:35.403451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.860 [2024-10-11 12:10:35.418151] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.860 [2024-10-11 12:10:35.418168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.861 [2024-10-11 12:10:35.431530] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.861 [2024-10-11 12:10:35.431546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.861 [2024-10-11 12:10:35.446025] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.861 [2024-10-11 12:10:35.446041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.861 [2024-10-11 12:10:35.459123] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.861 [2024-10-11 12:10:35.459139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.861 [2024-10-11 12:10:35.474621] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.861 [2024-10-11 12:10:35.474639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.861 [2024-10-11 12:10:35.487446] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.861 [2024-10-11 12:10:35.487462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.861 [2024-10-11 12:10:35.501993] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.861 [2024-10-11 12:10:35.502011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.861 [2024-10-11 12:10:35.514576] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.861 [2024-10-11 12:10:35.514593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.861 [2024-10-11 12:10:35.527338] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.861 [2024-10-11 12:10:35.527354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.861 [2024-10-11 12:10:35.542114] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.861 [2024-10-11 12:10:35.542131] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:32.861 [2024-10-11 12:10:35.555156] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:32.861 [2024-10-11 12:10:35.555173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.121 [2024-10-11 12:10:35.569953] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.121 [2024-10-11 12:10:35.569970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.121 [2024-10-11 12:10:35.583184] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.121 [2024-10-11 12:10:35.583200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.121 [2024-10-11 12:10:35.598221] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.121 [2024-10-11 12:10:35.598237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.121 [2024-10-11 12:10:35.611737] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.121 [2024-10-11 12:10:35.611754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.121 [2024-10-11 12:10:35.626152] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.121 [2024-10-11 12:10:35.626168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.121 [2024-10-11 12:10:35.638798] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.121 [2024-10-11 12:10:35.638814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.121 [2024-10-11 12:10:35.651418] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.121 [2024-10-11 12:10:35.651434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.121 [2024-10-11 12:10:35.666096] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.121 [2024-10-11 12:10:35.666112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.121 [2024-10-11 12:10:35.679321] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.121 [2024-10-11 12:10:35.679338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.121 [2024-10-11 12:10:35.693698] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.121 [2024-10-11 12:10:35.693715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.121 [2024-10-11 12:10:35.707455] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.121 [2024-10-11 12:10:35.707471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.121 [2024-10-11 12:10:35.721937] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.121 [2024-10-11 12:10:35.721953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.121 [2024-10-11 12:10:35.735289] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.121 [2024-10-11 12:10:35.735305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.121 [2024-10-11 12:10:35.750168] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.121 [2024-10-11 12:10:35.750186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.121 [2024-10-11 12:10:35.762935] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.121 [2024-10-11 12:10:35.762951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.121 [2024-10-11 12:10:35.777617] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.121 [2024-10-11 12:10:35.777634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.121 [2024-10-11 12:10:35.791488] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.121 [2024-10-11 12:10:35.791504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.121 [2024-10-11 12:10:35.805727] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.121 [2024-10-11 12:10:35.805744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.121 [2024-10-11 12:10:35.819585] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.121 [2024-10-11 12:10:35.819602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.394 [2024-10-11 12:10:35.833903] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.394 [2024-10-11 12:10:35.833920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.394 [2024-10-11 12:10:35.847342] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.394 [2024-10-11 12:10:35.847357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.394 [2024-10-11 12:10:35.861847] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.394 [2024-10-11 12:10:35.861863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.394 [2024-10-11 12:10:35.875378] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.394 [2024-10-11 12:10:35.875394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.394 [2024-10-11 12:10:35.889243] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.394 [2024-10-11 12:10:35.889258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.394 [2024-10-11 12:10:35.903046] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.394 [2024-10-11 12:10:35.903067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.394 [2024-10-11 12:10:35.917708] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.394 [2024-10-11 12:10:35.917724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.394 [2024-10-11 12:10:35.931269] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.394 [2024-10-11 12:10:35.931284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.394 [2024-10-11 12:10:35.945831] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.394 [2024-10-11 12:10:35.945847] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.394 17818.00 IOPS, 139.20 MiB/s [2024-10-11T10:10:36.097Z] [2024-10-11 12:10:35.959619] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.394 [2024-10-11 12:10:35.959635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.394 [2024-10-11 12:10:35.974073] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.394 [2024-10-11 12:10:35.974088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.394 [2024-10-11 12:10:35.987581] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.394 [2024-10-11 12:10:35.987597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.394 [2024-10-11 12:10:36.002010] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.394 [2024-10-11 12:10:36.002026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.394 [2024-10-11 12:10:36.014949] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.394 [2024-10-11 12:10:36.014964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.394 [2024-10-11 12:10:36.030435] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.394 [2024-10-11 12:10:36.030451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.394 [2024-10-11 12:10:36.043327] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.394 [2024-10-11 12:10:36.043343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.394 [2024-10-11 12:10:36.057968] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.394 [2024-10-11 12:10:36.057985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.394 [2024-10-11 12:10:36.071110] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.394 [2024-10-11 12:10:36.071127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.394 [2024-10-11 12:10:36.085957] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.394 [2024-10-11 12:10:36.085974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.654 [2024-10-11 12:10:36.098900] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.654 [2024-10-11 12:10:36.098916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.655 [2024-10-11 12:10:36.114379] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.655 [2024-10-11 12:10:36.114395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.655 [2024-10-11 12:10:36.127846] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.655 [2024-10-11 12:10:36.127863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.655 [2024-10-11 12:10:36.142140] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.655 [2024-10-11 12:10:36.142157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.655 [2024-10-11 
12:10:36.154731] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.655 [2024-10-11 12:10:36.154754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.655 [2024-10-11 12:10:36.170221] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.655 [2024-10-11 12:10:36.170238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.655 [2024-10-11 12:10:36.183311] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.655 [2024-10-11 12:10:36.183327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.655 [2024-10-11 12:10:36.197927] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.655 [2024-10-11 12:10:36.197944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.655 [2024-10-11 12:10:36.211469] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.655 [2024-10-11 12:10:36.211486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.655 [2024-10-11 12:10:36.226432] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.655 [2024-10-11 12:10:36.226449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.655 [2024-10-11 12:10:36.239379] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.655 [2024-10-11 12:10:36.239395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.655 [2024-10-11 12:10:36.253718] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.655 [2024-10-11 12:10:36.253736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.655 [2024-10-11 12:10:36.266938] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.655 [2024-10-11 12:10:36.266955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.655 [2024-10-11 12:10:36.282124] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.655 [2024-10-11 12:10:36.282141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.655 [2024-10-11 12:10:36.295614] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.655 [2024-10-11 12:10:36.295631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.655 [2024-10-11 12:10:36.310106] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.655 [2024-10-11 12:10:36.310123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.655 [2024-10-11 12:10:36.323505] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.655 [2024-10-11 12:10:36.323522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.655 [2024-10-11 12:10:36.338314] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.655 [2024-10-11 12:10:36.338331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.655 [2024-10-11 12:10:36.351046] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.655 [2024-10-11 12:10:36.351068] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.915 [2024-10-11 12:10:36.366409] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.915 [2024-10-11 12:10:36.366427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.915 [2024-10-11 12:10:36.378616] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.915 [2024-10-11 12:10:36.378633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.915 [2024-10-11 12:10:36.392337] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.915 [2024-10-11 12:10:36.392354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.915 [2024-10-11 12:10:36.406643] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.915 [2024-10-11 12:10:36.406660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.915 [2024-10-11 12:10:36.418826] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.915 [2024-10-11 12:10:36.418848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.915 [2024-10-11 12:10:36.434007] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.915 [2024-10-11 12:10:36.434024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.915 [2024-10-11 12:10:36.447007] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.915 [2024-10-11 12:10:36.447023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.915 [2024-10-11 12:10:36.462121] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.915 [2024-10-11 12:10:36.462139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.915 [2024-10-11 12:10:36.475069] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.915 [2024-10-11 12:10:36.475086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.915 [2024-10-11 12:10:36.490056] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.915 [2024-10-11 12:10:36.490079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.915 [2024-10-11 12:10:36.503183] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.915 [2024-10-11 12:10:36.503199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.915 [2024-10-11 12:10:36.518048] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.915 [2024-10-11 12:10:36.518071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.915 [2024-10-11 12:10:36.531411] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.915 [2024-10-11 12:10:36.531429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.915 [2024-10-11 12:10:36.546196] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.915 [2024-10-11 12:10:36.546213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.915 [2024-10-11 12:10:36.559466] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.915 [2024-10-11 12:10:36.559482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.915 [2024-10-11 12:10:36.574785] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.915 [2024-10-11 12:10:36.574802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.915 [2024-10-11 12:10:36.585318] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.915 [2024-10-11 12:10:36.585334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.915 [2024-10-11 12:10:36.599346] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.915 [2024-10-11 12:10:36.599363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:33.915 [2024-10-11 12:10:36.613909] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:33.915 [2024-10-11 12:10:36.613926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.176 [2024-10-11 12:10:36.627612] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.176 [2024-10-11 12:10:36.627630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.176 [2024-10-11 12:10:36.641910] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.176 [2024-10-11 12:10:36.641926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.176 [2024-10-11 12:10:36.655680] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.176 [2024-10-11 12:10:36.655697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.176 [2024-10-11 12:10:36.669656] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.176 [2024-10-11 12:10:36.669672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.176 [2024-10-11 12:10:36.683091] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.176 [2024-10-11 12:10:36.683117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.176 [2024-10-11 12:10:36.698389] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.176 [2024-10-11 12:10:36.698406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.176 [2024-10-11 12:10:36.710452] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.176 [2024-10-11 12:10:36.710468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.176 [2024-10-11 12:10:36.724314] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.176 [2024-10-11 12:10:36.724330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.176 [2024-10-11 12:10:36.737821] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.176 [2024-10-11 12:10:36.737839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.176 [2024-10-11 12:10:36.751797] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.176 [2024-10-11 12:10:36.751815] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.176 [2024-10-11 12:10:36.765993] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.176 [2024-10-11 12:10:36.766011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.176 [2024-10-11 12:10:36.779842] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.176 [2024-10-11 12:10:36.779858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.176 [2024-10-11 12:10:36.794477] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.176 [2024-10-11 12:10:36.794494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.176 [2024-10-11 12:10:36.806387] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.176 [2024-10-11 12:10:36.806404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.176 [2024-10-11 12:10:36.819379] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.176 [2024-10-11 12:10:36.819396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.176 [2024-10-11 12:10:36.833998] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.176 [2024-10-11 12:10:36.834014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.176 [2024-10-11 12:10:36.847336] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.176 [2024-10-11 12:10:36.847360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.176 [2024-10-11 12:10:36.862356] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.176 [2024-10-11 12:10:36.862372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.176 [2024-10-11 12:10:36.875038] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.176 [2024-10-11 12:10:36.875053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.437 [2024-10-11 12:10:36.890020] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.437 [2024-10-11 12:10:36.890036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.437 [2024-10-11 12:10:36.903172] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.437 [2024-10-11 12:10:36.903190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.437 [2024-10-11 12:10:36.918198] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.437 [2024-10-11 12:10:36.918215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.437 [2024-10-11 12:10:36.930720] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.437 [2024-10-11 12:10:36.930736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.437 [2024-10-11 12:10:36.945846] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.437 [2024-10-11 12:10:36.945862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.437 17813.50 IOPS, 139.17 MiB/s [2024-10-11T10:10:37.140Z] [2024-10-11 
12:10:36.958907] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.437 [2024-10-11 12:10:36.958923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.437 [2024-10-11 12:10:36.973693] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.437 [2024-10-11 12:10:36.973711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.437 [2024-10-11 12:10:36.987308] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.437 [2024-10-11 12:10:36.987324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.437 [2024-10-11 12:10:37.001812] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.437 [2024-10-11 12:10:37.001828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.437 [2024-10-11 12:10:37.014564] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.437 [2024-10-11 12:10:37.014581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.437 [2024-10-11 12:10:37.026952] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.437 [2024-10-11 12:10:37.026967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.438 [2024-10-11 12:10:37.041600] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.438 [2024-10-11 12:10:37.041616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.438 [2024-10-11 12:10:37.054944] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.438 [2024-10-11 12:10:37.054959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.438 [2024-10-11 12:10:37.067229] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.438 [2024-10-11 12:10:37.067244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.438 [2024-10-11 12:10:37.082138] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.438 [2024-10-11 12:10:37.082154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.438 [2024-10-11 12:10:37.095558] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.438 [2024-10-11 12:10:37.095574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.438 [2024-10-11 12:10:37.110414] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.438 [2024-10-11 12:10:37.110429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.438 [2024-10-11 12:10:37.122556] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.438 [2024-10-11 12:10:37.122571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.438 [2024-10-11 12:10:37.135086] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.438 [2024-10-11 12:10:37.135101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.698 [2024-10-11 12:10:37.150067] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.698 [2024-10-11 12:10:37.150084] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.698 [2024-10-11 12:10:37.163840] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.698 [2024-10-11 12:10:37.163857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.698 [2024-10-11 12:10:37.177726] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.698 [2024-10-11 12:10:37.177742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.698 [2024-10-11 12:10:37.191391] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.698 [2024-10-11 12:10:37.191407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.698 [2024-10-11 12:10:37.206749] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.698 [2024-10-11 12:10:37.206766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.698 [2024-10-11 12:10:37.221457] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.698 [2024-10-11 12:10:37.221474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.698 [2024-10-11 12:10:37.234807] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.698 [2024-10-11 12:10:37.234822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.698 [2024-10-11 12:10:37.250080] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.698 [2024-10-11 12:10:37.250097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.698 [2024-10-11 12:10:37.263597] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.698 [2024-10-11 12:10:37.263613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.698 [2024-10-11 12:10:37.277476] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.698 [2024-10-11 12:10:37.277491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.698 [2024-10-11 12:10:37.291394] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.698 [2024-10-11 12:10:37.291408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.698 [2024-10-11 12:10:37.306212] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.698 [2024-10-11 12:10:37.306228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.698 [2024-10-11 12:10:37.319125] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.698 [2024-10-11 12:10:37.319140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.698 [2024-10-11 12:10:37.333806] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.698 [2024-10-11 12:10:37.333822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.698 [2024-10-11 12:10:37.347077] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.698 [2024-10-11 12:10:37.347092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.698 [2024-10-11 12:10:37.361901] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.698 [2024-10-11 12:10:37.361917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.698 [2024-10-11 12:10:37.375223] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.698 [2024-10-11 12:10:37.375241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.698 [2024-10-11 12:10:37.389805] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.698 [2024-10-11 12:10:37.389822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.959 [2024-10-11 12:10:37.403732] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.959 [2024-10-11 12:10:37.403749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.959 [2024-10-11 12:10:37.417449] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.959 [2024-10-11 12:10:37.417465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.959 [2024-10-11 12:10:37.431635] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.959 [2024-10-11 12:10:37.431650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.959 [2024-10-11 12:10:37.445672] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.959 [2024-10-11 12:10:37.445688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.959 [2024-10-11 12:10:37.459152] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.959 [2024-10-11 12:10:37.459167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.959 [2024-10-11 12:10:37.474214] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.959 [2024-10-11 12:10:37.474230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.959 [2024-10-11 12:10:37.486921] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.959 [2024-10-11 12:10:37.486936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.959 [2024-10-11 12:10:37.501837] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.959 [2024-10-11 12:10:37.501853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.959 [2024-10-11 12:10:37.515091] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.959 [2024-10-11 12:10:37.515106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.959 [2024-10-11 12:10:37.530247] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.959 [2024-10-11 12:10:37.530263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.959 [2024-10-11 12:10:37.544561] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.959 [2024-10-11 12:10:37.544576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.959 [2024-10-11 12:10:37.558177] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.959 [2024-10-11 12:10:37.558194] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.959 [2024-10-11 12:10:37.571387] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.959 [2024-10-11 12:10:37.571403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.959 [2024-10-11 12:10:37.585937] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.959 [2024-10-11 12:10:37.585953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.959 [2024-10-11 12:10:37.599642] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.959 [2024-10-11 12:10:37.599658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.959 [2024-10-11 12:10:37.613958] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.959 [2024-10-11 12:10:37.613975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.959 [2024-10-11 12:10:37.627118] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.959 [2024-10-11 12:10:37.627134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.959 [2024-10-11 12:10:37.642022] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.959 [2024-10-11 12:10:37.642038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:34.959 [2024-10-11 12:10:37.654820] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:34.959 [2024-10-11 12:10:37.654836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.220 [2024-10-11 12:10:37.670529] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.220 [2024-10-11 12:10:37.670547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.220 [2024-10-11 12:10:37.681018] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.220 [2024-10-11 12:10:37.681035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.220 [2024-10-11 12:10:37.694853] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.220 [2024-10-11 12:10:37.694869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.220 [2024-10-11 12:10:37.709551] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.220 [2024-10-11 12:10:37.709568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.220 [2024-10-11 12:10:37.723194] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.220 [2024-10-11 12:10:37.723220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.220 [2024-10-11 12:10:37.737826] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.220 [2024-10-11 12:10:37.737843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.220 [2024-10-11 12:10:37.751284] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.220 [2024-10-11 12:10:37.751301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.220 [2024-10-11 12:10:37.766258] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.220 [2024-10-11 12:10:37.766275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.220 [2024-10-11 12:10:37.778970] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.220 [2024-10-11 12:10:37.778986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.220 [2024-10-11 12:10:37.794123] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.220 [2024-10-11 12:10:37.794140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.220 [2024-10-11 12:10:37.807484] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.220 [2024-10-11 12:10:37.807500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.220 [2024-10-11 12:10:37.822361] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.220 [2024-10-11 12:10:37.822378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.220 [2024-10-11 12:10:37.834440] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.220 [2024-10-11 12:10:37.834456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.220 [2024-10-11 12:10:37.847247] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.220 [2024-10-11 12:10:37.847263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.220 [2024-10-11 12:10:37.861978] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.220 [2024-10-11 12:10:37.861995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.220 [2024-10-11 12:10:37.875146] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.220 [2024-10-11 12:10:37.875163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.220 [2024-10-11 12:10:37.889524] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.220 [2024-10-11 12:10:37.889540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.220 [2024-10-11 12:10:37.903004] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.220 [2024-10-11 12:10:37.903021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.220 [2024-10-11 12:10:37.918374] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.220 [2024-10-11 12:10:37.918393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.480 [2024-10-11 12:10:37.930673] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.480 [2024-10-11 12:10:37.930690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.480 [2024-10-11 12:10:37.945980] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.480 [2024-10-11 12:10:37.945997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.480 17780.00 IOPS, 138.91 MiB/s [2024-10-11T10:10:38.183Z] [2024-10-11 12:10:37.959668] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:33:35.480 [2024-10-11 12:10:37.959685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.480 [2024-10-11 12:10:37.973981] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.480 [2024-10-11 12:10:37.973997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.480 [2024-10-11 12:10:37.987693] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.480 [2024-10-11 12:10:37.987715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.480 [2024-10-11 12:10:38.002159] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.480 [2024-10-11 12:10:38.002176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.480 [2024-10-11 12:10:38.014421] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.480 [2024-10-11 12:10:38.014437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.480 [2024-10-11 12:10:38.027096] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.480 [2024-10-11 12:10:38.027112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.480 [2024-10-11 12:10:38.042336] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.480 [2024-10-11 12:10:38.042352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.480 [2024-10-11 12:10:38.055401] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.480 [2024-10-11 12:10:38.055418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.480 [2024-10-11 12:10:38.069823] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.480 [2024-10-11 12:10:38.069840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.480 [2024-10-11 12:10:38.082814] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.480 [2024-10-11 12:10:38.082830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.480 [2024-10-11 12:10:38.098499] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.480 [2024-10-11 12:10:38.098516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.480 [2024-10-11 12:10:38.110836] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.480 [2024-10-11 12:10:38.110852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.480 [2024-10-11 12:10:38.126146] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.480 [2024-10-11 12:10:38.126164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.480 [2024-10-11 12:10:38.138691] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.480 [2024-10-11 12:10:38.138707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.480 [2024-10-11 12:10:38.154213] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.480 [2024-10-11 12:10:38.154229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.481 [2024-10-11 12:10:38.166976] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.481 [2024-10-11 12:10:38.166993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.481 [2024-10-11 12:10:38.181912] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.481 [2024-10-11 12:10:38.181928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.741 [2024-10-11 12:10:38.195686] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.741 [2024-10-11 12:10:38.195705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.741 [2024-10-11 12:10:38.210348] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.741 [2024-10-11 12:10:38.210365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.741 [2024-10-11 12:10:38.223000] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.741 [2024-10-11 12:10:38.223016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.741 [2024-10-11 12:10:38.238032] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.741 [2024-10-11 12:10:38.238049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.741 [2024-10-11 12:10:38.250746] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.741 [2024-10-11 12:10:38.250774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.741 [2024-10-11 12:10:38.262310] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.741 [2024-10-11 12:10:38.262327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.741 [2024-10-11 12:10:38.275044] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.741 [2024-10-11 12:10:38.275059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.741 [2024-10-11 12:10:38.289416] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.741 [2024-10-11 12:10:38.289431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.741 [2024-10-11 12:10:38.303045] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.741 [2024-10-11 12:10:38.303061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.741 [2024-10-11 12:10:38.318135] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.741 [2024-10-11 12:10:38.318152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.741 [2024-10-11 12:10:38.331877] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.741 [2024-10-11 12:10:38.331894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.741 [2024-10-11 12:10:38.346647] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.741 [2024-10-11 12:10:38.346664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.741 [2024-10-11 12:10:38.359035] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.741 [2024-10-11 12:10:38.359051] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.741 [2024-10-11 12:10:38.374370] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.741 [2024-10-11 12:10:38.374386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.741 [2024-10-11 12:10:38.386394] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.741 [2024-10-11 12:10:38.386411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.741 [2024-10-11 12:10:38.399365] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.741 [2024-10-11 12:10:38.399382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.741 [2024-10-11 12:10:38.414264] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.741 [2024-10-11 12:10:38.414281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.741 [2024-10-11 12:10:38.427526] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.741 [2024-10-11 12:10:38.427543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:35.741 [2024-10-11 12:10:38.442099] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:35.741 [2024-10-11 12:10:38.442115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.001 [2024-10-11 12:10:38.454884] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.001 [2024-10-11 12:10:38.454900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.001 [2024-10-11 12:10:38.470050] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.001 [2024-10-11 12:10:38.470073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.001 [2024-10-11 12:10:38.482542] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.001 [2024-10-11 12:10:38.482558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.001 [2024-10-11 12:10:38.494369] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.001 [2024-10-11 12:10:38.494386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.001 [2024-10-11 12:10:38.507793] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.001 [2024-10-11 12:10:38.507810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.001 [2024-10-11 12:10:38.522047] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.001 [2024-10-11 12:10:38.522068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.001 [2024-10-11 12:10:38.535712] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.001 [2024-10-11 12:10:38.535729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.001 [2024-10-11 12:10:38.550053] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.001 [2024-10-11 12:10:38.550074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.001 [2024-10-11 12:10:38.562883] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.001 [2024-10-11 12:10:38.562899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.001 [2024-10-11 12:10:38.578116] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.001 [2024-10-11 12:10:38.578134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.001 [2024-10-11 12:10:38.591705] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.001 [2024-10-11 12:10:38.591721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.001 [2024-10-11 12:10:38.606127] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.001 [2024-10-11 12:10:38.606143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.001 [2024-10-11 12:10:38.619597] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.001 [2024-10-11 12:10:38.619613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.001 [2024-10-11 12:10:38.633686] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.001 [2024-10-11 12:10:38.633702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.001 [2024-10-11 12:10:38.646840] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.001 [2024-10-11 12:10:38.646855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.001 [2024-10-11 12:10:38.662333] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.001 [2024-10-11 12:10:38.662349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.001 [2024-10-11 12:10:38.674709] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.001 [2024-10-11 12:10:38.674725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.001 [2024-10-11 12:10:38.686658] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.001 [2024-10-11 12:10:38.686674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.001 [2024-10-11 12:10:38.699613] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.001 [2024-10-11 12:10:38.699628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.262 [2024-10-11 12:10:38.713932] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.262 [2024-10-11 12:10:38.713948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.262 [2024-10-11 12:10:38.727641] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.262 [2024-10-11 12:10:38.727658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.262 [2024-10-11 12:10:38.741872] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.262 [2024-10-11 12:10:38.741889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.262 [2024-10-11 12:10:38.755443] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.262 [2024-10-11 12:10:38.755459] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.262 [2024-10-11 12:10:38.770362] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.262 [2024-10-11 12:10:38.770379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.262 [2024-10-11 12:10:38.783211] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.262 [2024-10-11 12:10:38.783226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.262 [2024-10-11 12:10:38.797905] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.262 [2024-10-11 12:10:38.797921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.262 [2024-10-11 12:10:38.811446] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.262 [2024-10-11 12:10:38.811462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.262 [2024-10-11 12:10:38.826244] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.262 [2024-10-11 12:10:38.826261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.262 [2024-10-11 12:10:38.839488] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.262 [2024-10-11 12:10:38.839504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.262 [2024-10-11 12:10:38.853429] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.262 [2024-10-11 12:10:38.853445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.262 [2024-10-11 12:10:38.867216] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.262 [2024-10-11 12:10:38.867233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.262 [2024-10-11 12:10:38.881909] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.262 [2024-10-11 12:10:38.881925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.262 [2024-10-11 12:10:38.895178] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.262 [2024-10-11 12:10:38.895194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.262 [2024-10-11 12:10:38.910170] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.262 [2024-10-11 12:10:38.910187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.262 [2024-10-11 12:10:38.922959] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.262 [2024-10-11 12:10:38.922975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.262 [2024-10-11 12:10:38.937969] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.262 [2024-10-11 12:10:38.937985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.262 [2024-10-11 12:10:38.951647] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.262 [2024-10-11 12:10:38.951664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.522 17788.25 IOPS, 138.97 MiB/s [2024-10-11T10:10:39.225Z] [2024-10-11 
12:10:38.966340] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.522 [2024-10-11 12:10:38.966356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.522 [2024-10-11 12:10:38.977073] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.522 [2024-10-11 12:10:38.977089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.522 [2024-10-11 12:10:38.991778] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.522 [2024-10-11 12:10:38.991794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.523 [2024-10-11 12:10:39.005701] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.523 [2024-10-11 12:10:39.005716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.523 [2024-10-11 12:10:39.019520] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.523 [2024-10-11 12:10:39.019543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.523 [2024-10-11 12:10:39.033871] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.523 [2024-10-11 12:10:39.033887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.523 [2024-10-11 12:10:39.047126] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.523 [2024-10-11 12:10:39.047142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.523 [2024-10-11 12:10:39.062539] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.523 [2024-10-11 12:10:39.062556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.523 [2024-10-11 12:10:39.074726] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.523 [2024-10-11 12:10:39.074742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.523 [2024-10-11 12:10:39.087518] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.523 [2024-10-11 12:10:39.087534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.523 [2024-10-11 12:10:39.101984] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.523 [2024-10-11 12:10:39.102000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.523 [2024-10-11 12:10:39.115842] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.523 [2024-10-11 12:10:39.115859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.523 [2024-10-11 12:10:39.129923] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.523 [2024-10-11 12:10:39.129940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.523 [2024-10-11 12:10:39.143481] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.523 [2024-10-11 12:10:39.143497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.523 [2024-10-11 12:10:39.158099] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.523 [2024-10-11 12:10:39.158114] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.523 [2024-10-11 12:10:39.171647] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.523 [2024-10-11 12:10:39.171663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.523 [2024-10-11 12:10:39.186376] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.523 [2024-10-11 12:10:39.186392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.523 [2024-10-11 12:10:39.198305] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.523 [2024-10-11 12:10:39.198321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.523 [2024-10-11 12:10:39.211853] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.523 [2024-10-11 12:10:39.211869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.523 [2024-10-11 12:10:39.225904] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.523 [2024-10-11 12:10:39.225920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.783 [2024-10-11 12:10:39.239164] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.784 [2024-10-11 12:10:39.239188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.784 [2024-10-11 12:10:39.254096] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.784 [2024-10-11 12:10:39.254112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.784 [2024-10-11 12:10:39.266342] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.784 [2024-10-11 12:10:39.266359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.784 [2024-10-11 12:10:39.280019] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.784 [2024-10-11 12:10:39.280044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.784 [2024-10-11 12:10:39.294484] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.784 [2024-10-11 12:10:39.294502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.784 [2024-10-11 12:10:39.306570] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.784 [2024-10-11 12:10:39.306586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.784 [2024-10-11 12:10:39.319555] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.784 [2024-10-11 12:10:39.319571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.784 [2024-10-11 12:10:39.333840] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.784 [2024-10-11 12:10:39.333857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.784 [2024-10-11 12:10:39.347400] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.784 [2024-10-11 12:10:39.347416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.784 [2024-10-11 12:10:39.361973] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.784 [2024-10-11 12:10:39.361990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.784 [2024-10-11 12:10:39.374981] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.784 [2024-10-11 12:10:39.374996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.784 [2024-10-11 12:10:39.390161] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.784 [2024-10-11 12:10:39.390178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.784 [2024-10-11 12:10:39.402314] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.784 [2024-10-11 12:10:39.402331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.784 [2024-10-11 12:10:39.416536] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.784 [2024-10-11 12:10:39.416552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.784 [2024-10-11 12:10:39.429870] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.784 [2024-10-11 12:10:39.429885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.784 [2024-10-11 12:10:39.443643] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.784 [2024-10-11 12:10:39.443659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.784 [2024-10-11 12:10:39.457825] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.784 [2024-10-11 12:10:39.457841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.784 [2024-10-11 12:10:39.470824] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.784 [2024-10-11 12:10:39.470839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:36.784 [2024-10-11 12:10:39.485615] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:36.784 [2024-10-11 12:10:39.485631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.044 [2024-10-11 12:10:39.499125] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.044 [2024-10-11 12:10:39.499141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.044 [2024-10-11 12:10:39.513959] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.044 [2024-10-11 12:10:39.513975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.044 [2024-10-11 12:10:39.527179] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.044 [2024-10-11 12:10:39.527195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.045 [2024-10-11 12:10:39.542102] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.045 [2024-10-11 12:10:39.542126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.045 [2024-10-11 12:10:39.554962] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.045 [2024-10-11 12:10:39.554978] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.045 [2024-10-11 12:10:39.569815] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.045 [2024-10-11 12:10:39.569831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.045 [2024-10-11 12:10:39.583270] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.045 [2024-10-11 12:10:39.583286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.045 [2024-10-11 12:10:39.597332] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.045 [2024-10-11 12:10:39.597349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.045 [2024-10-11 12:10:39.611161] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.045 [2024-10-11 12:10:39.611178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.045 [2024-10-11 12:10:39.626148] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.045 [2024-10-11 12:10:39.626165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.045 [2024-10-11 12:10:39.639524] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.045 [2024-10-11 12:10:39.639541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.045 [2024-10-11 12:10:39.654055] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.045 [2024-10-11 12:10:39.654077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.045 [2024-10-11 12:10:39.666869] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.045 [2024-10-11 12:10:39.666885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.045 [2024-10-11 12:10:39.681996] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.045 [2024-10-11 12:10:39.682013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.045 [2024-10-11 12:10:39.695769] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.045 [2024-10-11 12:10:39.695785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.045 [2024-10-11 12:10:39.709787] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.045 [2024-10-11 12:10:39.709803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.045 [2024-10-11 12:10:39.723373] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.045 [2024-10-11 12:10:39.723389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.045 [2024-10-11 12:10:39.737920] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.045 [2024-10-11 12:10:39.737936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.305 [2024-10-11 12:10:39.751405] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.305 [2024-10-11 12:10:39.751423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.305 [2024-10-11 12:10:39.766542] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.305 [2024-10-11 12:10:39.766559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.305 [2024-10-11 12:10:39.778399] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.305 [2024-10-11 12:10:39.778416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.305 [2024-10-11 12:10:39.791554] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.305 [2024-10-11 12:10:39.791570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.305 [2024-10-11 12:10:39.805848] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.305 [2024-10-11 12:10:39.805876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.305 [2024-10-11 12:10:39.819313] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.305 [2024-10-11 12:10:39.819329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.305 [2024-10-11 12:10:39.834597] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.305 [2024-10-11 12:10:39.834613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.305 [2024-10-11 12:10:39.847325] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.305 [2024-10-11 12:10:39.847341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.305 [2024-10-11 12:10:39.861771] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.305 [2024-10-11 12:10:39.861787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.305 [2024-10-11 12:10:39.875648] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.305 [2024-10-11 12:10:39.875665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.305 [2024-10-11 12:10:39.889928] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.305 [2024-10-11 12:10:39.889945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.305 [2024-10-11 12:10:39.902938] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.305 [2024-10-11 12:10:39.902954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.305 [2024-10-11 12:10:39.918198] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.305 [2024-10-11 12:10:39.918215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.305 [2024-10-11 12:10:39.930651] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.305 [2024-10-11 12:10:39.930669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.305 [2024-10-11 12:10:39.941596] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.305 [2024-10-11 12:10:39.941612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.305 [2024-10-11 12:10:39.955313] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.305 [2024-10-11 12:10:39.955330] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.305 17793.80 IOPS, 139.01 MiB/s [2024-10-11T10:10:40.008Z] [2024-10-11 12:10:39.969231] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.305 [2024-10-11 12:10:39.969248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.305 00:33:37.305 Latency(us) 00:33:37.305 [2024-10-11T10:10:40.008Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:37.305 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:33:37.305 Nvme1n1 : 5.01 17793.52 139.01 0.00 0.00 7186.52 2102.61 11851.09 00:33:37.305 [2024-10-11T10:10:40.008Z] =================================================================================================================== 00:33:37.306 [2024-10-11T10:10:40.009Z] Total : 17793.52 139.01 0.00 0.00 7186.52 2102.61 11851.09 00:33:37.306 [2024-10-11 12:10:39.978551] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.306 [2024-10-11 12:10:39.978565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.306 [2024-10-11 12:10:39.990558] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.306 [2024-10-11 12:10:39.990570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.306 [2024-10-11 12:10:40.002593] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.306 [2024-10-11 12:10:40.002620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.566 [2024-10-11 12:10:40.014589] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.566 [2024-10-11 12:10:40.014614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.566 [2024-10-11 12:10:40.026555] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.566 [2024-10-11 12:10:40.026566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.566 [2024-10-11 12:10:40.038598] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.566 [2024-10-11 12:10:40.038626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.566 [2024-10-11 12:10:40.050552] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.566 [2024-10-11 12:10:40.050562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.567 [2024-10-11 12:10:40.062550] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.567 [2024-10-11 12:10:40.062560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.567 [2024-10-11 12:10:40.074548] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.567 [2024-10-11 12:10:40.074558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2187597) - No such process 00:33:37.567 12:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2187597 00:33:37.567 12:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
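The long run of paired subsystem.c:2128 / nvmf_rpc.c:1517 errors above is the zcopy test repeatedly asking the target to attach a namespace as NSID 1 while NSID 1 is already attached and while the I/O job whose IOPS progress lines and latency summary appear above is still running, so every attempt is rejected and logged; target/zcopy.sh@52 then frees the NSID with nvmf_subsystem_remove_ns. A minimal sketch of triggering and clearing the same error by hand, with scripts/rpc.py standing in for the harness's rpc_cmd wrapper; the base bdev name malloc0 is an assumption for illustration, not something this log states for the original namespace:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # NSID 1 is already attached to the subsystem, so this add is expected to fail with
  # "Requested NSID 1 already in use" / "Unable to add namespace" as seen in the log.
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # Detach NSID 1 again, mirroring target/zcopy.sh@52 above.
  $SPDK/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1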
00:33:37.567 12:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:37.567 12:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:33:37.567 12:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:37.567 12:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:33:37.567 12:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:37.567 12:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:33:37.567 delay0
00:33:37.567 12:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:37.567 12:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:33:37.567 12:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:37.567 12:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:33:37.567 12:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:37.567 12:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:33:37.567 [2024-10-11 12:10:40.267256] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:33:45.713 Initializing NVMe Controllers
00:33:45.713 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:33:45.713 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:33:45.713 Initialization complete. Launching workers.
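The trace just above sets up the abort phase of the test: it wraps malloc0 in a delay bdev named delay0 with one-second average and tail latencies, exposes delay0 as NSID 1 of cnode1, and then runs the abort example against the TCP listener, presumably so that queued commands are still outstanding when the abort requests are issued. A hedged sketch of the same sequence outside the harness, with scripts/rpc.py standing in for rpc_cmd and every flag taken from the trace above:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Create a delay bdev on top of malloc0; -r/-t are average/p99 read latency and
  # -w/-n average/p99 write latency, in microseconds (1000000 us = 1 s).
  $SPDK/scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  # Attach the slow bdev as namespace 1 of the subsystem under test.
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # Drive it with the abort example: core mask 0x1, 5 s runtime, queue depth 64, 50/50 randrw.
  $SPDK/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'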
00:33:45.713 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 9987 00:33:45.713 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 10249, failed to submit 58 00:33:45.713 success 10073, unsuccessful 176, failed 0 00:33:45.713 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:33:45.713 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:33:45.713 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:45.713 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:33:45.713 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:45.713 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:33:45.713 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:45.713 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:45.713 rmmod nvme_tcp 00:33:45.713 rmmod nvme_fabrics 00:33:45.713 rmmod nvme_keyring 00:33:45.713 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:45.713 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:33:45.713 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:33:45.713 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 2185426 ']' 00:33:45.713 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 2185426 00:33:45.713 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 2185426 ']' 00:33:45.713 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 2185426 00:33:45.713 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:33:45.713 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:45.713 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2185426 00:33:45.713 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:45.713 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:45.713 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2185426' 00:33:45.713 killing process with pid 2185426 00:33:45.713 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 2185426 00:33:45.713 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 2185426 00:33:45.713 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:45.713 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:45.713 12:10:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:45.713 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:33:45.713 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:33:45.713 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:45.713 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:33:45.713 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:45.713 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:45.713 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:45.713 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:45.713 12:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:47.097 12:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:47.358 00:33:47.358 real 0m34.746s 00:33:47.358 user 0m43.499s 00:33:47.358 sys 0m13.126s 00:33:47.358 12:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:47.358 12:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:47.358 ************************************ 00:33:47.358 END TEST nvmf_zcopy 00:33:47.358 ************************************ 00:33:47.358 12:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:33:47.358 12:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:33:47.358 12:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:47.358 12:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:47.358 ************************************ 00:33:47.358 START TEST nvmf_nmic 00:33:47.358 ************************************ 00:33:47.358 12:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:33:47.358 * Looking for test storage... 
00:33:47.358 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:47.358 12:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:47.358 12:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:33:47.358 12:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:47.619 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:47.619 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:47.619 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:47.619 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:47.619 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:33:47.619 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:33:47.619 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:33:47.619 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:33:47.619 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:33:47.619 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:33:47.619 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:33:47.619 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:47.619 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:33:47.619 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:33:47.619 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:47.619 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:47.619 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:33:47.619 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:33:47.619 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:47.619 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:33:47.619 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:33:47.619 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:33:47.619 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:33:47.619 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:47.619 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:33:47.619 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:47.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:47.620 --rc genhtml_branch_coverage=1 00:33:47.620 --rc genhtml_function_coverage=1 00:33:47.620 --rc genhtml_legend=1 00:33:47.620 --rc geninfo_all_blocks=1 00:33:47.620 --rc geninfo_unexecuted_blocks=1 00:33:47.620 00:33:47.620 ' 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:47.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:47.620 --rc genhtml_branch_coverage=1 00:33:47.620 --rc genhtml_function_coverage=1 00:33:47.620 --rc genhtml_legend=1 00:33:47.620 --rc geninfo_all_blocks=1 00:33:47.620 --rc geninfo_unexecuted_blocks=1 00:33:47.620 00:33:47.620 ' 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:47.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:47.620 --rc genhtml_branch_coverage=1 00:33:47.620 --rc genhtml_function_coverage=1 00:33:47.620 --rc genhtml_legend=1 00:33:47.620 --rc geninfo_all_blocks=1 00:33:47.620 --rc geninfo_unexecuted_blocks=1 00:33:47.620 00:33:47.620 ' 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:47.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:47.620 --rc genhtml_branch_coverage=1 00:33:47.620 --rc genhtml_function_coverage=1 00:33:47.620 --rc genhtml_legend=1 00:33:47.620 --rc geninfo_all_blocks=1 00:33:47.620 --rc geninfo_unexecuted_blocks=1 00:33:47.620 00:33:47.620 ' 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:47.620 12:10:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:33:47.620 12:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:55.896 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:55.896 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:33:55.896 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:55.896 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:55.896 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:55.896 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:55.896 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:55.896 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:33:55.896 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:55.897 12:10:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:55.897 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:55.897 12:10:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:55.897 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:55.897 Found net devices under 0000:31:00.0: cvl_0_0 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:55.897 
12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:55.897 Found net devices under 0000:31:00.1: cvl_0_1 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
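The nvmf_tcp_init trace above builds the two-host topology for this run: one physical port (cvl_0_0) is moved into a private network namespace to act as the target, while its peer port (cvl_0_1) stays in the default namespace as the initiator. A minimal standalone sketch of the same setup, assuming the cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addressing seen in this log (TARGET_NS is just a local shorthand; the harness does the equivalent via nvmf/common.sh):

    # condensed from the nvmf_tcp_init steps traced above
    TARGET_NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$TARGET_NS"
    ip link set cvl_0_0 netns "$TARGET_NS"                              # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0      # target side
    ip link set cvl_0_1 up
    ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
    ip netns exec "$TARGET_NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2 && ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1 # sanity-check both directions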
00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:55.897 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:55.897 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.558 ms 00:33:55.897 00:33:55.897 --- 10.0.0.2 ping statistics --- 00:33:55.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:55.897 rtt min/avg/max/mdev = 0.558/0.558/0.558/0.000 ms 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:55.897 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:55.897 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:33:55.897 00:33:55.897 --- 10.0.0.1 ping statistics --- 00:33:55.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:55.897 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:55.897 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:33:55.898 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:55.898 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:55.898 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:55.898 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=2194243 00:33:55.898 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@508 -- # waitforlisten 2194243 00:33:55.898 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:33:55.898 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 2194243 ']' 00:33:55.898 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:55.898 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:55.898 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:55.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:55.898 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:55.898 12:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:55.898 [2024-10-11 12:10:57.842826] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:55.898 [2024-10-11 12:10:57.843946] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:33:55.898 [2024-10-11 12:10:57.843997] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:55.898 [2024-10-11 12:10:57.934351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:55.898 [2024-10-11 12:10:57.989438] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:55.898 [2024-10-11 12:10:57.989492] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:55.898 [2024-10-11 12:10:57.989500] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:55.898 [2024-10-11 12:10:57.989508] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:55.898 [2024-10-11 12:10:57.989514] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:55.898 [2024-10-11 12:10:57.991647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:55.898 [2024-10-11 12:10:57.991809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:55.898 [2024-10-11 12:10:57.991965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:55.898 [2024-10-11 12:10:57.991966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:55.898 [2024-10-11 12:10:58.069350] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:55.898 [2024-10-11 12:10:58.070360] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:55.898 [2024-10-11 12:10:58.070541] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:33:55.898 [2024-10-11 12:10:58.070958] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:55.898 [2024-10-11 12:10:58.071002] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:56.159 12:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:56.159 12:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:33:56.159 12:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:56.159 12:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:56.159 12:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:56.159 12:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:56.159 12:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:56.159 12:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.159 12:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:56.159 [2024-10-11 12:10:58.700972] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:56.159 12:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.159 12:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:56.159 12:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.159 12:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:56.159 Malloc0 00:33:56.159 12:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.159 12:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:56.159 12:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.159 12:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:56.159 12:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.159 12:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:56.159 12:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.159 12:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:56.159 12:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.159 12:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
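Everything the nmic test provisions on the target side above goes through JSON-RPC once nvmf_tgt is listening on /var/tmp/spdk.sock. A hedged sketch of the equivalent direct calls (rpc_cmd in the trace is a thin wrapper around scripts/rpc.py; NS_EXEC and RPC below are just local shorthands), reusing the sizes, NQN and serial from this run:

    NS_EXEC="ip netns exec cvl_0_0_ns_spdk"
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # start the target inside the namespace (interrupt mode, cores 0-3), as nvmfappstart does
    $NS_EXEC /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
    # the harness waits for /var/tmp/spdk.sock before issuing RPCs (waitforlisten)
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0                 # 64 MiB bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420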
00:33:56.159 12:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.159 12:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:56.159 [2024-10-11 12:10:58.793289] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:56.159 12:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.159 12:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:33:56.159 test case1: single bdev can't be used in multiple subsystems 00:33:56.159 12:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:33:56.159 12:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.159 12:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:56.159 12:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.159 12:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:56.159 12:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.159 12:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:56.159 12:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.159 12:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:33:56.159 12:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:33:56.159 12:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.159 12:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:56.159 [2024-10-11 12:10:58.828582] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:33:56.159 [2024-10-11 12:10:58.828606] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:33:56.159 [2024-10-11 12:10:58.828615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.159 request: 00:33:56.159 { 00:33:56.159 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:33:56.159 "namespace": { 00:33:56.159 "bdev_name": "Malloc0", 00:33:56.159 "no_auto_visible": false 00:33:56.159 }, 00:33:56.159 "method": "nvmf_subsystem_add_ns", 00:33:56.159 "req_id": 1 00:33:56.159 } 00:33:56.159 Got JSON-RPC error response 00:33:56.159 response: 00:33:56.159 { 00:33:56.159 "code": -32602, 00:33:56.159 "message": "Invalid parameters" 00:33:56.159 } 00:33:56.159 12:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:33:56.159 12:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:33:56.159 12:10:58 
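Test case 1 above is a deliberate negative check: Malloc0 is already claimed exclusive_write by cnode1, so attaching it to a second subsystem must fail with the Invalid parameters error shown in the JSON-RPC response. A sketch of the same check, assuming the $RPC shorthand from the previous sketch:

    # expected to fail: Malloc0 is already claimed by nqn.2016-06.io.spdk:cnode1
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    if $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
        echo "namespace add unexpectedly succeeded" >&2
        exit 1
    fi
    echo ' Adding namespace failed - expected result.'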
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:33:56.159 12:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:33:56.159 Adding namespace failed - expected result. 00:33:56.159 12:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:33:56.159 test case2: host connect to nvmf target in multiple paths 00:33:56.159 12:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:56.159 12:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.159 12:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:56.159 [2024-10-11 12:10:58.840716] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:56.159 12:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.159 12:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:56.730 12:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:33:57.302 12:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:33:57.302 12:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:33:57.302 12:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:33:57.302 12:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:33:57.302 12:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:33:59.214 12:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:33:59.214 12:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:33:59.214 12:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:33:59.214 12:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:33:59.214 12:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:33:59.214 12:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:33:59.214 12:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:33:59.214 [global] 00:33:59.214 thread=1 00:33:59.214 invalidate=1 
00:33:59.214 rw=write 00:33:59.214 time_based=1 00:33:59.214 runtime=1 00:33:59.214 ioengine=libaio 00:33:59.214 direct=1 00:33:59.214 bs=4096 00:33:59.214 iodepth=1 00:33:59.214 norandommap=0 00:33:59.214 numjobs=1 00:33:59.214 00:33:59.214 verify_dump=1 00:33:59.214 verify_backlog=512 00:33:59.214 verify_state_save=0 00:33:59.214 do_verify=1 00:33:59.214 verify=crc32c-intel 00:33:59.214 [job0] 00:33:59.214 filename=/dev/nvme0n1 00:33:59.214 Could not set queue depth (nvme0n1) 00:33:59.782 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:59.782 fio-3.35 00:33:59.782 Starting 1 thread 00:34:00.722 00:34:00.722 job0: (groupid=0, jobs=1): err= 0: pid=2195388: Fri Oct 11 12:11:03 2024 00:34:00.722 read: IOPS=17, BW=70.1KiB/s (71.8kB/s)(72.0KiB/1027msec) 00:34:00.722 slat (nsec): min=25752, max=30844, avg=26447.28, stdev=1140.88 00:34:00.722 clat (usec): min=626, max=42025, avg=39611.61, stdev=9732.35 00:34:00.722 lat (usec): min=652, max=42051, avg=39638.06, stdev=9732.42 00:34:00.722 clat percentiles (usec): 00:34:00.722 | 1.00th=[ 627], 5.00th=[ 627], 10.00th=[41157], 20.00th=[41681], 00:34:00.722 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:34:00.722 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:00.722 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:00.722 | 99.99th=[42206] 00:34:00.722 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:34:00.722 slat (usec): min=10, max=30944, avg=92.66, stdev=1366.14 00:34:00.722 clat (usec): min=163, max=758, avg=512.31, stdev=101.62 00:34:00.722 lat (usec): min=173, max=31702, avg=604.97, stdev=1380.98 00:34:00.722 clat percentiles (usec): 00:34:00.722 | 1.00th=[ 253], 5.00th=[ 334], 10.00th=[ 379], 20.00th=[ 429], 00:34:00.722 | 30.00th=[ 474], 40.00th=[ 486], 50.00th=[ 506], 60.00th=[ 537], 00:34:00.722 | 70.00th=[ 586], 80.00th=[ 611], 90.00th=[ 635], 95.00th=[ 660], 00:34:00.722 | 99.00th=[ 701], 99.50th=[ 725], 99.90th=[ 758], 99.95th=[ 758], 00:34:00.722 | 99.99th=[ 758] 00:34:00.722 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:34:00.722 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:00.722 lat (usec) : 250=0.94%, 500=44.15%, 750=51.51%, 1000=0.19% 00:34:00.722 lat (msec) : 50=3.21% 00:34:00.722 cpu : usr=0.88%, sys=1.46%, ctx=533, majf=0, minf=1 00:34:00.722 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:00.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:00.722 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:00.722 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:00.722 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:00.722 00:34:00.722 Run status group 0 (all jobs): 00:34:00.722 READ: bw=70.1KiB/s (71.8kB/s), 70.1KiB/s-70.1KiB/s (71.8kB/s-71.8kB/s), io=72.0KiB (73.7kB), run=1027-1027msec 00:34:00.722 WRITE: bw=1994KiB/s (2042kB/s), 1994KiB/s-1994KiB/s (2042kB/s-2042kB/s), io=2048KiB (2097kB), run=1027-1027msec 00:34:00.722 00:34:00.722 Disk stats (read/write): 00:34:00.722 nvme0n1: ios=39/512, merge=0/0, ticks=1517/233, in_queue=1750, util=98.80% 00:34:00.722 12:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:00.983 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:34:00.983 
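Test case 2 then exercises the host side: the subsystem gets a second listener on port 4421, the initiator connects over both paths, and a short verified write job runs against the resulting namespace before disconnecting. A hedged sketch condensed from the connect commands and the fio job file dumped above ($NVME_HOSTNQN and $NVME_HOSTID stand for the host NQN and ID generated for this run):

    # two paths into the same subsystem (ports 4420 and 4421 on 10.0.0.2)
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
    # wait until the namespace shows up with the expected serial (waitforserial)
    until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 2; done
    # 1-second time-based 4 KiB write pass with crc32c verification, as in job0 above
    fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
        --rw=write --bs=4096 --iodepth=1 --numjobs=1 --time_based --runtime=1 \
        --do_verify=1 --verify=crc32c-intel --verify_backlog=512 --verify_dump=1
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # "disconnected 2 controller(s)" above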
12:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:00.983 12:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:34:00.983 12:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:34:00.983 12:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:00.983 12:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:34:00.983 12:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:00.983 12:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:34:00.983 12:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:34:00.983 12:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:34:00.983 12:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:34:00.983 12:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:34:00.983 12:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:00.983 12:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:34:00.983 12:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:00.983 12:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:00.983 rmmod nvme_tcp 00:34:00.983 rmmod nvme_fabrics 00:34:00.983 rmmod nvme_keyring 00:34:00.983 12:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:00.983 12:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:34:00.983 12:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:34:00.983 12:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 2194243 ']' 00:34:00.983 12:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 2194243 00:34:00.983 12:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 2194243 ']' 00:34:00.983 12:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 2194243 00:34:00.983 12:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:34:00.983 12:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:00.983 12:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2194243 00:34:00.983 12:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:00.983 12:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:00.983 12:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 2194243' 00:34:00.983 killing process with pid 2194243 00:34:00.983 12:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 2194243 00:34:00.983 12:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 2194243 00:34:01.243 12:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:34:01.243 12:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:01.243 12:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:01.243 12:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:34:01.243 12:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:34:01.243 12:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:34:01.243 12:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:34:01.243 12:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:01.243 12:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:01.243 12:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:01.243 12:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:01.243 12:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:03.787 12:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:03.787 00:34:03.787 real 0m15.994s 00:34:03.787 user 0m35.779s 00:34:03.787 sys 0m7.531s 00:34:03.787 12:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:03.787 12:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:03.787 ************************************ 00:34:03.787 END TEST nvmf_nmic 00:34:03.787 ************************************ 00:34:03.787 12:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:34:03.787 12:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:34:03.787 12:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:03.787 12:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:03.787 ************************************ 00:34:03.787 START TEST nvmf_fio_target 00:34:03.787 ************************************ 00:34:03.787 12:11:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:34:03.787 * Looking for test storage... 
00:34:03.787 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:03.787 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:03.787 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:34:03.787 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:03.787 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:03.787 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:03.787 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:03.787 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:03.787 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:34:03.787 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:34:03.787 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:34:03.787 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:34:03.787 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:34:03.787 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:34:03.787 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:34:03.787 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:03.787 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:34:03.787 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:34:03.787 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:03.787 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:03.787 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:34:03.787 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:34:03.787 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:03.787 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:34:03.787 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:34:03.787 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:34:03.787 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:34:03.787 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:03.787 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:34:03.787 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:34:03.787 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:03.787 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:03.787 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:34:03.787 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:03.787 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:03.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:03.787 --rc genhtml_branch_coverage=1 00:34:03.787 --rc genhtml_function_coverage=1 00:34:03.787 --rc genhtml_legend=1 00:34:03.787 --rc geninfo_all_blocks=1 00:34:03.787 --rc geninfo_unexecuted_blocks=1 00:34:03.787 00:34:03.787 ' 00:34:03.787 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:03.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:03.787 --rc genhtml_branch_coverage=1 00:34:03.787 --rc genhtml_function_coverage=1 00:34:03.787 --rc genhtml_legend=1 00:34:03.787 --rc geninfo_all_blocks=1 00:34:03.787 --rc geninfo_unexecuted_blocks=1 00:34:03.787 00:34:03.787 ' 00:34:03.787 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:03.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:03.787 --rc genhtml_branch_coverage=1 00:34:03.787 --rc genhtml_function_coverage=1 00:34:03.787 --rc genhtml_legend=1 00:34:03.787 --rc geninfo_all_blocks=1 00:34:03.787 --rc geninfo_unexecuted_blocks=1 00:34:03.787 00:34:03.787 ' 00:34:03.787 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:03.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:03.787 --rc genhtml_branch_coverage=1 00:34:03.787 --rc genhtml_function_coverage=1 00:34:03.787 --rc genhtml_legend=1 00:34:03.787 --rc geninfo_all_blocks=1 00:34:03.787 --rc geninfo_unexecuted_blocks=1 00:34:03.787 
00:34:03.787 ' 00:34:03.787 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:03.787 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:34:03.787 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:03.787 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:03.788 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:03.788 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:03.788 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:03.788 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:03.788 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:03.788 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:03.788 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:03.788 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:03.788 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:03.788 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:03.788 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:03.788 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:03.788 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:03.788 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:03.788 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:03.788 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:34:03.788 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:03.788 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:03.788 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:03.788 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.788 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.788 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.788 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:34:03.788 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.788 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:34:03.788 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:03.788 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:03.788 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:03.788 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:03.788 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:34:03.788 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:03.788 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:03.788 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:03.788 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:03.788 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:03.788 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:03.788 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:03.788 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:03.788 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:34:03.788 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:03.788 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:03.788 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:03.788 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:03.788 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:03.788 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:03.788 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:03.788 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:03.788 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:03.788 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:03.788 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:34:03.788 12:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:11.933 12:11:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:11.933 12:11:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:11.933 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:11.933 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:11.933 Found net 
devices under 0000:31:00.0: cvl_0_0 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:11.933 Found net devices under 0000:31:00.1: cvl_0_1 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:11.933 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:11.934 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:11.934 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:11.934 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:11.934 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:11.934 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:11.934 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:11.934 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:11.934 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:34:11.934 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:11.934 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:11.934 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:11.934 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:11.934 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:11.934 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:11.934 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:11.934 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:11.934 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:11.934 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:11.934 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:11.934 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:11.934 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:11.934 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.539 ms 00:34:11.934 00:34:11.934 --- 10.0.0.2 ping statistics --- 00:34:11.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:11.934 rtt min/avg/max/mdev = 0.539/0.539/0.539/0.000 ms 00:34:11.934 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:11.934 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:11.934 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:34:11.934 00:34:11.934 --- 10.0.0.1 ping statistics --- 00:34:11.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:11.934 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:34:11.934 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:11.934 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:34:11.934 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:34:11.934 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:11.934 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:11.934 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:11.934 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:11.934 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:11.934 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:11.934 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:34:11.934 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:11.934 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:11.934 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:11.934 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=2199786 00:34:11.934 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 2199786 00:34:11.934 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:11.934 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 2199786 ']' 00:34:11.934 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:11.934 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:11.934 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:11.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:11.934 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:11.934 12:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:11.934 [2024-10-11 12:11:13.946286] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:11.934 [2024-10-11 12:11:13.947403] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:34:11.934 [2024-10-11 12:11:13.947454] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:11.934 [2024-10-11 12:11:14.038577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:11.934 [2024-10-11 12:11:14.091810] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:11.934 [2024-10-11 12:11:14.091863] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:11.934 [2024-10-11 12:11:14.091872] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:11.934 [2024-10-11 12:11:14.091879] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:11.934 [2024-10-11 12:11:14.091886] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:11.934 [2024-10-11 12:11:14.093943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:11.934 [2024-10-11 12:11:14.094120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:11.934 [2024-10-11 12:11:14.094213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:11.934 [2024-10-11 12:11:14.094216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:11.934 [2024-10-11 12:11:14.171929] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:11.934 [2024-10-11 12:11:14.173214] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:11.934 [2024-10-11 12:11:14.173363] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:11.934 [2024-10-11 12:11:14.173738] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:11.934 [2024-10-11 12:11:14.173778] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
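Note: condensed for reference, the nvmftestinit/nvmfappstart sequence traced above reduces to the standalone sketch below. Interface names, IP addresses, the TCP port and the nvmf_tgt flags are taken verbatim from this run; the binary path is shown relative and the socket-wait loop is a simplified stand-in for the waitforlisten helper, not the exact common.sh implementation.

#!/usr/bin/env bash
# Sketch of the TCP test-bed setup used above: the target-side port (cvl_0_0)
# is moved into its own network namespace so initiator and target traffic
# really crosses the link between the two physical ports.
set -euo pipefail

NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk        # as set in nvmf/common.sh@265 above
TARGET_IF=cvl_0_0   INITIATOR_IF=cvl_0_1     # net devices found under 0000:31:00.0 / .1
TARGET_IP=10.0.0.2  INITIATOR_IP=10.0.0.1

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$NVMF_TARGET_NAMESPACE"
ip link set "$TARGET_IF" netns "$NVMF_TARGET_NAMESPACE"

ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set "$TARGET_IF" up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up

# Allow NVMe/TCP (port 4420) in from the initiator-facing interface.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

# Sanity-check reachability in both directions before launching the target.
ping -c 1 "$TARGET_IP"
ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 "$INITIATOR_IP"

# Start the target inside the namespace with the same flags as logged
# (interrupt mode, 4 cores); adjust the path to your SPDK build tree.
ip netns exec "$NVMF_TARGET_NAMESPACE" \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
nvmfpid=$!

# Simplified wait for the RPC socket (assumption: stand-in for waitforlisten).
until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done
echo "nvmf_tgt (pid $nvmfpid) is up"

Because UNIX-domain sockets are not network-namespaced, rpc.py on the host can still reach /var/tmp/spdk.sock while all NVMe/TCP traffic is forced across the cvl_0_0/cvl_0_1 link, which is why the subsequent fio.sh RPC calls run without an "ip netns exec" prefix.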
00:34:12.195 12:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:12.195 12:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:34:12.195 12:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:12.195 12:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:12.195 12:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:12.195 12:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:12.195 12:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:12.456 [2024-10-11 12:11:14.987534] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:12.456 12:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:12.717 12:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:34:12.717 12:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:12.977 12:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:34:12.977 12:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:12.977 12:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:34:12.977 12:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:13.237 12:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:34:13.237 12:11:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:34:13.497 12:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:13.758 12:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:34:13.758 12:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:13.758 12:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:34:13.758 12:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:14.019 12:11:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:34:14.019 12:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:34:14.281 12:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:14.541 12:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:14.541 12:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:14.541 12:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:14.541 12:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:34:14.802 12:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:15.062 [2024-10-11 12:11:17.571478] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:15.062 12:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:34:15.323 12:11:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:34:15.323 12:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:15.895 12:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:34:15.895 12:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:34:15.895 12:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:34:15.895 12:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:34:15.895 12:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:34:15.895 12:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:34:17.807 12:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:34:17.807 12:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o 
NAME,SERIAL 00:34:17.807 12:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:34:17.808 12:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:34:17.808 12:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:34:17.808 12:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:34:17.808 12:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:17.808 [global] 00:34:17.808 thread=1 00:34:17.808 invalidate=1 00:34:17.808 rw=write 00:34:17.808 time_based=1 00:34:17.808 runtime=1 00:34:17.808 ioengine=libaio 00:34:17.808 direct=1 00:34:17.808 bs=4096 00:34:17.808 iodepth=1 00:34:17.808 norandommap=0 00:34:17.808 numjobs=1 00:34:17.808 00:34:17.808 verify_dump=1 00:34:17.808 verify_backlog=512 00:34:17.808 verify_state_save=0 00:34:17.808 do_verify=1 00:34:17.808 verify=crc32c-intel 00:34:17.808 [job0] 00:34:17.808 filename=/dev/nvme0n1 00:34:17.808 [job1] 00:34:17.808 filename=/dev/nvme0n2 00:34:17.808 [job2] 00:34:17.808 filename=/dev/nvme0n3 00:34:17.808 [job3] 00:34:17.808 filename=/dev/nvme0n4 00:34:18.089 Could not set queue depth (nvme0n1) 00:34:18.089 Could not set queue depth (nvme0n2) 00:34:18.089 Could not set queue depth (nvme0n3) 00:34:18.089 Could not set queue depth (nvme0n4) 00:34:18.355 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:18.355 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:18.355 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:18.355 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:18.355 fio-3.35 00:34:18.355 Starting 4 threads 00:34:19.743 00:34:19.743 job0: (groupid=0, jobs=1): err= 0: pid=2201368: Fri Oct 11 12:11:22 2024 00:34:19.743 read: IOPS=253, BW=1015KiB/s (1039kB/s)(1016KiB/1001msec) 00:34:19.743 slat (nsec): min=6834, max=38486, avg=20963.37, stdev=7456.96 00:34:19.743 clat (usec): min=298, max=41963, avg=2942.94, stdev=8867.83 00:34:19.743 lat (usec): min=306, max=41994, avg=2963.91, stdev=8869.43 00:34:19.743 clat percentiles (usec): 00:34:19.743 | 1.00th=[ 627], 5.00th=[ 709], 10.00th=[ 766], 20.00th=[ 799], 00:34:19.743 | 30.00th=[ 832], 40.00th=[ 865], 50.00th=[ 889], 60.00th=[ 922], 00:34:19.743 | 70.00th=[ 955], 80.00th=[ 996], 90.00th=[ 1090], 95.00th=[40633], 00:34:19.743 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:34:19.743 | 99.99th=[42206] 00:34:19.743 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:34:19.743 slat (usec): min=9, max=113, avg=29.62, stdev=10.16 00:34:19.743 clat (usec): min=136, max=902, avg=443.43, stdev=93.53 00:34:19.743 lat (usec): min=169, max=935, avg=473.05, stdev=97.31 00:34:19.743 clat percentiles (usec): 00:34:19.743 | 1.00th=[ 239], 5.00th=[ 289], 10.00th=[ 318], 20.00th=[ 355], 00:34:19.743 | 30.00th=[ 392], 40.00th=[ 429], 50.00th=[ 453], 60.00th=[ 478], 00:34:19.743 | 70.00th=[ 498], 80.00th=[ 523], 90.00th=[ 553], 95.00th=[ 586], 00:34:19.743 | 99.00th=[ 627], 
99.50th=[ 660], 99.90th=[ 906], 99.95th=[ 906], 00:34:19.743 | 99.99th=[ 906] 00:34:19.743 bw ( KiB/s): min= 4096, max= 4096, per=40.92%, avg=4096.00, stdev= 0.00, samples=1 00:34:19.743 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:19.743 lat (usec) : 250=0.91%, 500=46.74%, 750=22.06%, 1000=24.28% 00:34:19.743 lat (msec) : 2=4.31%, 50=1.70% 00:34:19.743 cpu : usr=1.40%, sys=2.00%, ctx=767, majf=0, minf=1 00:34:19.743 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:19.743 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:19.743 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:19.743 issued rwts: total=254,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:19.743 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:19.744 job1: (groupid=0, jobs=1): err= 0: pid=2201369: Fri Oct 11 12:11:22 2024 00:34:19.744 read: IOPS=18, BW=74.3KiB/s (76.1kB/s)(76.0KiB/1023msec) 00:34:19.744 slat (nsec): min=8156, max=14155, avg=10830.68, stdev=1063.74 00:34:19.744 clat (usec): min=40642, max=41162, avg=40967.80, stdev=131.40 00:34:19.744 lat (usec): min=40650, max=41173, avg=40978.63, stdev=131.44 00:34:19.744 clat percentiles (usec): 00:34:19.744 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:34:19.744 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:19.744 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:19.744 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:34:19.744 | 99.99th=[41157] 00:34:19.744 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:34:19.744 slat (nsec): min=6744, max=59153, avg=21435.02, stdev=13354.00 00:34:19.744 clat (usec): min=177, max=957, avg=450.09, stdev=130.20 00:34:19.744 lat (usec): min=220, max=969, avg=471.52, stdev=131.61 00:34:19.744 clat percentiles (usec): 00:34:19.744 | 1.00th=[ 212], 5.00th=[ 281], 10.00th=[ 297], 20.00th=[ 338], 00:34:19.744 | 30.00th=[ 367], 40.00th=[ 392], 50.00th=[ 445], 60.00th=[ 478], 00:34:19.744 | 70.00th=[ 506], 80.00th=[ 545], 90.00th=[ 611], 95.00th=[ 701], 00:34:19.744 | 99.00th=[ 824], 99.50th=[ 906], 99.90th=[ 955], 99.95th=[ 955], 00:34:19.744 | 99.99th=[ 955] 00:34:19.744 bw ( KiB/s): min= 4096, max= 4096, per=40.92%, avg=4096.00, stdev= 0.00, samples=1 00:34:19.744 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:19.744 lat (usec) : 250=2.07%, 500=63.28%, 750=28.44%, 1000=2.64% 00:34:19.744 lat (msec) : 50=3.58% 00:34:19.744 cpu : usr=0.39%, sys=1.37%, ctx=532, majf=0, minf=1 00:34:19.744 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:19.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:19.744 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:19.744 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:19.744 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:19.744 job2: (groupid=0, jobs=1): err= 0: pid=2201370: Fri Oct 11 12:11:22 2024 00:34:19.744 read: IOPS=18, BW=74.5KiB/s (76.3kB/s)(76.0KiB/1020msec) 00:34:19.744 slat (nsec): min=27089, max=27854, avg=27374.53, stdev=224.28 00:34:19.744 clat (usec): min=40808, max=41915, avg=41211.25, stdev=389.89 00:34:19.744 lat (usec): min=40835, max=41942, avg=41238.62, stdev=389.88 00:34:19.744 clat percentiles (usec): 00:34:19.744 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 
00:34:19.744 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:19.744 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:34:19.744 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:34:19.744 | 99.99th=[41681] 00:34:19.744 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:34:19.744 slat (usec): min=9, max=2017, avg=33.95, stdev=97.11 00:34:19.744 clat (usec): min=168, max=683, avg=420.77, stdev=93.95 00:34:19.744 lat (usec): min=179, max=2466, avg=454.73, stdev=137.75 00:34:19.744 clat percentiles (usec): 00:34:19.744 | 1.00th=[ 227], 5.00th=[ 281], 10.00th=[ 297], 20.00th=[ 334], 00:34:19.744 | 30.00th=[ 359], 40.00th=[ 388], 50.00th=[ 412], 60.00th=[ 457], 00:34:19.744 | 70.00th=[ 482], 80.00th=[ 510], 90.00th=[ 545], 95.00th=[ 570], 00:34:19.744 | 99.00th=[ 603], 99.50th=[ 635], 99.90th=[ 685], 99.95th=[ 685], 00:34:19.744 | 99.99th=[ 685] 00:34:19.744 bw ( KiB/s): min= 4096, max= 4096, per=40.92%, avg=4096.00, stdev= 0.00, samples=1 00:34:19.744 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:19.744 lat (usec) : 250=1.88%, 500=70.24%, 750=24.29% 00:34:19.744 lat (msec) : 50=3.58% 00:34:19.744 cpu : usr=0.39%, sys=1.67%, ctx=534, majf=0, minf=1 00:34:19.744 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:19.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:19.744 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:19.744 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:19.744 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:19.744 job3: (groupid=0, jobs=1): err= 0: pid=2201371: Fri Oct 11 12:11:22 2024 00:34:19.744 read: IOPS=609, BW=2438KiB/s (2496kB/s)(2440KiB/1001msec) 00:34:19.744 slat (nsec): min=7089, max=60131, avg=22915.04, stdev=7431.05 00:34:19.744 clat (usec): min=337, max=953, avg=790.16, stdev=81.33 00:34:19.744 lat (usec): min=345, max=979, avg=813.08, stdev=83.25 00:34:19.744 clat percentiles (usec): 00:34:19.744 | 1.00th=[ 529], 5.00th=[ 644], 10.00th=[ 685], 20.00th=[ 742], 00:34:19.744 | 30.00th=[ 766], 40.00th=[ 783], 50.00th=[ 799], 60.00th=[ 816], 00:34:19.744 | 70.00th=[ 832], 80.00th=[ 857], 90.00th=[ 881], 95.00th=[ 898], 00:34:19.744 | 99.00th=[ 930], 99.50th=[ 930], 99.90th=[ 955], 99.95th=[ 955], 00:34:19.744 | 99.99th=[ 955] 00:34:19.744 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:34:19.744 slat (usec): min=9, max=113, avg=28.43, stdev=10.01 00:34:19.744 clat (usec): min=169, max=1072, avg=452.93, stdev=99.63 00:34:19.744 lat (usec): min=202, max=1084, avg=481.36, stdev=102.32 00:34:19.744 clat percentiles (usec): 00:34:19.744 | 1.00th=[ 243], 5.00th=[ 293], 10.00th=[ 330], 20.00th=[ 367], 00:34:19.744 | 30.00th=[ 416], 40.00th=[ 441], 50.00th=[ 457], 60.00th=[ 469], 00:34:19.744 | 70.00th=[ 490], 80.00th=[ 515], 90.00th=[ 562], 95.00th=[ 619], 00:34:19.744 | 99.00th=[ 775], 99.50th=[ 824], 99.90th=[ 848], 99.95th=[ 1074], 00:34:19.744 | 99.99th=[ 1074] 00:34:19.744 bw ( KiB/s): min= 4096, max= 4096, per=40.92%, avg=4096.00, stdev= 0.00, samples=1 00:34:19.744 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:19.744 lat (usec) : 250=0.92%, 500=46.70%, 750=22.83%, 1000=29.50% 00:34:19.744 lat (msec) : 2=0.06% 00:34:19.744 cpu : usr=2.40%, sys=4.40%, ctx=1635, majf=0, minf=1 00:34:19.744 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:34:19.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:19.744 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:19.744 issued rwts: total=610,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:19.744 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:19.744 00:34:19.744 Run status group 0 (all jobs): 00:34:19.744 READ: bw=3527KiB/s (3612kB/s), 74.3KiB/s-2438KiB/s (76.1kB/s-2496kB/s), io=3608KiB (3695kB), run=1001-1023msec 00:34:19.744 WRITE: bw=9.77MiB/s (10.2MB/s), 2002KiB/s-4092KiB/s (2050kB/s-4190kB/s), io=10.0MiB (10.5MB), run=1001-1023msec 00:34:19.744 00:34:19.744 Disk stats (read/write): 00:34:19.744 nvme0n1: ios=125/512, merge=0/0, ticks=638/221, in_queue=859, util=86.27% 00:34:19.744 nvme0n2: ios=56/512, merge=0/0, ticks=705/225, in_queue=930, util=91.11% 00:34:19.744 nvme0n3: ios=76/512, merge=0/0, ticks=798/212, in_queue=1010, util=91.95% 00:34:19.744 nvme0n4: ios=569/837, merge=0/0, ticks=526/376, in_queue=902, util=97.43% 00:34:19.744 12:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:34:19.744 [global] 00:34:19.744 thread=1 00:34:19.744 invalidate=1 00:34:19.744 rw=randwrite 00:34:19.744 time_based=1 00:34:19.744 runtime=1 00:34:19.744 ioengine=libaio 00:34:19.744 direct=1 00:34:19.744 bs=4096 00:34:19.744 iodepth=1 00:34:19.744 norandommap=0 00:34:19.744 numjobs=1 00:34:19.744 00:34:19.744 verify_dump=1 00:34:19.744 verify_backlog=512 00:34:19.744 verify_state_save=0 00:34:19.744 do_verify=1 00:34:19.744 verify=crc32c-intel 00:34:19.744 [job0] 00:34:19.744 filename=/dev/nvme0n1 00:34:19.744 [job1] 00:34:19.744 filename=/dev/nvme0n2 00:34:19.744 [job2] 00:34:19.744 filename=/dev/nvme0n3 00:34:19.744 [job3] 00:34:19.744 filename=/dev/nvme0n4 00:34:19.744 Could not set queue depth (nvme0n1) 00:34:19.744 Could not set queue depth (nvme0n2) 00:34:19.744 Could not set queue depth (nvme0n3) 00:34:19.744 Could not set queue depth (nvme0n4) 00:34:20.004 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:20.004 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:20.004 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:20.004 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:20.004 fio-3.35 00:34:20.004 Starting 4 threads 00:34:21.415 00:34:21.415 job0: (groupid=0, jobs=1): err= 0: pid=2201889: Fri Oct 11 12:11:23 2024 00:34:21.415 read: IOPS=435, BW=1742KiB/s (1784kB/s)(1744KiB/1001msec) 00:34:21.415 slat (nsec): min=7960, max=60573, avg=25544.75, stdev=4224.42 00:34:21.415 clat (usec): min=446, max=41675, avg=1388.61, stdev=2743.81 00:34:21.415 lat (usec): min=471, max=41718, avg=1414.15, stdev=2744.40 00:34:21.415 clat percentiles (usec): 00:34:21.415 | 1.00th=[ 529], 5.00th=[ 881], 10.00th=[ 1004], 20.00th=[ 1090], 00:34:21.415 | 30.00th=[ 1139], 40.00th=[ 1172], 50.00th=[ 1221], 60.00th=[ 1254], 00:34:21.415 | 70.00th=[ 1303], 80.00th=[ 1352], 90.00th=[ 1418], 95.00th=[ 1450], 00:34:21.415 | 99.00th=[ 1631], 99.50th=[ 1647], 99.90th=[41681], 99.95th=[41681], 00:34:21.415 | 99.99th=[41681] 00:34:21.415 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:34:21.415 slat (nsec): min=9187, 
max=61595, avg=30093.03, stdev=6883.42 00:34:21.415 clat (usec): min=142, max=1103, avg=704.10, stdev=167.05 00:34:21.415 lat (usec): min=174, max=1136, avg=734.20, stdev=168.13 00:34:21.415 clat percentiles (usec): 00:34:21.415 | 1.00th=[ 326], 5.00th=[ 408], 10.00th=[ 478], 20.00th=[ 553], 00:34:21.415 | 30.00th=[ 619], 40.00th=[ 676], 50.00th=[ 717], 60.00th=[ 758], 00:34:21.415 | 70.00th=[ 807], 80.00th=[ 857], 90.00th=[ 914], 95.00th=[ 955], 00:34:21.415 | 99.00th=[ 1012], 99.50th=[ 1057], 99.90th=[ 1106], 99.95th=[ 1106], 00:34:21.415 | 99.99th=[ 1106] 00:34:21.415 bw ( KiB/s): min= 4096, max= 4096, per=46.78%, avg=4096.00, stdev= 0.00, samples=1 00:34:21.415 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:21.415 lat (usec) : 250=0.42%, 500=6.43%, 750=25.95%, 1000=24.79% 00:34:21.415 lat (msec) : 2=42.19%, 50=0.21% 00:34:21.415 cpu : usr=1.50%, sys=2.70%, ctx=948, majf=0, minf=1 00:34:21.415 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:21.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.415 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.415 issued rwts: total=436,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.415 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:21.415 job1: (groupid=0, jobs=1): err= 0: pid=2201891: Fri Oct 11 12:11:23 2024 00:34:21.415 read: IOPS=17, BW=71.5KiB/s (73.2kB/s)(72.0KiB/1007msec) 00:34:21.415 slat (nsec): min=26868, max=31926, avg=27506.56, stdev=1120.92 00:34:21.415 clat (usec): min=40864, max=41979, avg=41120.13, stdev=356.38 00:34:21.415 lat (usec): min=40891, max=42006, avg=41147.63, stdev=356.22 00:34:21.415 clat percentiles (usec): 00:34:21.415 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:34:21.415 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:21.415 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:34:21.415 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:21.415 | 99.99th=[42206] 00:34:21.415 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:34:21.415 slat (nsec): min=9577, max=55166, avg=29052.19, stdev=10666.62 00:34:21.415 clat (usec): min=237, max=793, avg=478.60, stdev=109.48 00:34:21.415 lat (usec): min=249, max=827, avg=507.66, stdev=114.00 00:34:21.415 clat percentiles (usec): 00:34:21.415 | 1.00th=[ 281], 5.00th=[ 297], 10.00th=[ 338], 20.00th=[ 379], 00:34:21.415 | 30.00th=[ 420], 40.00th=[ 457], 50.00th=[ 478], 60.00th=[ 494], 00:34:21.415 | 70.00th=[ 523], 80.00th=[ 570], 90.00th=[ 644], 95.00th=[ 676], 00:34:21.415 | 99.00th=[ 734], 99.50th=[ 758], 99.90th=[ 791], 99.95th=[ 791], 00:34:21.415 | 99.99th=[ 791] 00:34:21.415 bw ( KiB/s): min= 4096, max= 4096, per=46.78%, avg=4096.00, stdev= 0.00, samples=1 00:34:21.415 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:21.415 lat (usec) : 250=0.38%, 500=59.81%, 750=35.85%, 1000=0.57% 00:34:21.415 lat (msec) : 50=3.40% 00:34:21.415 cpu : usr=0.89%, sys=1.29%, ctx=531, majf=0, minf=1 00:34:21.415 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:21.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.415 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.415 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.415 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:21.415 job2: 
(groupid=0, jobs=1): err= 0: pid=2201897: Fri Oct 11 12:11:23 2024 00:34:21.415 read: IOPS=129, BW=516KiB/s (528kB/s)(532KiB/1031msec) 00:34:21.415 slat (nsec): min=7123, max=58658, avg=23655.56, stdev=7765.51 00:34:21.415 clat (usec): min=435, max=41958, avg=5900.25, stdev=13554.24 00:34:21.415 lat (usec): min=461, max=41985, avg=5923.91, stdev=13554.84 00:34:21.415 clat percentiles (usec): 00:34:21.415 | 1.00th=[ 553], 5.00th=[ 586], 10.00th=[ 619], 20.00th=[ 660], 00:34:21.415 | 30.00th=[ 693], 40.00th=[ 717], 50.00th=[ 742], 60.00th=[ 775], 00:34:21.415 | 70.00th=[ 816], 80.00th=[ 873], 90.00th=[41157], 95.00th=[41157], 00:34:21.415 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:21.415 | 99.99th=[42206] 00:34:21.415 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:34:21.415 slat (nsec): min=9457, max=51350, avg=28739.06, stdev=9058.26 00:34:21.415 clat (usec): min=151, max=743, avg=435.12, stdev=119.73 00:34:21.415 lat (usec): min=179, max=775, avg=463.86, stdev=122.73 00:34:21.415 clat percentiles (usec): 00:34:21.415 | 1.00th=[ 249], 5.00th=[ 277], 10.00th=[ 297], 20.00th=[ 322], 00:34:21.415 | 30.00th=[ 351], 40.00th=[ 379], 50.00th=[ 408], 60.00th=[ 453], 00:34:21.415 | 70.00th=[ 519], 80.00th=[ 570], 90.00th=[ 603], 95.00th=[ 627], 00:34:21.415 | 99.00th=[ 676], 99.50th=[ 693], 99.90th=[ 742], 99.95th=[ 742], 00:34:21.415 | 99.99th=[ 742] 00:34:21.415 bw ( KiB/s): min= 4096, max= 4096, per=46.78%, avg=4096.00, stdev= 0.00, samples=1 00:34:21.415 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:21.415 lat (usec) : 250=0.93%, 500=51.78%, 750=37.21%, 1000=7.29% 00:34:21.415 lat (msec) : 2=0.16%, 50=2.64% 00:34:21.415 cpu : usr=0.78%, sys=1.84%, ctx=645, majf=0, minf=1 00:34:21.415 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:21.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.415 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.415 issued rwts: total=133,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.415 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:21.415 job3: (groupid=0, jobs=1): err= 0: pid=2201898: Fri Oct 11 12:11:23 2024 00:34:21.415 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:21.415 slat (nsec): min=24392, max=45277, avg=26792.18, stdev=2206.24 00:34:21.415 clat (usec): min=579, max=1444, avg=1010.64, stdev=122.18 00:34:21.415 lat (usec): min=606, max=1471, avg=1037.44, stdev=122.04 00:34:21.415 clat percentiles (usec): 00:34:21.415 | 1.00th=[ 717], 5.00th=[ 824], 10.00th=[ 865], 20.00th=[ 914], 00:34:21.415 | 30.00th=[ 947], 40.00th=[ 979], 50.00th=[ 996], 60.00th=[ 1037], 00:34:21.415 | 70.00th=[ 1074], 80.00th=[ 1106], 90.00th=[ 1172], 95.00th=[ 1205], 00:34:21.415 | 99.00th=[ 1319], 99.50th=[ 1418], 99.90th=[ 1450], 99.95th=[ 1450], 00:34:21.415 | 99.99th=[ 1450] 00:34:21.415 write: IOPS=720, BW=2881KiB/s (2950kB/s)(2884KiB/1001msec); 0 zone resets 00:34:21.415 slat (nsec): min=9783, max=79659, avg=32669.06, stdev=7239.15 00:34:21.415 clat (usec): min=172, max=1043, avg=601.52, stdev=146.97 00:34:21.416 lat (usec): min=184, max=1078, avg=634.19, stdev=148.04 00:34:21.416 clat percentiles (usec): 00:34:21.416 | 1.00th=[ 262], 5.00th=[ 359], 10.00th=[ 412], 20.00th=[ 478], 00:34:21.416 | 30.00th=[ 519], 40.00th=[ 578], 50.00th=[ 603], 60.00th=[ 635], 00:34:21.416 | 70.00th=[ 685], 80.00th=[ 725], 90.00th=[ 791], 95.00th=[ 840], 00:34:21.416 | 99.00th=[ 
947], 99.50th=[ 996], 99.90th=[ 1045], 99.95th=[ 1045], 00:34:21.416 | 99.99th=[ 1045] 00:34:21.416 bw ( KiB/s): min= 4096, max= 4096, per=46.78%, avg=4096.00, stdev= 0.00, samples=1 00:34:21.416 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:21.416 lat (usec) : 250=0.41%, 500=14.84%, 750=35.60%, 1000=28.39% 00:34:21.416 lat (msec) : 2=20.76% 00:34:21.416 cpu : usr=2.20%, sys=3.60%, ctx=1237, majf=0, minf=1 00:34:21.416 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:21.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.416 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.416 issued rwts: total=512,721,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.416 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:21.416 00:34:21.416 Run status group 0 (all jobs): 00:34:21.416 READ: bw=4264KiB/s (4366kB/s), 71.5KiB/s-2046KiB/s (73.2kB/s-2095kB/s), io=4396KiB (4502kB), run=1001-1031msec 00:34:21.416 WRITE: bw=8757KiB/s (8967kB/s), 1986KiB/s-2881KiB/s (2034kB/s-2950kB/s), io=9028KiB (9245kB), run=1001-1031msec 00:34:21.416 00:34:21.416 Disk stats (read/write): 00:34:21.416 nvme0n1: ios=346/512, merge=0/0, ticks=489/343, in_queue=832, util=86.97% 00:34:21.416 nvme0n2: ios=56/512, merge=0/0, ticks=1157/219, in_queue=1376, util=93.78% 00:34:21.416 nvme0n3: ios=181/512, merge=0/0, ticks=678/214, in_queue=892, util=95.25% 00:34:21.416 nvme0n4: ios=517/512, merge=0/0, ticks=1435/307, in_queue=1742, util=97.86% 00:34:21.416 12:11:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:34:21.416 [global] 00:34:21.416 thread=1 00:34:21.416 invalidate=1 00:34:21.416 rw=write 00:34:21.416 time_based=1 00:34:21.416 runtime=1 00:34:21.416 ioengine=libaio 00:34:21.416 direct=1 00:34:21.416 bs=4096 00:34:21.416 iodepth=128 00:34:21.416 norandommap=0 00:34:21.416 numjobs=1 00:34:21.416 00:34:21.416 verify_dump=1 00:34:21.416 verify_backlog=512 00:34:21.416 verify_state_save=0 00:34:21.416 do_verify=1 00:34:21.416 verify=crc32c-intel 00:34:21.416 [job0] 00:34:21.416 filename=/dev/nvme0n1 00:34:21.416 [job1] 00:34:21.416 filename=/dev/nvme0n2 00:34:21.416 [job2] 00:34:21.416 filename=/dev/nvme0n3 00:34:21.416 [job3] 00:34:21.416 filename=/dev/nvme0n4 00:34:21.416 Could not set queue depth (nvme0n1) 00:34:21.416 Could not set queue depth (nvme0n2) 00:34:21.416 Could not set queue depth (nvme0n3) 00:34:21.416 Could not set queue depth (nvme0n4) 00:34:21.680 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:21.680 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:21.680 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:21.680 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:21.680 fio-3.35 00:34:21.680 Starting 4 threads 00:34:23.083 00:34:23.083 job0: (groupid=0, jobs=1): err= 0: pid=2202416: Fri Oct 11 12:11:25 2024 00:34:23.083 read: IOPS=6715, BW=26.2MiB/s (27.5MB/s)(26.3MiB/1004msec) 00:34:23.083 slat (nsec): min=933, max=28630k, avg=64469.21, stdev=565849.78 00:34:23.083 clat (usec): min=1675, max=68238, avg=8672.49, stdev=6638.80 00:34:23.083 lat (usec): min=1680, max=68346, avg=8736.96, stdev=6675.25 
00:34:23.083 clat percentiles (usec): 00:34:23.083 | 1.00th=[ 2442], 5.00th=[ 3294], 10.00th=[ 3982], 20.00th=[ 5211], 00:34:23.083 | 30.00th=[ 5932], 40.00th=[ 6390], 50.00th=[ 7111], 60.00th=[ 8094], 00:34:23.083 | 70.00th=[ 8979], 80.00th=[10290], 90.00th=[14484], 95.00th=[17695], 00:34:23.083 | 99.00th=[40633], 99.50th=[62129], 99.90th=[68682], 99.95th=[68682], 00:34:23.083 | 99.99th=[68682] 00:34:23.083 write: IOPS=7649, BW=29.9MiB/s (31.3MB/s)(30.0MiB/1004msec); 0 zone resets 00:34:23.083 slat (nsec): min=1636, max=37415k, avg=61713.89, stdev=604236.53 00:34:23.083 clat (usec): min=782, max=101914, avg=8273.86, stdev=10502.73 00:34:23.083 lat (usec): min=851, max=101923, avg=8335.58, stdev=10568.44 00:34:23.083 clat percentiles (msec): 00:34:23.083 | 1.00th=[ 3], 5.00th=[ 4], 10.00th=[ 4], 20.00th=[ 5], 00:34:23.083 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 6], 60.00th=[ 7], 00:34:23.083 | 70.00th=[ 7], 80.00th=[ 10], 90.00th=[ 12], 95.00th=[ 19], 00:34:23.083 | 99.00th=[ 62], 99.50th=[ 86], 99.90th=[ 101], 99.95th=[ 103], 00:34:23.083 | 99.99th=[ 103] 00:34:23.083 bw ( KiB/s): min=28280, max=32832, per=34.23%, avg=30556.00, stdev=3218.75, samples=2 00:34:23.083 iops : min= 7070, max= 8208, avg=7639.00, stdev=804.69, samples=2 00:34:23.083 lat (usec) : 1000=0.03% 00:34:23.083 lat (msec) : 2=0.47%, 4=14.91%, 10=66.10%, 20=14.98%, 50=1.96% 00:34:23.083 lat (msec) : 100=1.50%, 250=0.04% 00:34:23.083 cpu : usr=5.18%, sys=7.58%, ctx=445, majf=0, minf=2 00:34:23.083 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:34:23.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:23.083 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:23.083 issued rwts: total=6742,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:23.084 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:23.084 job1: (groupid=0, jobs=1): err= 0: pid=2202417: Fri Oct 11 12:11:25 2024 00:34:23.084 read: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec) 00:34:23.084 slat (nsec): min=938, max=18796k, avg=99947.16, stdev=653086.00 00:34:23.084 clat (usec): min=1488, max=76080, avg=12084.06, stdev=8070.79 00:34:23.084 lat (usec): min=1490, max=88371, avg=12184.01, stdev=8167.51 00:34:23.084 clat percentiles (usec): 00:34:23.084 | 1.00th=[ 2737], 5.00th=[ 3458], 10.00th=[ 4015], 20.00th=[ 5866], 00:34:23.084 | 30.00th=[ 8356], 40.00th=[ 9634], 50.00th=[10945], 60.00th=[12780], 00:34:23.084 | 70.00th=[14091], 80.00th=[15664], 90.00th=[18482], 95.00th=[21103], 00:34:23.084 | 99.00th=[51643], 99.50th=[60031], 99.90th=[64750], 99.95th=[64750], 00:34:23.084 | 99.99th=[76022] 00:34:23.084 write: IOPS=4535, BW=17.7MiB/s (18.6MB/s)(17.8MiB/1007msec); 0 zone resets 00:34:23.084 slat (nsec): min=1658, max=11547k, avg=120912.96, stdev=630755.27 00:34:23.084 clat (usec): min=655, max=108159, avg=17091.49, stdev=17782.30 00:34:23.084 lat (usec): min=780, max=108170, avg=17212.40, stdev=17846.19 00:34:23.084 clat percentiles (usec): 00:34:23.084 | 1.00th=[ 1696], 5.00th=[ 3032], 10.00th=[ 3621], 20.00th=[ 4817], 00:34:23.084 | 30.00th=[ 5735], 40.00th=[ 7898], 50.00th=[ 8455], 60.00th=[ 11994], 00:34:23.084 | 70.00th=[ 23725], 80.00th=[ 29230], 90.00th=[ 35390], 95.00th=[ 53740], 00:34:23.084 | 99.00th=[105382], 99.50th=[105382], 99.90th=[105382], 99.95th=[105382], 00:34:23.084 | 99.99th=[108528] 00:34:23.084 bw ( KiB/s): min=16880, max=18632, per=19.89%, avg=17756.00, stdev=1238.85, samples=2 00:34:23.084 iops : min= 4220, max= 4658, avg=4439.00, 
stdev=309.71, samples=2 00:34:23.084 lat (usec) : 750=0.01% 00:34:23.084 lat (msec) : 2=1.05%, 4=11.61%, 10=36.33%, 20=30.71%, 50=16.67% 00:34:23.084 lat (msec) : 100=2.90%, 250=0.73% 00:34:23.084 cpu : usr=3.48%, sys=4.67%, ctx=427, majf=0, minf=1 00:34:23.084 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:34:23.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:23.084 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:23.084 issued rwts: total=4096,4567,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:23.084 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:23.084 job2: (groupid=0, jobs=1): err= 0: pid=2202418: Fri Oct 11 12:11:25 2024 00:34:23.084 read: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec) 00:34:23.084 slat (nsec): min=1010, max=10606k, avg=78078.03, stdev=565108.60 00:34:23.084 clat (usec): min=1503, max=53493, avg=9871.46, stdev=5688.65 00:34:23.084 lat (usec): min=1508, max=53503, avg=9949.53, stdev=5747.35 00:34:23.084 clat percentiles (usec): 00:34:23.084 | 1.00th=[ 1942], 5.00th=[ 3589], 10.00th=[ 5407], 20.00th=[ 6718], 00:34:23.084 | 30.00th=[ 7570], 40.00th=[ 7963], 50.00th=[ 8848], 60.00th=[ 9634], 00:34:23.084 | 70.00th=[11076], 80.00th=[12125], 90.00th=[13042], 95.00th=[17957], 00:34:23.084 | 99.00th=[37487], 99.50th=[43254], 99.90th=[49546], 99.95th=[53740], 00:34:23.084 | 99.99th=[53740] 00:34:23.084 write: IOPS=5214, BW=20.4MiB/s (21.4MB/s)(20.5MiB/1007msec); 0 zone resets 00:34:23.084 slat (nsec): min=1716, max=39335k, avg=104017.28, stdev=770644.44 00:34:23.084 clat (usec): min=2095, max=63973, avg=13710.18, stdev=12999.52 00:34:23.084 lat (usec): min=2104, max=63981, avg=13814.19, stdev=13094.45 00:34:23.084 clat percentiles (usec): 00:34:23.084 | 1.00th=[ 2802], 5.00th=[ 4752], 10.00th=[ 5342], 20.00th=[ 6325], 00:34:23.084 | 30.00th=[ 6783], 40.00th=[ 7767], 50.00th=[ 8160], 60.00th=[ 9110], 00:34:23.084 | 70.00th=[10290], 80.00th=[20317], 90.00th=[34341], 95.00th=[45876], 00:34:23.084 | 99.00th=[61080], 99.50th=[62653], 99.90th=[64226], 99.95th=[64226], 00:34:23.084 | 99.99th=[64226] 00:34:23.084 bw ( KiB/s): min=16384, max=24632, per=22.97%, avg=20508.00, stdev=5832.22, samples=2 00:34:23.084 iops : min= 4096, max= 6158, avg=5127.00, stdev=1458.05, samples=2 00:34:23.084 lat (msec) : 2=0.52%, 4=4.03%, 10=61.12%, 20=22.33%, 50=10.00% 00:34:23.084 lat (msec) : 100=2.00% 00:34:23.084 cpu : usr=3.48%, sys=6.26%, ctx=452, majf=0, minf=1 00:34:23.084 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:34:23.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:23.084 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:23.084 issued rwts: total=5120,5251,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:23.084 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:23.084 job3: (groupid=0, jobs=1): err= 0: pid=2202419: Fri Oct 11 12:11:25 2024 00:34:23.084 read: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec) 00:34:23.084 slat (nsec): min=922, max=10606k, avg=90751.69, stdev=658393.99 00:34:23.084 clat (usec): min=1541, max=43316, avg=12235.50, stdev=4338.40 00:34:23.084 lat (usec): min=1558, max=48699, avg=12326.25, stdev=4396.97 00:34:23.084 clat percentiles (usec): 00:34:23.084 | 1.00th=[ 3785], 5.00th=[ 6194], 10.00th=[ 7046], 20.00th=[ 8094], 00:34:23.084 | 30.00th=[10421], 40.00th=[11076], 50.00th=[11600], 60.00th=[12649], 00:34:23.084 | 70.00th=[13566], 80.00th=[15270], 
90.00th=[17957], 95.00th=[20579], 00:34:23.084 | 99.00th=[24249], 99.50th=[27132], 99.90th=[40633], 99.95th=[40633], 00:34:23.084 | 99.99th=[43254] 00:34:23.084 write: IOPS=4948, BW=19.3MiB/s (20.3MB/s)(19.4MiB/1006msec); 0 zone resets 00:34:23.084 slat (nsec): min=1590, max=12825k, avg=102989.22, stdev=633621.58 00:34:23.084 clat (usec): min=466, max=72626, avg=14318.03, stdev=13200.08 00:34:23.084 lat (usec): min=498, max=72849, avg=14421.02, stdev=13284.05 00:34:23.084 clat percentiles (usec): 00:34:23.084 | 1.00th=[ 1647], 5.00th=[ 2933], 10.00th=[ 5473], 20.00th=[ 6849], 00:34:23.084 | 30.00th=[ 7635], 40.00th=[ 9241], 50.00th=[10290], 60.00th=[11076], 00:34:23.084 | 70.00th=[13960], 80.00th=[17433], 90.00th=[27919], 95.00th=[40633], 00:34:23.084 | 99.00th=[66323], 99.50th=[68682], 99.90th=[72877], 99.95th=[72877], 00:34:23.084 | 99.99th=[72877] 00:34:23.084 bw ( KiB/s): min=18320, max=20480, per=21.73%, avg=19400.00, stdev=1527.35, samples=2 00:34:23.084 iops : min= 4580, max= 5120, avg=4850.00, stdev=381.84, samples=2 00:34:23.084 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.13% 00:34:23.084 lat (msec) : 2=0.87%, 4=2.33%, 10=34.28%, 20=49.97%, 50=10.04% 00:34:23.084 lat (msec) : 100=2.36% 00:34:23.084 cpu : usr=3.18%, sys=5.37%, ctx=364, majf=0, minf=1 00:34:23.084 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:34:23.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:23.084 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:23.084 issued rwts: total=4608,4978,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:23.084 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:23.084 00:34:23.084 Run status group 0 (all jobs): 00:34:23.084 READ: bw=79.8MiB/s (83.7MB/s), 15.9MiB/s-26.2MiB/s (16.7MB/s-27.5MB/s), io=80.3MiB (84.2MB), run=1004-1007msec 00:34:23.084 WRITE: bw=87.2MiB/s (91.4MB/s), 17.7MiB/s-29.9MiB/s (18.6MB/s-31.3MB/s), io=87.8MiB (92.1MB), run=1004-1007msec 00:34:23.084 00:34:23.084 Disk stats (read/write): 00:34:23.084 nvme0n1: ios=4802/5632, merge=0/0, ticks=29233/34485, in_queue=63718, util=84.67% 00:34:23.084 nvme0n2: ios=3635/3655, merge=0/0, ticks=24185/32032, in_queue=56217, util=88.19% 00:34:23.084 nvme0n3: ios=3092/3484, merge=0/0, ticks=33013/57902, in_queue=90915, util=95.78% 00:34:23.084 nvme0n4: ios=3641/3963, merge=0/0, ticks=28831/37421, in_queue=66252, util=92.47% 00:34:23.084 12:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:34:23.084 [global] 00:34:23.084 thread=1 00:34:23.084 invalidate=1 00:34:23.084 rw=randwrite 00:34:23.084 time_based=1 00:34:23.084 runtime=1 00:34:23.084 ioengine=libaio 00:34:23.084 direct=1 00:34:23.084 bs=4096 00:34:23.084 iodepth=128 00:34:23.084 norandommap=0 00:34:23.084 numjobs=1 00:34:23.084 00:34:23.084 verify_dump=1 00:34:23.084 verify_backlog=512 00:34:23.084 verify_state_save=0 00:34:23.084 do_verify=1 00:34:23.084 verify=crc32c-intel 00:34:23.084 [job0] 00:34:23.084 filename=/dev/nvme0n1 00:34:23.084 [job1] 00:34:23.084 filename=/dev/nvme0n2 00:34:23.084 [job2] 00:34:23.084 filename=/dev/nvme0n3 00:34:23.084 [job3] 00:34:23.084 filename=/dev/nvme0n4 00:34:23.084 Could not set queue depth (nvme0n1) 00:34:23.084 Could not set queue depth (nvme0n2) 00:34:23.084 Could not set queue depth (nvme0n3) 00:34:23.084 Could not set queue depth (nvme0n4) 00:34:23.347 job0: (g=0): rw=randwrite, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:23.347 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:23.347 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:23.347 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:23.347 fio-3.35 00:34:23.347 Starting 4 threads 00:34:24.755 00:34:24.755 job0: (groupid=0, jobs=1): err= 0: pid=2202939: Fri Oct 11 12:11:27 2024 00:34:24.755 read: IOPS=7634, BW=29.8MiB/s (31.3MB/s)(30.0MiB/1006msec) 00:34:24.755 slat (nsec): min=906, max=12414k, avg=61426.50, stdev=490715.86 00:34:24.755 clat (usec): min=2756, max=33235, avg=8499.04, stdev=3214.58 00:34:24.755 lat (usec): min=2765, max=33241, avg=8560.47, stdev=3242.54 00:34:24.755 clat percentiles (usec): 00:34:24.755 | 1.00th=[ 3851], 5.00th=[ 5211], 10.00th=[ 5866], 20.00th=[ 6259], 00:34:24.755 | 30.00th=[ 6783], 40.00th=[ 7046], 50.00th=[ 7635], 60.00th=[ 8160], 00:34:24.755 | 70.00th=[ 8848], 80.00th=[10421], 90.00th=[12125], 95.00th=[15139], 00:34:24.755 | 99.00th=[20841], 99.50th=[24249], 99.90th=[26346], 99.95th=[26346], 00:34:24.755 | 99.99th=[33162] 00:34:24.755 write: IOPS=8070, BW=31.5MiB/s (33.1MB/s)(31.7MiB/1006msec); 0 zone resets 00:34:24.755 slat (nsec): min=1502, max=12391k, avg=57391.91, stdev=480927.48 00:34:24.755 clat (usec): min=584, max=22967, avg=7687.37, stdev=3337.19 00:34:24.755 lat (usec): min=593, max=24276, avg=7744.76, stdev=3356.26 00:34:24.755 clat percentiles (usec): 00:34:24.755 | 1.00th=[ 2835], 5.00th=[ 4293], 10.00th=[ 4752], 20.00th=[ 5669], 00:34:24.755 | 30.00th=[ 6259], 40.00th=[ 6521], 50.00th=[ 6718], 60.00th=[ 7111], 00:34:24.755 | 70.00th=[ 7767], 80.00th=[ 9634], 90.00th=[11076], 95.00th=[15664], 00:34:24.755 | 99.00th=[20841], 99.50th=[21103], 99.90th=[22938], 99.95th=[22938], 00:34:24.755 | 99.99th=[22938] 00:34:24.755 bw ( KiB/s): min=28672, max=35264, per=29.93%, avg=31968.00, stdev=4661.25, samples=2 00:34:24.755 iops : min= 7168, max= 8816, avg=7992.00, stdev=1165.31, samples=2 00:34:24.755 lat (usec) : 750=0.02% 00:34:24.755 lat (msec) : 2=0.27%, 4=1.97%, 10=76.62%, 20=19.87%, 50=1.25% 00:34:24.755 cpu : usr=5.27%, sys=8.46%, ctx=327, majf=0, minf=1 00:34:24.755 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:34:24.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:24.755 issued rwts: total=7680,8119,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.755 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:24.755 job1: (groupid=0, jobs=1): err= 0: pid=2202940: Fri Oct 11 12:11:27 2024 00:34:24.755 read: IOPS=5456, BW=21.3MiB/s (22.3MB/s)(21.4MiB/1003msec) 00:34:24.755 slat (nsec): min=921, max=7656.2k, avg=87793.58, stdev=433228.95 00:34:24.755 clat (usec): min=1038, max=25367, avg=11155.35, stdev=3183.23 00:34:24.755 lat (usec): min=5471, max=25372, avg=11243.14, stdev=3175.81 00:34:24.755 clat percentiles (usec): 00:34:24.755 | 1.00th=[ 7308], 5.00th=[ 8356], 10.00th=[ 8586], 20.00th=[ 9110], 00:34:24.755 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10421], 00:34:24.755 | 70.00th=[10552], 80.00th=[13042], 90.00th=[15795], 95.00th=[18482], 00:34:24.755 | 99.00th=[23200], 99.50th=[25035], 99.90th=[25297], 99.95th=[25297], 00:34:24.755 | 
99.99th=[25297] 00:34:24.755 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:34:24.755 slat (nsec): min=1546, max=13624k, avg=88180.31, stdev=558382.76 00:34:24.755 clat (usec): min=5476, max=47300, avg=11675.94, stdev=6992.43 00:34:24.755 lat (usec): min=5486, max=47311, avg=11764.12, stdev=7023.72 00:34:24.755 clat percentiles (usec): 00:34:24.755 | 1.00th=[ 6980], 5.00th=[ 7963], 10.00th=[ 8225], 20.00th=[ 8455], 00:34:24.755 | 30.00th=[ 8586], 40.00th=[ 8848], 50.00th=[ 9241], 60.00th=[ 9503], 00:34:24.755 | 70.00th=[10028], 80.00th=[12518], 90.00th=[17171], 95.00th=[29230], 00:34:24.755 | 99.00th=[46400], 99.50th=[46924], 99.90th=[47449], 99.95th=[47449], 00:34:24.755 | 99.99th=[47449] 00:34:24.755 bw ( KiB/s): min=20480, max=24576, per=21.09%, avg=22528.00, stdev=2896.31, samples=2 00:34:24.755 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:34:24.755 lat (msec) : 2=0.01%, 10=58.24%, 20=37.05%, 50=4.71% 00:34:24.755 cpu : usr=2.89%, sys=5.19%, ctx=652, majf=0, minf=1 00:34:24.755 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:34:24.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:24.755 issued rwts: total=5473,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.755 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:24.755 job2: (groupid=0, jobs=1): err= 0: pid=2202941: Fri Oct 11 12:11:27 2024 00:34:24.755 read: IOPS=7125, BW=27.8MiB/s (29.2MB/s)(28.0MiB/1006msec) 00:34:24.755 slat (nsec): min=983, max=11047k, avg=72001.26, stdev=540624.37 00:34:24.755 clat (usec): min=2689, max=25568, avg=9275.08, stdev=2679.20 00:34:24.755 lat (usec): min=2698, max=25594, avg=9347.08, stdev=2705.52 00:34:24.755 clat percentiles (usec): 00:34:24.755 | 1.00th=[ 4113], 5.00th=[ 5932], 10.00th=[ 6456], 20.00th=[ 7308], 00:34:24.755 | 30.00th=[ 7832], 40.00th=[ 8455], 50.00th=[ 8979], 60.00th=[ 9372], 00:34:24.755 | 70.00th=[ 9896], 80.00th=[11207], 90.00th=[12911], 95.00th=[13960], 00:34:24.755 | 99.00th=[16909], 99.50th=[22938], 99.90th=[23200], 99.95th=[23200], 00:34:24.755 | 99.99th=[25560] 00:34:24.755 write: IOPS=7434, BW=29.0MiB/s (30.5MB/s)(29.2MiB/1006msec); 0 zone resets 00:34:24.755 slat (nsec): min=1601, max=7045.2k, avg=59182.35, stdev=354510.93 00:34:24.755 clat (usec): min=1195, max=16946, avg=8175.60, stdev=1892.00 00:34:24.755 lat (usec): min=1206, max=16948, avg=8234.78, stdev=1899.85 00:34:24.755 clat percentiles (usec): 00:34:24.755 | 1.00th=[ 3654], 5.00th=[ 5080], 10.00th=[ 5669], 20.00th=[ 6652], 00:34:24.755 | 30.00th=[ 7570], 40.00th=[ 7963], 50.00th=[ 8160], 60.00th=[ 8455], 00:34:24.755 | 70.00th=[ 8979], 80.00th=[ 9503], 90.00th=[10421], 95.00th=[11731], 00:34:24.755 | 99.00th=[12911], 99.50th=[13042], 99.90th=[15664], 99.95th=[16909], 00:34:24.755 | 99.99th=[16909] 00:34:24.755 bw ( KiB/s): min=28176, max=30640, per=27.53%, avg=29408.00, stdev=1742.31, samples=2 00:34:24.755 iops : min= 7044, max= 7660, avg=7352.00, stdev=435.58, samples=2 00:34:24.755 lat (msec) : 2=0.07%, 4=1.34%, 10=79.40%, 20=18.85%, 50=0.34% 00:34:24.755 cpu : usr=5.87%, sys=7.76%, ctx=697, majf=0, minf=1 00:34:24.755 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:34:24.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:24.755 issued rwts: 
total=7168,7479,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.755 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:24.755 job3: (groupid=0, jobs=1): err= 0: pid=2202943: Fri Oct 11 12:11:27 2024 00:34:24.755 read: IOPS=5355, BW=20.9MiB/s (21.9MB/s)(21.0MiB/1003msec) 00:34:24.755 slat (nsec): min=957, max=11274k, avg=84346.18, stdev=593036.12 00:34:24.755 clat (usec): min=1780, max=28137, avg=11269.18, stdev=3739.72 00:34:24.755 lat (usec): min=3500, max=28139, avg=11353.53, stdev=3770.16 00:34:24.755 clat percentiles (usec): 00:34:24.755 | 1.00th=[ 4621], 5.00th=[ 7308], 10.00th=[ 7635], 20.00th=[ 8586], 00:34:24.755 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[10028], 60.00th=[11207], 00:34:24.755 | 70.00th=[12387], 80.00th=[13435], 90.00th=[16581], 95.00th=[19006], 00:34:24.755 | 99.00th=[22676], 99.50th=[24773], 99.90th=[27395], 99.95th=[28181], 00:34:24.755 | 99.99th=[28181] 00:34:24.755 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:34:24.755 slat (nsec): min=1572, max=11571k, avg=85020.17, stdev=580320.10 00:34:24.755 clat (usec): min=2392, max=50861, avg=11768.37, stdev=5847.27 00:34:24.755 lat (usec): min=2399, max=50870, avg=11853.39, stdev=5890.18 00:34:24.755 clat percentiles (usec): 00:34:24.755 | 1.00th=[ 4080], 5.00th=[ 5800], 10.00th=[ 6718], 20.00th=[ 7308], 00:34:24.755 | 30.00th=[ 8160], 40.00th=[ 9110], 50.00th=[10945], 60.00th=[12518], 00:34:24.755 | 70.00th=[13173], 80.00th=[15008], 90.00th=[17171], 95.00th=[18744], 00:34:24.755 | 99.00th=[39584], 99.50th=[41681], 99.90th=[46924], 99.95th=[46924], 00:34:24.755 | 99.99th=[51119] 00:34:24.756 bw ( KiB/s): min=20480, max=24576, per=21.09%, avg=22528.00, stdev=2896.31, samples=2 00:34:24.756 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:34:24.756 lat (msec) : 2=0.01%, 4=0.75%, 10=47.38%, 20=47.82%, 50=4.02% 00:34:24.756 lat (msec) : 100=0.02% 00:34:24.756 cpu : usr=4.79%, sys=5.19%, ctx=402, majf=0, minf=1 00:34:24.756 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:34:24.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.756 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:24.756 issued rwts: total=5372,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.756 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:24.756 00:34:24.756 Run status group 0 (all jobs): 00:34:24.756 READ: bw=99.8MiB/s (105MB/s), 20.9MiB/s-29.8MiB/s (21.9MB/s-31.3MB/s), io=100MiB (105MB), run=1003-1006msec 00:34:24.756 WRITE: bw=104MiB/s (109MB/s), 21.9MiB/s-31.5MiB/s (23.0MB/s-33.1MB/s), io=105MiB (110MB), run=1003-1006msec 00:34:24.756 00:34:24.756 Disk stats (read/write): 00:34:24.756 nvme0n1: ios=6194/6597, merge=0/0, ticks=52086/48752, in_queue=100838, util=86.67% 00:34:24.756 nvme0n2: ios=4779/5120, merge=0/0, ticks=13167/13115, in_queue=26282, util=97.86% 00:34:24.756 nvme0n3: ios=5926/6144, merge=0/0, ticks=53093/48415, in_queue=101508, util=96.41% 00:34:24.756 nvme0n4: ios=4625/4815, merge=0/0, ticks=45205/46868, in_queue=92073, util=91.56% 00:34:24.756 12:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:34:24.756 12:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2203033 00:34:24.756 12:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:34:24.756 12:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:34:24.756 [global] 00:34:24.756 thread=1 00:34:24.756 invalidate=1 00:34:24.756 rw=read 00:34:24.756 time_based=1 00:34:24.756 runtime=10 00:34:24.756 ioengine=libaio 00:34:24.756 direct=1 00:34:24.756 bs=4096 00:34:24.756 iodepth=1 00:34:24.756 norandommap=1 00:34:24.756 numjobs=1 00:34:24.756 00:34:24.756 [job0] 00:34:24.756 filename=/dev/nvme0n1 00:34:24.756 [job1] 00:34:24.756 filename=/dev/nvme0n2 00:34:24.756 [job2] 00:34:24.756 filename=/dev/nvme0n3 00:34:24.756 [job3] 00:34:24.756 filename=/dev/nvme0n4 00:34:24.756 Could not set queue depth (nvme0n1) 00:34:24.756 Could not set queue depth (nvme0n2) 00:34:24.756 Could not set queue depth (nvme0n3) 00:34:24.756 Could not set queue depth (nvme0n4) 00:34:25.017 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:25.017 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:25.017 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:25.017 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:25.017 fio-3.35 00:34:25.017 Starting 4 threads 00:34:27.559 12:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:34:27.820 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=9748480, buflen=4096 00:34:27.820 fio: pid=2203454, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:27.820 12:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:34:27.820 12:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:27.820 12:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:34:27.820 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=704512, buflen=4096 00:34:27.820 fio: pid=2203442, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:28.080 fio: io_u error on file /dev/nvme0n1: Input/output error: read offset=2969600, buflen=4096 00:34:28.080 fio: pid=2203384, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:34:28.080 12:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:28.080 12:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:34:28.341 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=11042816, buflen=4096 00:34:28.341 fio: pid=2203410, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:28.341 12:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:28.341 12:11:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:34:28.341 00:34:28.341 job0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=2203384: Fri Oct 11 12:11:30 2024 00:34:28.341 read: IOPS=248, BW=991KiB/s (1015kB/s)(2900KiB/2927msec) 00:34:28.341 slat (usec): min=8, max=6998, avg=37.29, stdev=264.91 00:34:28.341 clat (usec): min=638, max=42164, avg=3992.87, stdev=10252.19 00:34:28.341 lat (usec): min=663, max=43047, avg=4020.55, stdev=10259.88 00:34:28.341 clat percentiles (usec): 00:34:28.341 | 1.00th=[ 816], 5.00th=[ 979], 10.00th=[ 1029], 20.00th=[ 1074], 00:34:28.341 | 30.00th=[ 1106], 40.00th=[ 1139], 50.00th=[ 1188], 60.00th=[ 1270], 00:34:28.341 | 70.00th=[ 1352], 80.00th=[ 1401], 90.00th=[ 1500], 95.00th=[41157], 00:34:28.341 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:28.341 | 99.99th=[42206] 00:34:28.341 bw ( KiB/s): min= 792, max= 1976, per=14.93%, avg=1144.00, stdev=477.76, samples=5 00:34:28.341 iops : min= 198, max= 494, avg=286.00, stdev=119.44, samples=5 00:34:28.341 lat (usec) : 750=0.41%, 1000=6.61% 00:34:28.341 lat (msec) : 2=85.95%, 50=6.89% 00:34:28.341 cpu : usr=0.34%, sys=0.89%, ctx=728, majf=0, minf=2 00:34:28.341 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:28.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:28.341 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:28.341 issued rwts: total=726,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:28.341 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:28.341 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2203410: Fri Oct 11 12:11:30 2024 00:34:28.341 read: IOPS=864, BW=3458KiB/s (3540kB/s)(10.5MiB/3119msec) 00:34:28.341 slat (usec): min=6, max=15038, avg=50.25, stdev=526.76 00:34:28.341 clat (usec): min=527, max=41792, avg=1090.22, stdev=1569.02 00:34:28.341 lat (usec): min=553, max=41818, avg=1140.48, stdev=1654.56 00:34:28.341 clat percentiles (usec): 00:34:28.341 | 1.00th=[ 685], 5.00th=[ 775], 10.00th=[ 832], 20.00th=[ 906], 00:34:28.341 | 30.00th=[ 955], 40.00th=[ 988], 50.00th=[ 1012], 60.00th=[ 1045], 00:34:28.341 | 70.00th=[ 1090], 80.00th=[ 1156], 90.00th=[ 1221], 95.00th=[ 1270], 00:34:28.341 | 99.00th=[ 1500], 99.50th=[ 2057], 99.90th=[41157], 99.95th=[41681], 00:34:28.341 | 99.99th=[41681] 00:34:28.341 bw ( KiB/s): min= 2728, max= 4144, per=45.77%, avg=3506.00, stdev=578.26, samples=6 00:34:28.341 iops : min= 682, max= 1036, avg=876.50, stdev=144.57, samples=6 00:34:28.341 lat (usec) : 750=3.26%, 1000=41.60% 00:34:28.341 lat (msec) : 2=54.54%, 4=0.37%, 10=0.04%, 50=0.15% 00:34:28.341 cpu : usr=1.41%, sys=3.18%, ctx=2703, majf=0, minf=1 00:34:28.341 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:28.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:28.341 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:28.341 issued rwts: total=2697,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:28.341 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:28.341 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2203442: Fri Oct 11 12:11:30 2024 00:34:28.341 read: IOPS=62, BW=247KiB/s (253kB/s)(688KiB/2781msec) 
00:34:28.341 slat (nsec): min=8859, max=43026, avg=25619.47, stdev=2658.08 00:34:28.341 clat (usec): min=710, max=42134, avg=15970.16, stdev=19448.39 00:34:28.341 lat (usec): min=736, max=42158, avg=15995.78, stdev=19448.29 00:34:28.341 clat percentiles (usec): 00:34:28.341 | 1.00th=[ 725], 5.00th=[ 988], 10.00th=[ 1057], 20.00th=[ 1106], 00:34:28.341 | 30.00th=[ 1123], 40.00th=[ 1156], 50.00th=[ 1205], 60.00th=[ 1598], 00:34:28.341 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:34:28.341 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:28.341 | 99.99th=[42206] 00:34:28.341 bw ( KiB/s): min= 96, max= 744, per=3.46%, avg=265.60, stdev=277.32, samples=5 00:34:28.341 iops : min= 24, max= 186, avg=66.40, stdev=69.33, samples=5 00:34:28.341 lat (usec) : 750=1.16%, 1000=6.36% 00:34:28.341 lat (msec) : 2=54.91%, 50=36.99% 00:34:28.341 cpu : usr=0.07%, sys=0.18%, ctx=173, majf=0, minf=2 00:34:28.341 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:28.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:28.341 complete : 0=0.6%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:28.341 issued rwts: total=173,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:28.341 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:28.341 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2203454: Fri Oct 11 12:11:30 2024 00:34:28.341 read: IOPS=933, BW=3732KiB/s (3821kB/s)(9520KiB/2551msec) 00:34:28.341 slat (nsec): min=7170, max=60957, avg=25943.92, stdev=3557.96 00:34:28.341 clat (usec): min=469, max=1520, avg=1028.45, stdev=96.18 00:34:28.341 lat (usec): min=477, max=1546, avg=1054.39, stdev=96.38 00:34:28.341 clat percentiles (usec): 00:34:28.341 | 1.00th=[ 783], 5.00th=[ 881], 10.00th=[ 922], 20.00th=[ 955], 00:34:28.341 | 30.00th=[ 979], 40.00th=[ 1004], 50.00th=[ 1020], 60.00th=[ 1045], 00:34:28.341 | 70.00th=[ 1074], 80.00th=[ 1106], 90.00th=[ 1156], 95.00th=[ 1188], 00:34:28.341 | 99.00th=[ 1270], 99.50th=[ 1319], 99.90th=[ 1385], 99.95th=[ 1418], 00:34:28.341 | 99.99th=[ 1516] 00:34:28.341 bw ( KiB/s): min= 3744, max= 3848, per=49.29%, avg=3776.00, stdev=41.18, samples=5 00:34:28.341 iops : min= 936, max= 962, avg=944.00, stdev=10.30, samples=5 00:34:28.341 lat (usec) : 500=0.04%, 750=0.67%, 1000=38.18% 00:34:28.341 lat (msec) : 2=61.07% 00:34:28.341 cpu : usr=1.22%, sys=2.59%, ctx=2381, majf=0, minf=2 00:34:28.341 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:28.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:28.341 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:28.341 issued rwts: total=2381,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:28.341 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:28.341 00:34:28.341 Run status group 0 (all jobs): 00:34:28.341 READ: bw=7660KiB/s (7844kB/s), 247KiB/s-3732KiB/s (253kB/s-3821kB/s), io=23.3MiB (24.5MB), run=2551-3119msec 00:34:28.341 00:34:28.341 Disk stats (read/write): 00:34:28.341 nvme0n1: ios=721/0, merge=0/0, ticks=2703/0, in_queue=2703, util=92.72% 00:34:28.342 nvme0n2: ios=2650/0, merge=0/0, ticks=2680/0, in_queue=2680, util=92.90% 00:34:28.342 nvme0n3: ios=173/0, merge=0/0, ticks=2759/0, in_queue=2759, util=95.58% 00:34:28.342 nvme0n4: ios=2380/0, merge=0/0, ticks=2407/0, in_queue=2407, util=96.34% 00:34:28.602 12:11:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:28.602 12:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:34:28.602 12:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:28.602 12:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:34:28.862 12:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:28.862 12:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:34:29.121 12:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:29.121 12:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:34:29.121 12:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:34:29.121 12:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 2203033 00:34:29.121 12:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:34:29.121 12:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:29.382 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:29.382 12:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:29.382 12:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:34:29.382 12:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:34:29.382 12:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:29.382 12:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:34:29.382 12:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:29.382 12:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:34:29.382 12:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:34:29.382 12:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:34:29.382 nvmf hotplug test: fio failed as expected 00:34:29.382 12:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:34:29.642 12:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:34:29.642 12:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:34:29.642 12:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:34:29.642 12:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:34:29.642 12:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:34:29.642 12:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:34:29.642 12:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:34:29.642 12:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:29.642 12:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:34:29.642 12:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:29.642 12:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:29.642 rmmod nvme_tcp 00:34:29.642 rmmod nvme_fabrics 00:34:29.642 rmmod nvme_keyring 00:34:29.642 12:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:29.642 12:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:34:29.642 12:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:34:29.642 12:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 2199786 ']' 00:34:29.642 12:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 2199786 00:34:29.643 12:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 2199786 ']' 00:34:29.643 12:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 2199786 00:34:29.643 12:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:34:29.643 12:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:29.643 12:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2199786 00:34:29.643 12:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:29.643 12:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:29.643 12:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2199786' 00:34:29.643 killing process with pid 2199786 00:34:29.643 12:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 2199786 00:34:29.643 12:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 2199786 00:34:29.903 
12:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:34:29.904 12:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:29.904 12:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:29.904 12:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:34:29.904 12:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:34:29.904 12:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:34:29.904 12:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:34:29.904 12:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:29.904 12:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:29.904 12:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:29.904 12:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:29.904 12:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:31.816 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:31.816 00:34:31.816 real 0m28.481s 00:34:31.816 user 2m14.346s 00:34:31.816 sys 0m12.613s 00:34:31.816 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:31.816 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:31.816 ************************************ 00:34:31.816 END TEST nvmf_fio_target 00:34:31.816 ************************************ 00:34:31.816 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:31.816 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:34:31.816 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:31.816 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:32.077 ************************************ 00:34:32.077 START TEST nvmf_bdevio 00:34:32.077 ************************************ 00:34:32.077 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:32.077 * Looking for test storage... 
00:34:32.077 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:32.077 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:32.077 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:32.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:32.078 --rc genhtml_branch_coverage=1 00:34:32.078 --rc genhtml_function_coverage=1 00:34:32.078 --rc genhtml_legend=1 00:34:32.078 --rc geninfo_all_blocks=1 00:34:32.078 --rc geninfo_unexecuted_blocks=1 00:34:32.078 00:34:32.078 ' 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:32.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:32.078 --rc genhtml_branch_coverage=1 00:34:32.078 --rc genhtml_function_coverage=1 00:34:32.078 --rc genhtml_legend=1 00:34:32.078 --rc geninfo_all_blocks=1 00:34:32.078 --rc geninfo_unexecuted_blocks=1 00:34:32.078 00:34:32.078 ' 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:32.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:32.078 --rc genhtml_branch_coverage=1 00:34:32.078 --rc genhtml_function_coverage=1 00:34:32.078 --rc genhtml_legend=1 00:34:32.078 --rc geninfo_all_blocks=1 00:34:32.078 --rc geninfo_unexecuted_blocks=1 00:34:32.078 00:34:32.078 ' 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:32.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:32.078 --rc genhtml_branch_coverage=1 00:34:32.078 --rc genhtml_function_coverage=1 00:34:32.078 --rc genhtml_legend=1 00:34:32.078 --rc geninfo_all_blocks=1 00:34:32.078 --rc geninfo_unexecuted_blocks=1 00:34:32.078 00:34:32.078 ' 00:34:32.078 12:11:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:32.078 12:11:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:32.078 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:32.079 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:32.079 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:32.079 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:32.079 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:32.079 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:32.079 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:32.079 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:32.079 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:32.079 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:34:32.079 12:11:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:40.216 12:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:40.216 12:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:34:40.216 12:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:40.216 12:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:40.216 12:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:40.216 12:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:40.216 12:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:40.216 12:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:34:40.216 12:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:34:40.216 12:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:34:40.216 12:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:34:40.216 12:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:34:40.216 12:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:34:40.216 12:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:34:40.216 12:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:34:40.216 12:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:40.216 12:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:40.216 12:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:40.216 12:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:40.216 12:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:40.216 12:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:40.216 12:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:40.216 12:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:40.216 12:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:40.216 12:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:40.216 12:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:40.216 12:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:40.216 12:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:40.217 12:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:40.217 12:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:40.217 12:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:40.217 12:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:40.217 12:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:40.217 12:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:40.217 12:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:40.217 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:40.217 12:11:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:40.217 12:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:40.217 12:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:40.217 12:11:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:40.217 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:40.217 Found net devices under 0000:31:00.0: cvl_0_0 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 
-- # [[ tcp == tcp ]] 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:40.217 Found net devices under 0000:31:00.1: cvl_0_1 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:40.217 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:40.217 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms 00:34:40.217 00:34:40.217 --- 10.0.0.2 ping statistics --- 00:34:40.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:40.217 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:40.217 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:40.217 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:34:40.217 00:34:40.217 --- 10.0.0.1 ping statistics --- 00:34:40.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:40.217 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:40.217 12:11:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=2208556 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 2208556 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 2208556 ']' 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:40.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:40.217 12:11:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:40.217 [2024-10-11 12:11:42.411663] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:40.217 [2024-10-11 12:11:42.412620] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:34:40.217 [2024-10-11 12:11:42.412656] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:40.218 [2024-10-11 12:11:42.492811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:40.218 [2024-10-11 12:11:42.545260] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:40.218 [2024-10-11 12:11:42.545304] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:40.218 [2024-10-11 12:11:42.545317] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:40.218 [2024-10-11 12:11:42.545328] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:40.218 [2024-10-11 12:11:42.545338] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:40.218 [2024-10-11 12:11:42.547628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:34:40.218 [2024-10-11 12:11:42.547780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:34:40.218 [2024-10-11 12:11:42.547934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:34:40.218 [2024-10-11 12:11:42.547937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:40.218 [2024-10-11 12:11:42.624550] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
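The setup traced above reduces to a little ip/iptables plumbing followed by an interrupt-mode target launch. A condensed sketch of that sequence follows; the interface names (cvl_0_0/cvl_0_1), namespace name (cvl_0_0_ns_spdk), core mask (0x78) and binary path are taken from this run, while the rpc_get_methods polling loop is only an assumed stand-in for the suite's waitforlisten helper.

#!/usr/bin/env bash
# Hedged sketch, not the suite itself: move one E810 port into a private namespace,
# address both ends, open TCP/4420, verify reachability, then start nvmf_tgt in
# interrupt mode inside the namespace (values per the trace above).
set -e
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                 # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator-side port stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                              # root ns -> namespaced target port
ip netns exec "$NS" ping -c 1 10.0.0.1          # namespaced target -> root ns

# Launch the target with interrupt mode on cores 3-6 (mask 0x78), as in the log.
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
# Assumed stand-in for waitforlisten: poll the default RPC socket until the app responds.
until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done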
00:34:40.218 [2024-10-11 12:11:42.624962] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:40.218 [2024-10-11 12:11:42.625545] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:40.218 [2024-10-11 12:11:42.626172] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:40.218 [2024-10-11 12:11:42.626205] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:40.790 12:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:40.790 12:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:34:40.790 12:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:40.790 12:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:40.790 12:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:40.790 12:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:40.790 12:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:40.790 12:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.790 12:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:40.790 [2024-10-11 12:11:43.244938] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:40.790 12:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.790 12:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:40.790 12:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.790 12:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:40.790 Malloc0 00:34:40.790 12:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.790 12:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:40.790 12:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.790 12:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:40.790 12:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.790 12:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:40.790 12:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.790 12:11:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:40.790 12:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.790 12:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:40.790 12:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.790 12:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:40.790 [2024-10-11 12:11:43.341265] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:40.790 12:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.790 12:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:34:40.790 12:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:34:40.790 12:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:34:40.790 12:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:34:40.790 12:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:40.790 12:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:40.790 { 00:34:40.790 "params": { 00:34:40.790 "name": "Nvme$subsystem", 00:34:40.790 "trtype": "$TEST_TRANSPORT", 00:34:40.790 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:40.790 "adrfam": "ipv4", 00:34:40.790 "trsvcid": "$NVMF_PORT", 00:34:40.790 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:40.790 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:40.790 "hdgst": ${hdgst:-false}, 00:34:40.790 "ddgst": ${ddgst:-false} 00:34:40.790 }, 00:34:40.790 "method": "bdev_nvme_attach_controller" 00:34:40.790 } 00:34:40.790 EOF 00:34:40.790 )") 00:34:40.790 12:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:34:40.790 12:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 00:34:40.790 12:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:34:40.790 12:11:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:34:40.790 "params": { 00:34:40.790 "name": "Nvme1", 00:34:40.790 "trtype": "tcp", 00:34:40.790 "traddr": "10.0.0.2", 00:34:40.790 "adrfam": "ipv4", 00:34:40.790 "trsvcid": "4420", 00:34:40.790 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:40.790 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:40.790 "hdgst": false, 00:34:40.790 "ddgst": false 00:34:40.790 }, 00:34:40.790 "method": "bdev_nvme_attach_controller" 00:34:40.790 }' 00:34:40.790 [2024-10-11 12:11:43.399493] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
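Before bdevio starts issuing I/O, the target side has been provisioned entirely through the RPC calls visible above, and the initiator side is described by the small JSON blob printed by gen_nvmf_target_json. A hedged re-creation of both halves is shown below, using scripts/rpc.py directly (roughly what the suite's rpc_cmd wrapper does) and a temporary config file in place of the /dev/fd/62 pipe; the file name /tmp/bdevio_nvme.json and the outer "subsystems"/"bdev" envelope are assumptions based on SPDK's usual JSON config layout.

# Target-side provisioning, mirroring the rpc_cmd calls traced above.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator-side config for bdevio: the params below are the ones printed by
# gen_nvmf_target_json; the surrounding envelope and file name are assumptions.
cat > /tmp/bdevio_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json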
00:34:40.790 [2024-10-11 12:11:43.399557] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2208633 ] 00:34:40.790 [2024-10-11 12:11:43.482016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:41.051 [2024-10-11 12:11:43.540616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:41.051 [2024-10-11 12:11:43.540777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:41.051 [2024-10-11 12:11:43.540777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:41.312 I/O targets: 00:34:41.312 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:34:41.312 00:34:41.312 00:34:41.312 CUnit - A unit testing framework for C - Version 2.1-3 00:34:41.312 http://cunit.sourceforge.net/ 00:34:41.312 00:34:41.312 00:34:41.312 Suite: bdevio tests on: Nvme1n1 00:34:41.312 Test: blockdev write read block ...passed 00:34:41.312 Test: blockdev write zeroes read block ...passed 00:34:41.312 Test: blockdev write zeroes read no split ...passed 00:34:41.312 Test: blockdev write zeroes read split ...passed 00:34:41.572 Test: blockdev write zeroes read split partial ...passed 00:34:41.572 Test: blockdev reset ...[2024-10-11 12:11:44.036049] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:41.572 [2024-10-11 12:11:44.036169] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce440 (9): Bad file descriptor 00:34:41.572 [2024-10-11 12:11:44.132055] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:34:41.572 passed 00:34:41.572 Test: blockdev write read 8 blocks ...passed 00:34:41.572 Test: blockdev write read size > 128k ...passed 00:34:41.572 Test: blockdev write read invalid size ...passed 00:34:41.572 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:41.572 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:41.572 Test: blockdev write read max offset ...passed 00:34:41.572 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:41.833 Test: blockdev writev readv 8 blocks ...passed 00:34:41.833 Test: blockdev writev readv 30 x 1block ...passed 00:34:41.833 Test: blockdev writev readv block ...passed 00:34:41.833 Test: blockdev writev readv size > 128k ...passed 00:34:41.833 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:41.833 Test: blockdev comparev and writev ...[2024-10-11 12:11:44.358319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:41.833 [2024-10-11 12:11:44.358370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:41.834 [2024-10-11 12:11:44.358387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:41.834 [2024-10-11 12:11:44.358397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:41.834 [2024-10-11 12:11:44.359075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:41.834 [2024-10-11 12:11:44.359088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:41.834 [2024-10-11 12:11:44.359103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:41.834 [2024-10-11 12:11:44.359111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:41.834 [2024-10-11 12:11:44.359754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:41.834 [2024-10-11 12:11:44.359765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:41.834 [2024-10-11 12:11:44.359780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:41.834 [2024-10-11 12:11:44.359788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:41.834 [2024-10-11 12:11:44.360457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:41.834 [2024-10-11 12:11:44.360469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:41.834 [2024-10-11 12:11:44.360492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:41.834 [2024-10-11 12:11:44.360500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:41.834 passed 00:34:41.834 Test: blockdev nvme passthru rw ...passed 00:34:41.834 Test: blockdev nvme passthru vendor specific ...[2024-10-11 12:11:44.444974] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:41.834 [2024-10-11 12:11:44.444990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:41.834 [2024-10-11 12:11:44.445379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:41.834 [2024-10-11 12:11:44.445391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:41.834 [2024-10-11 12:11:44.445781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:41.834 [2024-10-11 12:11:44.445793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:41.834 [2024-10-11 12:11:44.446188] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:41.834 [2024-10-11 12:11:44.446199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:41.834 passed 00:34:41.834 Test: blockdev nvme admin passthru ...passed 00:34:41.834 Test: blockdev copy ...passed 00:34:41.834 00:34:41.834 Run Summary: Type Total Ran Passed Failed Inactive 00:34:41.834 suites 1 1 n/a 0 0 00:34:41.834 tests 23 23 23 0 0 00:34:41.834 asserts 152 152 152 0 n/a 00:34:41.834 00:34:41.834 Elapsed time = 1.358 seconds 00:34:42.095 12:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:42.095 12:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:42.095 12:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:42.095 12:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:42.095 12:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:34:42.095 12:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:34:42.095 12:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:34:42.095 12:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:34:42.095 12:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:42.095 12:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:34:42.095 12:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:42.095 12:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:42.095 rmmod nvme_tcp 00:34:42.095 rmmod nvme_fabrics 00:34:42.095 rmmod nvme_keyring 00:34:42.095 12:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
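With the bdevio suite reporting 23 of 23 tests passed, the teardown that begins here and continues just below amounts to removing the subsystem, unloading the host-side NVMe/TCP modules, stopping the target process and undoing the firewall and namespace changes. A hedged summary of those steps (PID and names taken from this run; ip netns delete is an assumed equivalent of the remove_spdk_ns helper):

# Hedged teardown sketch; values per this run, not a drop-in replacement for nvmftestfini.
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
modprobe -v -r nvme-tcp                 # the log shows nvme_tcp, nvme_fabrics and nvme_keyring unloading
kill 2208556                            # killprocess: stop this run's nvmf_tgt (nvmfpid above)
iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop only the SPDK-tagged ACCEPT rule
ip netns delete cvl_0_0_ns_spdk         # assumed equivalent of remove_spdk_ns
ip -4 addr flush cvl_0_1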
00:34:42.095 12:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:34:42.095 12:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:34:42.095 12:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 2208556 ']' 00:34:42.095 12:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 2208556 00:34:42.095 12:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 2208556 ']' 00:34:42.095 12:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 2208556 00:34:42.095 12:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:34:42.095 12:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:42.095 12:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2208556 00:34:42.095 12:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:34:42.095 12:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:34:42.095 12:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2208556' 00:34:42.095 killing process with pid 2208556 00:34:42.095 12:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 2208556 00:34:42.095 12:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 2208556 00:34:42.356 12:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:34:42.356 12:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:42.356 12:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:42.356 12:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:34:42.356 12:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:34:42.356 12:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:34:42.356 12:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:34:42.356 12:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:42.356 12:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:42.356 12:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:42.356 12:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:42.356 12:11:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:44.902 12:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:44.902 00:34:44.902 real 0m12.538s 00:34:44.902 user 
0m10.753s 00:34:44.902 sys 0m6.588s 00:34:44.902 12:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:44.902 12:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:44.902 ************************************ 00:34:44.902 END TEST nvmf_bdevio 00:34:44.902 ************************************ 00:34:44.902 12:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:34:44.902 00:34:44.902 real 5m4.835s 00:34:44.902 user 10m10.657s 00:34:44.902 sys 2m7.517s 00:34:44.902 12:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:44.902 12:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:44.902 ************************************ 00:34:44.902 END TEST nvmf_target_core_interrupt_mode 00:34:44.902 ************************************ 00:34:44.902 12:11:47 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:34:44.902 12:11:47 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:34:44.902 12:11:47 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:44.902 12:11:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:44.902 ************************************ 00:34:44.902 START TEST nvmf_interrupt 00:34:44.902 ************************************ 00:34:44.902 12:11:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:34:44.902 * Looking for test storage... 
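At this point the harness closes out nvmf_bdevio and the whole nvmf_target_core_interrupt_mode group (the real/user/sys lines are run_test's timing), then starts the standalone nvmf_interrupt stage. To reproduce just that stage outside Jenkins, the same script can be invoked directly from a built SPDK tree; the time wrapper below is only a stand-in for run_test's bookkeeping, and root privileges plus the back-to-back E810 ports are assumed.

# Run the interrupt-mode nvmf test on its own (same path and flags as the log).
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk    # or any SPDK checkout with built binaries
time test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode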
00:34:44.902 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:44.902 12:11:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:44.902 12:11:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lcov --version 00:34:44.902 12:11:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:44.902 12:11:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:44.902 12:11:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:44.902 12:11:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:44.902 12:11:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:44.902 12:11:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:34:44.902 12:11:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:34:44.902 12:11:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:34:44.902 12:11:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:34:44.902 12:11:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:34:44.902 12:11:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:34:44.902 12:11:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:34:44.902 12:11:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:44.902 12:11:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:34:44.902 12:11:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:34:44.902 12:11:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:44.902 12:11:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:44.902 12:11:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:34:44.902 12:11:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:34:44.902 12:11:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:44.902 12:11:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:34:44.902 12:11:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:44.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:44.903 --rc genhtml_branch_coverage=1 00:34:44.903 --rc genhtml_function_coverage=1 00:34:44.903 --rc genhtml_legend=1 00:34:44.903 --rc geninfo_all_blocks=1 00:34:44.903 --rc geninfo_unexecuted_blocks=1 00:34:44.903 00:34:44.903 ' 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:44.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:44.903 --rc genhtml_branch_coverage=1 00:34:44.903 --rc genhtml_function_coverage=1 00:34:44.903 --rc genhtml_legend=1 00:34:44.903 --rc geninfo_all_blocks=1 00:34:44.903 --rc geninfo_unexecuted_blocks=1 00:34:44.903 00:34:44.903 ' 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:44.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:44.903 --rc genhtml_branch_coverage=1 00:34:44.903 --rc genhtml_function_coverage=1 00:34:44.903 --rc genhtml_legend=1 00:34:44.903 --rc geninfo_all_blocks=1 00:34:44.903 --rc geninfo_unexecuted_blocks=1 00:34:44.903 00:34:44.903 ' 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:44.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:44.903 --rc genhtml_branch_coverage=1 00:34:44.903 --rc genhtml_function_coverage=1 00:34:44.903 --rc genhtml_legend=1 00:34:44.903 --rc geninfo_all_blocks=1 00:34:44.903 --rc geninfo_unexecuted_blocks=1 00:34:44.903 00:34:44.903 ' 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:34:44.903 12:11:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:53.042 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:53.043 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:53.043 12:11:54 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:53.043 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:53.043 Found net devices under 0000:31:00.0: cvl_0_0 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:53.043 Found net devices under 0000:31:00.1: cvl_0_1 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # is_hw=yes 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:53.043 12:11:54 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:53.043 12:11:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:53.043 12:11:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:53.043 12:11:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:53.043 12:11:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:53.043 12:11:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:53.043 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:53.043 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.546 ms 00:34:53.043 00:34:53.043 --- 10.0.0.2 ping statistics --- 00:34:53.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:53.043 rtt min/avg/max/mdev = 0.546/0.546/0.546/0.000 ms 00:34:53.043 12:11:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:53.043 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:53.043 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:34:53.043 00:34:53.043 --- 10.0.0.1 ping statistics --- 00:34:53.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:53.043 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:34:53.043 12:11:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:53.043 12:11:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@448 -- # return 0 00:34:53.043 12:11:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:34:53.043 12:11:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:53.043 12:11:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:53.043 12:11:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:53.043 12:11:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:53.043 12:11:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:53.043 12:11:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:53.043 12:11:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:34:53.043 12:11:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:53.043 12:11:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:53.043 12:11:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:53.043 12:11:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # nvmfpid=2213319 00:34:53.043 12:11:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # waitforlisten 2213319 00:34:53.043 12:11:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:34:53.043 12:11:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@831 -- # '[' -z 2213319 ']' 00:34:53.043 12:11:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:53.043 12:11:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:53.043 12:11:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:53.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:53.044 12:11:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:53.044 12:11:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:53.044 [2024-10-11 12:11:55.163552] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:53.044 [2024-10-11 12:11:55.164675] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:34:53.044 [2024-10-11 12:11:55.164722] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:53.044 [2024-10-11 12:11:55.253300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:53.044 [2024-10-11 12:11:55.304953] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
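The target for this stage comes up with -m 0x3, which is why exactly two reactors appear (cores 0 and 1), just as the earlier -m 0x78 instance produced reactors on cores 3 through 6. The small helper below is purely illustrative (it is not part of the suite) and simply decodes which cores an SPDK cpumask selects:

# Illustrative only: print the core indices selected by an SPDK -m/--cpumask value.
decode_cpumask() {
  local mask=$(( $1 ))
  local i
  for (( i = 0; i < 64; i++ )); do
    (( (mask >> i) & 1 )) && echo "core $i"
  done
}
decode_cpumask 0x3     # -> core 0, core 1   (this nvmf_tgt instance)
decode_cpumask 0x78    # -> core 3 .. core 6 (the bdevio-stage target earlier in the log)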
00:34:53.044 [2024-10-11 12:11:55.305000] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:53.044 [2024-10-11 12:11:55.305008] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:53.044 [2024-10-11 12:11:55.305015] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:53.044 [2024-10-11 12:11:55.305022] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:53.044 [2024-10-11 12:11:55.306851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:53.044 [2024-10-11 12:11:55.306855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:53.044 [2024-10-11 12:11:55.383300] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:53.044 [2024-10-11 12:11:55.384094] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:53.044 [2024-10-11 12:11:55.384322] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:53.304 12:11:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:53.304 12:11:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # return 0 00:34:53.304 12:11:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:53.304 12:11:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:53.304 12:11:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:53.565 12:11:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:53.565 12:11:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:34:53.565 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:34:53.565 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:34:53.565 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:34:53.565 5000+0 records in 00:34:53.565 5000+0 records out 00:34:53.565 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0188545 s, 543 MB/s 00:34:53.565 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:34:53.565 12:11:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:53.565 12:11:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:53.565 AIO0 00:34:53.565 12:11:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:53.565 12:11:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:34:53.565 12:11:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:53.565 12:11:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:53.565 [2024-10-11 12:11:56.095885] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:53.565 12:11:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:53.565 12:11:56 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:53.565 12:11:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:53.565 12:11:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:53.565 12:11:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:53.565 12:11:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:34:53.565 12:11:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:53.565 12:11:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:53.565 12:11:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:53.565 12:11:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:53.565 12:11:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:53.565 12:11:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:53.565 [2024-10-11 12:11:56.140367] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:53.565 12:11:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:53.565 12:11:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:34:53.565 12:11:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2213319 0 00:34:53.565 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2213319 0 idle 00:34:53.565 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2213319 00:34:53.565 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:53.565 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:53.565 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:53.565 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:53.565 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:53.565 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:53.565 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:53.565 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:53.565 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:53.565 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2213319 -w 256 00:34:53.565 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:53.826 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2213319 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.30 reactor_0' 00:34:53.826 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2213319 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.30 reactor_0 00:34:53.826 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:53.826 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:53.826 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:53.826 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:34:53.826 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:53.826 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:53.826 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:53.826 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:53.826 12:11:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:34:53.826 12:11:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2213319 1 00:34:53.826 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2213319 1 idle 00:34:53.826 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2213319 00:34:53.826 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:53.826 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:53.826 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:53.826 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:53.826 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:53.826 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:53.826 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:53.826 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:53.826 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:53.826 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2213319 -w 256 00:34:53.826 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:53.826 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2213323 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1' 00:34:53.826 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2213323 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1 00:34:53.826 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:53.826 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:53.826 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:53.826 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:53.826 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:53.826 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:53.826 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:53.826 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:53.826 12:11:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:34:53.826 12:11:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=2213546 00:34:53.826 12:11:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:34:53.826 12:11:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:34:53.826 12:11:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 
0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:53.826 12:11:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2213319 0 00:34:54.087 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2213319 0 busy 00:34:54.087 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2213319 00:34:54.087 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:54.087 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:34:54.087 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:34:54.087 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:54.087 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:34:54.087 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:54.087 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:54.087 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:54.087 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2213319 -w 256 00:34:54.087 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:54.087 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2213319 root 20 0 128.2g 44928 32256 S 13.3 0.0 0:00.32 reactor_0' 00:34:54.087 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2213319 root 20 0 128.2g 44928 32256 S 13.3 0.0 0:00.32 reactor_0 00:34:54.087 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:54.087 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:54.087 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=13.3 00:34:54.087 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=13 00:34:54.087 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:34:54.087 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:34:54.087 12:11:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:34:55.169 12:11:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:34:55.169 12:11:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:55.169 12:11:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2213319 -w 256 00:34:55.169 12:11:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:55.431 12:11:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2213319 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:02.62 reactor_0' 00:34:55.431 12:11:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2213319 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:02.62 reactor_0 00:34:55.431 12:11:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:55.431 12:11:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:55.431 12:11:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:34:55.431 12:11:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:34:55.431 12:11:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:34:55.431 12:11:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( 
cpu_rate < busy_threshold )) 00:34:55.431 12:11:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:34:55.431 12:11:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:55.431 12:11:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:34:55.431 12:11:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:34:55.431 12:11:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2213319 1 00:34:55.431 12:11:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2213319 1 busy 00:34:55.431 12:11:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2213319 00:34:55.431 12:11:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:55.431 12:11:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:34:55.431 12:11:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:34:55.431 12:11:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:55.431 12:11:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:34:55.431 12:11:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:55.431 12:11:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:55.431 12:11:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:55.431 12:11:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2213319 -w 256 00:34:55.431 12:11:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:55.431 12:11:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2213323 root 20 0 128.2g 44928 32256 R 93.3 0.0 0:01.34 reactor_1' 00:34:55.431 12:11:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2213323 root 20 0 128.2g 44928 32256 R 93.3 0.0 0:01.34 reactor_1 00:34:55.431 12:11:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:55.431 12:11:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:55.431 12:11:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.3 00:34:55.431 12:11:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:34:55.431 12:11:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:34:55.431 12:11:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:34:55.431 12:11:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:34:55.431 12:11:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:55.431 12:11:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 2213546 00:35:05.429 Initializing NVMe Controllers 00:35:05.429 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:05.429 Controller IO queue size 256, less than required. 00:35:05.429 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:05.429 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:35:05.429 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:35:05.429 Initialization complete. Launching workers. 
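The reactor_is_busy and reactor_is_idle checks traced above decide a reactor's state by taking one batch-mode top sample of the target's threads and comparing the %CPU column against a threshold (busy means >= 30% while spdk_nvme_perf runs; idle means <= 30% otherwise). A condensed sketch of that check using the same top/grep/sed/awk pipeline seen in the trace; the helper name and single-threshold form are illustrative, not the harness's exact function:

    # Illustrative condensation of interrupt/common.sh's reactor state check.
    check_reactor() {
        local pid=$1 idx=$2 threshold=$3      # e.g. check_reactor 2213319 0 30
        local row cpu
        # One batch-mode top sample of the target's threads, filtered to reactor_<idx>.
        row=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx")
        cpu=$(echo "$row" | sed -e 's/^\s*//g' | awk '{print $9}')   # %CPU field
        cpu=${cpu%.*}                                                # 13.3 -> 13
        if (( cpu >= threshold )); then echo busy; else echo idle; fi
    }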
00:35:05.429 ======================================================== 00:35:05.429 Latency(us) 00:35:05.429 Device Information : IOPS MiB/s Average min max 00:35:05.429 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 18583.29 72.59 13780.12 4117.32 32806.60 00:35:05.429 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 19457.49 76.01 13158.29 7812.01 30465.54 00:35:05.429 ======================================================== 00:35:05.429 Total : 38040.79 148.60 13462.06 4117.32 32806.60 00:35:05.429 00:35:05.429 12:12:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:35:05.429 12:12:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2213319 0 00:35:05.429 12:12:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2213319 0 idle 00:35:05.429 12:12:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2213319 00:35:05.429 12:12:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:05.429 12:12:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:05.429 12:12:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:05.429 12:12:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:05.429 12:12:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:05.429 12:12:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:05.429 12:12:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:05.429 12:12:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:05.429 12:12:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:05.429 12:12:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:05.429 12:12:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2213319 -w 256 00:35:05.429 12:12:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2213319 root 20 0 128.2g 44928 32256 S 6.7 0.0 0:20.31 reactor_0' 00:35:05.429 12:12:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2213319 root 20 0 128.2g 44928 32256 S 6.7 0.0 0:20.31 reactor_0 00:35:05.429 12:12:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:05.429 12:12:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:05.429 12:12:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:35:05.429 12:12:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:35:05.429 12:12:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:05.429 12:12:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:05.429 12:12:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:05.429 12:12:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:05.429 12:12:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:35:05.429 12:12:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2213319 1 00:35:05.429 12:12:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2213319 1 idle 00:35:05.429 12:12:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2213319 00:35:05.429 12:12:06 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:35:05.429 12:12:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:05.429 12:12:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:05.429 12:12:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:05.429 12:12:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:05.429 12:12:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:05.429 12:12:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:05.429 12:12:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:05.429 12:12:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:05.429 12:12:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2213319 -w 256 00:35:05.429 12:12:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:05.429 12:12:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2213323 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1' 00:35:05.429 12:12:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2213323 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1 00:35:05.429 12:12:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:05.429 12:12:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:05.429 12:12:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:05.429 12:12:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:05.429 12:12:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:05.429 12:12:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:05.429 12:12:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:05.429 12:12:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:05.429 12:12:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:05.429 12:12:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:35:05.429 12:12:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1198 -- # local i=0 00:35:05.429 12:12:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:35:05.429 12:12:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:35:05.429 12:12:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1205 -- # sleep 2 00:35:07.343 12:12:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:35:07.343 12:12:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:35:07.343 12:12:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:35:07.343 12:12:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:35:07.343 12:12:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:35:07.343 12:12:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # return 0 00:35:07.343 12:12:09 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:35:07.343 12:12:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2213319 0 00:35:07.343 12:12:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2213319 0 idle 00:35:07.343 12:12:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2213319 00:35:07.343 12:12:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:07.343 12:12:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:07.343 12:12:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:07.343 12:12:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:07.343 12:12:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:07.343 12:12:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:07.343 12:12:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:07.343 12:12:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:07.343 12:12:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:07.343 12:12:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2213319 -w 256 00:35:07.343 12:12:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:07.604 12:12:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2213319 root 20 0 128.2g 79488 32256 S 6.2 0.1 0:20.71 reactor_0' 00:35:07.604 12:12:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2213319 root 20 0 128.2g 79488 32256 S 6.2 0.1 0:20.71 reactor_0 00:35:07.604 12:12:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:07.604 12:12:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:07.604 12:12:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.2 00:35:07.604 12:12:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:35:07.604 12:12:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:07.604 12:12:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:07.604 12:12:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:07.604 12:12:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:07.604 12:12:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:35:07.604 12:12:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2213319 1 00:35:07.604 12:12:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2213319 1 idle 00:35:07.604 12:12:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2213319 00:35:07.604 12:12:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:07.604 12:12:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:07.604 12:12:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:07.604 12:12:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:07.604 12:12:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:07.604 12:12:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:07.604 12:12:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
00:35:07.604 12:12:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:07.604 12:12:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:07.604 12:12:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2213319 -w 256 00:35:07.604 12:12:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:07.604 12:12:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2213323 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.15 reactor_1' 00:35:07.604 12:12:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2213323 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.15 reactor_1 00:35:07.604 12:12:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:07.604 12:12:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:07.604 12:12:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:07.604 12:12:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:07.605 12:12:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:07.605 12:12:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:07.605 12:12:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:07.605 12:12:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:07.605 12:12:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:07.866 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:07.866 12:12:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:07.866 12:12:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1219 -- # local i=0 00:35:07.866 12:12:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:35:07.866 12:12:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:07.866 12:12:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:07.866 12:12:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:35:07.866 12:12:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # return 0 00:35:07.866 12:12:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:35:07.866 12:12:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:35:07.866 12:12:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@514 -- # nvmfcleanup 00:35:07.866 12:12:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:35:07.866 12:12:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:07.866 12:12:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:35:07.866 12:12:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:07.866 12:12:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:07.866 rmmod nvme_tcp 00:35:07.866 rmmod nvme_fabrics 00:35:07.866 rmmod nvme_keyring 00:35:07.866 12:12:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:07.866 12:12:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:35:07.866 12:12:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:35:07.866 12:12:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@515 -- # '[' -n 
2213319 ']' 00:35:07.866 12:12:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # killprocess 2213319 00:35:07.866 12:12:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@950 -- # '[' -z 2213319 ']' 00:35:07.866 12:12:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # kill -0 2213319 00:35:07.866 12:12:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # uname 00:35:07.866 12:12:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:07.866 12:12:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2213319 00:35:08.127 12:12:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:08.127 12:12:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:08.127 12:12:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2213319' 00:35:08.127 killing process with pid 2213319 00:35:08.127 12:12:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@969 -- # kill 2213319 00:35:08.127 12:12:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@974 -- # wait 2213319 00:35:08.127 12:12:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:35:08.127 12:12:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:35:08.127 12:12:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:35:08.127 12:12:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:35:08.127 12:12:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-save 00:35:08.127 12:12:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:35:08.127 12:12:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-restore 00:35:08.127 12:12:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:08.127 12:12:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:08.127 12:12:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:08.127 12:12:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:08.127 12:12:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:10.674 12:12:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:10.674 00:35:10.674 real 0m25.689s 00:35:10.674 user 0m40.347s 00:35:10.674 sys 0m10.073s 00:35:10.674 12:12:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:10.674 12:12:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:10.674 ************************************ 00:35:10.674 END TEST nvmf_interrupt 00:35:10.674 ************************************ 00:35:10.674 00:35:10.674 real 30m3.147s 00:35:10.674 user 60m58.509s 00:35:10.674 sys 10m20.878s 00:35:10.674 12:12:12 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:10.674 12:12:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:10.674 ************************************ 00:35:10.674 END TEST nvmf_tcp 00:35:10.674 ************************************ 00:35:10.674 12:12:12 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:35:10.674 12:12:12 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:10.674 12:12:12 -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:10.674 12:12:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:10.674 12:12:12 -- common/autotest_common.sh@10 -- # set +x 00:35:10.674 ************************************ 00:35:10.674 START TEST spdkcli_nvmf_tcp 00:35:10.674 ************************************ 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:10.674 * Looking for test storage... 00:35:10.674 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:10.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:10.674 --rc genhtml_branch_coverage=1 00:35:10.674 --rc genhtml_function_coverage=1 00:35:10.674 --rc genhtml_legend=1 00:35:10.674 --rc geninfo_all_blocks=1 00:35:10.674 --rc geninfo_unexecuted_blocks=1 00:35:10.674 00:35:10.674 ' 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:10.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:10.674 --rc genhtml_branch_coverage=1 00:35:10.674 --rc genhtml_function_coverage=1 00:35:10.674 --rc genhtml_legend=1 00:35:10.674 --rc geninfo_all_blocks=1 00:35:10.674 --rc geninfo_unexecuted_blocks=1 00:35:10.674 00:35:10.674 ' 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:10.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:10.674 --rc genhtml_branch_coverage=1 00:35:10.674 --rc genhtml_function_coverage=1 00:35:10.674 --rc genhtml_legend=1 00:35:10.674 --rc geninfo_all_blocks=1 00:35:10.674 --rc geninfo_unexecuted_blocks=1 00:35:10.674 00:35:10.674 ' 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:10.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:10.674 --rc genhtml_branch_coverage=1 00:35:10.674 --rc genhtml_function_coverage=1 00:35:10.674 --rc genhtml_legend=1 00:35:10.674 --rc geninfo_all_blocks=1 00:35:10.674 --rc geninfo_unexecuted_blocks=1 00:35:10.674 00:35:10.674 ' 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:35:10.674 
12:12:13 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:10.674 12:12:13 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:10.675 12:12:13 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:10.675 12:12:13 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:10.675 12:12:13 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:10.675 12:12:13 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:10.675 12:12:13 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:35:10.675 12:12:13 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:10.675 12:12:13 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:35:10.675 12:12:13 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:10.675 12:12:13 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:10.675 12:12:13 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:10.675 12:12:13 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:10.675 12:12:13 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:10.675 12:12:13 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:10.675 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:10.675 12:12:13 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:10.675 12:12:13 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:10.675 12:12:13 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:10.675 12:12:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:35:10.675 12:12:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:35:10.675 12:12:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:35:10.675 12:12:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:35:10.675 12:12:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:10.675 12:12:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:10.675 12:12:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:35:10.675 12:12:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2217433 00:35:10.675 12:12:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2217433 00:35:10.675 12:12:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 2217433 ']' 00:35:10.675 12:12:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:10.675 12:12:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:35:10.675 12:12:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:10.675 12:12:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:10.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:10.675 12:12:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:10.675 12:12:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:10.675 [2024-10-11 12:12:13.294130] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
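The spdkcli_nvmf_tcp test that begins here configures the target through scripts/spdkcli.py rather than raw RPC calls, batching its commands through spdkcli_job.py (the full quoted command list appears just below). A sketch of issuing a few of those same commands one at a time, assuming spdkcli.py accepts a whole command passed as a single argument, the same way the ll /nvmf call later in this log does:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Create a malloc bdev, the TCP transport, a subsystem and a listener,
    # mirroring entries from the spdkcli_job.py list below, then list the
    # /nvmf tree the way the check_match step does.
    "$SPDK_DIR/scripts/spdkcli.py" "/bdevs/malloc create 32 512 Malloc1"
    "$SPDK_DIR/scripts/spdkcli.py" "nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192"
    "$SPDK_DIR/scripts/spdkcli.py" "/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True"
    "$SPDK_DIR/scripts/spdkcli.py" "/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4"
    "$SPDK_DIR/scripts/spdkcli.py" ll /nvmf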
00:35:10.675 [2024-10-11 12:12:13.294204] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2217433 ] 00:35:10.935 [2024-10-11 12:12:13.377718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:10.936 [2024-10-11 12:12:13.430581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:10.936 [2024-10-11 12:12:13.430587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:11.507 12:12:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:11.507 12:12:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:35:11.507 12:12:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:35:11.507 12:12:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:11.507 12:12:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:11.507 12:12:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:35:11.507 12:12:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:35:11.507 12:12:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:35:11.507 12:12:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:11.507 12:12:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:11.507 12:12:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:35:11.507 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:35:11.507 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:35:11.507 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:35:11.507 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:35:11.507 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:35:11.507 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:35:11.507 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:11.507 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:35:11.507 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:35:11.507 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:11.507 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:11.507 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:35:11.507 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:11.507 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:11.507 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:35:11.507 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:35:11.507 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:11.507 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:11.507 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:11.507 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:35:11.507 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:35:11.507 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:11.507 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:35:11.507 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:11.507 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:35:11.507 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:35:11.507 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:35:11.507 ' 00:35:14.806 [2024-10-11 12:12:16.829184] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:15.746 [2024-10-11 12:12:18.189406] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:35:18.286 [2024-10-11 12:12:20.712662] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:35:20.827 [2024-10-11 12:12:22.914881] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:35:22.208 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:35:22.208 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:35:22.208 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:35:22.208 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:35:22.208 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:35:22.208 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:35:22.208 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:35:22.208 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:22.208 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:35:22.208 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:35:22.208 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:22.208 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:22.208 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:35:22.208 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:22.208 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:22.208 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:35:22.208 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:22.208 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:22.208 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:22.208 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:22.208 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:35:22.208 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:35:22.208 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:22.208 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:35:22.208 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:22.208 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:35:22.208 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:35:22.208 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:35:22.208 12:12:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:35:22.208 12:12:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:22.208 12:12:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:22.208 12:12:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:35:22.208 12:12:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:22.208 12:12:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:22.208 12:12:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:35:22.208 12:12:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:35:22.469 12:12:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:35:22.729 12:12:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:35:22.729 12:12:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:35:22.729 12:12:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:22.729 12:12:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:22.729 
12:12:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:35:22.729 12:12:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:22.729 12:12:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:22.729 12:12:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:35:22.729 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:35:22.729 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:22.729 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:35:22.729 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:35:22.729 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:35:22.729 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:35:22.729 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:22.729 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:35:22.729 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:35:22.729 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:35:22.729 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:35:22.729 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:35:22.729 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:35:22.729 ' 00:35:29.314 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:35:29.314 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:35:29.314 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:29.314 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:35:29.314 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:35:29.314 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:35:29.314 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:35:29.314 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:29.314 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:35:29.314 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:35:29.314 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:35:29.314 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:35:29.314 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:35:29.314 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:35:29.314 12:12:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:35:29.314 12:12:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:29.314 12:12:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:29.314 
12:12:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2217433 00:35:29.314 12:12:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 2217433 ']' 00:35:29.314 12:12:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 2217433 00:35:29.314 12:12:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:35:29.314 12:12:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:29.314 12:12:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2217433 00:35:29.314 12:12:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:29.314 12:12:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:29.314 12:12:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2217433' 00:35:29.314 killing process with pid 2217433 00:35:29.314 12:12:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 2217433 00:35:29.314 12:12:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 2217433 00:35:29.314 12:12:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:35:29.314 12:12:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:35:29.314 12:12:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2217433 ']' 00:35:29.314 12:12:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2217433 00:35:29.314 12:12:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 2217433 ']' 00:35:29.314 12:12:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 2217433 00:35:29.314 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2217433) - No such process 00:35:29.314 12:12:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 2217433 is not found' 00:35:29.314 Process with pid 2217433 is not found 00:35:29.314 12:12:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:35:29.314 12:12:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:35:29.314 12:12:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:35:29.314 00:35:29.314 real 0m18.098s 00:35:29.314 user 0m40.215s 00:35:29.314 sys 0m0.858s 00:35:29.314 12:12:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:29.314 12:12:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:29.314 ************************************ 00:35:29.314 END TEST spdkcli_nvmf_tcp 00:35:29.314 ************************************ 00:35:29.314 12:12:31 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:29.314 12:12:31 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:29.314 12:12:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:29.314 12:12:31 -- common/autotest_common.sh@10 -- # set +x 00:35:29.314 ************************************ 00:35:29.314 START TEST nvmf_identify_passthru 00:35:29.314 ************************************ 00:35:29.314 12:12:31 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:29.314 * Looking for test 
storage... 00:35:29.314 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:29.314 12:12:31 nvmf_identify_passthru -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:29.314 12:12:31 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lcov --version 00:35:29.314 12:12:31 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:29.314 12:12:31 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:29.314 12:12:31 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:29.314 12:12:31 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:29.314 12:12:31 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:29.314 12:12:31 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:35:29.314 12:12:31 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:35:29.314 12:12:31 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:35:29.314 12:12:31 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:35:29.314 12:12:31 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:35:29.314 12:12:31 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:35:29.314 12:12:31 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:35:29.314 12:12:31 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:29.314 12:12:31 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:35:29.314 12:12:31 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:35:29.314 12:12:31 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:29.314 12:12:31 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:29.314 12:12:31 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:35:29.314 12:12:31 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:35:29.314 12:12:31 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:29.314 12:12:31 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:35:29.314 12:12:31 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:35:29.314 12:12:31 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:35:29.314 12:12:31 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:35:29.314 12:12:31 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:29.314 12:12:31 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:35:29.314 12:12:31 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:35:29.314 12:12:31 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:29.314 12:12:31 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:29.314 12:12:31 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:35:29.314 12:12:31 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:29.314 12:12:31 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:29.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:29.314 --rc genhtml_branch_coverage=1 00:35:29.314 --rc genhtml_function_coverage=1 00:35:29.314 --rc genhtml_legend=1 00:35:29.314 --rc geninfo_all_blocks=1 00:35:29.314 --rc geninfo_unexecuted_blocks=1 00:35:29.314 00:35:29.314 ' 00:35:29.314 12:12:31 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:29.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:29.314 --rc genhtml_branch_coverage=1 00:35:29.314 --rc genhtml_function_coverage=1 00:35:29.314 --rc genhtml_legend=1 00:35:29.314 --rc geninfo_all_blocks=1 00:35:29.314 --rc geninfo_unexecuted_blocks=1 00:35:29.314 00:35:29.314 ' 00:35:29.314 12:12:31 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:29.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:29.314 --rc genhtml_branch_coverage=1 00:35:29.314 --rc genhtml_function_coverage=1 00:35:29.314 --rc genhtml_legend=1 00:35:29.314 --rc geninfo_all_blocks=1 00:35:29.314 --rc geninfo_unexecuted_blocks=1 00:35:29.314 00:35:29.314 ' 00:35:29.314 12:12:31 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:29.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:29.314 --rc genhtml_branch_coverage=1 00:35:29.314 --rc genhtml_function_coverage=1 00:35:29.314 --rc genhtml_legend=1 00:35:29.314 --rc geninfo_all_blocks=1 00:35:29.314 --rc geninfo_unexecuted_blocks=1 00:35:29.315 00:35:29.315 ' 00:35:29.315 12:12:31 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:29.315 12:12:31 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:35:29.315 12:12:31 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:29.315 12:12:31 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:29.315 12:12:31 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:29.315 12:12:31 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:35:29.315 12:12:31 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:29.315 12:12:31 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:29.315 12:12:31 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:29.315 12:12:31 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:29.315 12:12:31 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:29.315 12:12:31 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:29.315 12:12:31 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:29.315 12:12:31 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:29.315 12:12:31 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:29.315 12:12:31 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:29.315 12:12:31 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:29.315 12:12:31 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:29.315 12:12:31 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:29.315 12:12:31 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:29.315 12:12:31 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:29.315 12:12:31 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:29.315 12:12:31 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:29.315 12:12:31 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:29.315 12:12:31 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:29.315 12:12:31 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:29.315 12:12:31 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:29.315 12:12:31 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:29.315 12:12:31 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:35:29.315 12:12:31 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:29.315 12:12:31 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:29.315 12:12:31 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:29.315 12:12:31 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:29.315 12:12:31 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:29.315 12:12:31 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:29.315 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:29.315 12:12:31 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:29.315 12:12:31 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:29.315 12:12:31 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:29.315 12:12:31 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:29.315 12:12:31 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:29.315 12:12:31 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:29.315 12:12:31 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:29.315 12:12:31 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:29.315 12:12:31 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:29.315 12:12:31 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:29.315 12:12:31 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:29.315 12:12:31 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:29.315 12:12:31 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:29.315 12:12:31 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:35:29.315 12:12:31 nvmf_identify_passthru -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:35:29.315 12:12:31 nvmf_identify_passthru -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:29.315 12:12:31 nvmf_identify_passthru -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:29.315 12:12:31 nvmf_identify_passthru -- nvmf/common.sh@436 -- # local -g is_hw=no 00:35:29.315 12:12:31 nvmf_identify_passthru -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:29.315 12:12:31 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:29.315 12:12:31 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:29.315 12:12:31 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:29.315 12:12:31 nvmf_identify_passthru -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:29.315 12:12:31 nvmf_identify_passthru -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:35:29.315 12:12:31 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:35:29.315 12:12:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:35:37.460 12:12:38 
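The e810, x722 and mlx arrays declared here and populated just below are nothing more than tables of PCI vendor:device IDs that gather_supported_nvmf_pci_devs matches against the bus to pick the test NICs. A rough, hedged equivalent of that lookup with plain lspci is shown below; the ID set is copied from the trace, but the grep-based approach (and the omission of the cvl_* renaming the harness performs afterwards) is this sketch's own simplification.

  # Find candidate NVMe-oF test NICs by vendor:device ID (Intel E810/X722, Mellanox).
  lspci -Dnn | grep -Ei '8086:(1592|159b|37d2)|15b3:(1013|1015|1017|1019|101b|101d|1021|a2d6|a2dc)'
  # The kernel name of each port then comes from sysfs, as the script does:
  ls /sys/bus/pci/devices/0000:31:00.0/net/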
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:37.460 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:37.460 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:37.460 Found net devices under 0000:31:00.0: cvl_0_0 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:37.460 Found net devices under 0000:31:00.1: cvl_0_1 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@440 -- # is_hw=yes 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:37.460 12:12:38 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:37.460 12:12:38 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:37.460 12:12:39 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:37.460 12:12:39 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:37.460 12:12:39 nvmf_identify_passthru -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:37.460 12:12:39 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:37.460 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:37.460 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.486 ms 00:35:37.460 00:35:37.460 --- 10.0.0.2 ping statistics --- 00:35:37.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:37.460 rtt min/avg/max/mdev = 0.486/0.486/0.486/0.000 ms 00:35:37.460 12:12:39 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:37.460 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
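That block is the whole point-to-point TCP fixture for this test: one port of the NIC (cvl_0_0) is moved into a private network namespace and addressed as the target side (10.0.0.2), the other port (cvl_0_1) stays in the root namespace as the initiator (10.0.0.1), an iptables rule opens TCP port 4420, and a ping in each direction proves the path before any NVMe traffic flows. Condensed into plain commands, as a restatement rather than the harness itself (interface names, addresses and the namespace name are taken from the log):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator to target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target to initiator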
00:35:37.461 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:35:37.461 00:35:37.461 --- 10.0.0.1 ping statistics --- 00:35:37.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:37.461 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:35:37.461 12:12:39 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:37.461 12:12:39 nvmf_identify_passthru -- nvmf/common.sh@448 -- # return 0 00:35:37.461 12:12:39 nvmf_identify_passthru -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:35:37.461 12:12:39 nvmf_identify_passthru -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:37.461 12:12:39 nvmf_identify_passthru -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:37.461 12:12:39 nvmf_identify_passthru -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:37.461 12:12:39 nvmf_identify_passthru -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:37.461 12:12:39 nvmf_identify_passthru -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:37.461 12:12:39 nvmf_identify_passthru -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:37.461 12:12:39 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:35:37.461 12:12:39 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:37.461 12:12:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:37.461 12:12:39 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:35:37.461 12:12:39 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:35:37.461 12:12:39 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:35:37.461 12:12:39 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:35:37.461 12:12:39 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:35:37.461 12:12:39 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:35:37.461 12:12:39 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:35:37.461 12:12:39 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:35:37.461 12:12:39 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:37.461 12:12:39 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:35:37.461 12:12:39 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:35:37.461 12:12:39 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:35:37.461 12:12:39 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:65:00.0 00:35:37.461 12:12:39 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:35:37.461 12:12:39 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:35:37.461 12:12:39 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:35:37.461 12:12:39 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:35:37.461 12:12:39 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:35:37.461 12:12:39 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=S64GNE0R605494 00:35:37.461 12:12:39 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:35:37.461 12:12:39 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:35:37.461 12:12:39 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:35:37.722 12:12:40 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:35:37.722 12:12:40 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:35:37.722 12:12:40 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:37.722 12:12:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:37.722 12:12:40 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:35:37.722 12:12:40 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:37.722 12:12:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:37.722 12:12:40 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2224911 00:35:37.722 12:12:40 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:37.722 12:12:40 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:35:37.722 12:12:40 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2224911 00:35:37.722 12:12:40 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 2224911 ']' 00:35:37.722 12:12:40 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:37.722 12:12:40 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:37.722 12:12:40 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:37.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:37.722 12:12:40 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:37.722 12:12:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:37.722 [2024-10-11 12:12:40.312109] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:35:37.722 [2024-10-11 12:12:40.312166] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:37.722 [2024-10-11 12:12:40.398743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:37.983 [2024-10-11 12:12:40.440250] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:37.983 [2024-10-11 12:12:40.440290] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:35:37.983 [2024-10-11 12:12:40.440298] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:37.983 [2024-10-11 12:12:40.440304] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:37.983 [2024-10-11 12:12:40.440311] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:37.983 [2024-10-11 12:12:40.442290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:37.983 [2024-10-11 12:12:40.442526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:37.983 [2024-10-11 12:12:40.442666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:37.983 [2024-10-11 12:12:40.442667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:38.555 12:12:41 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:38.556 12:12:41 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:35:38.556 12:12:41 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:35:38.556 12:12:41 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.556 12:12:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:38.556 INFO: Log level set to 20 00:35:38.556 INFO: Requests: 00:35:38.556 { 00:35:38.556 "jsonrpc": "2.0", 00:35:38.556 "method": "nvmf_set_config", 00:35:38.556 "id": 1, 00:35:38.556 "params": { 00:35:38.556 "admin_cmd_passthru": { 00:35:38.556 "identify_ctrlr": true 00:35:38.556 } 00:35:38.556 } 00:35:38.556 } 00:35:38.556 00:35:38.556 INFO: response: 00:35:38.556 { 00:35:38.556 "jsonrpc": "2.0", 00:35:38.556 "id": 1, 00:35:38.556 "result": true 00:35:38.556 } 00:35:38.556 00:35:38.556 12:12:41 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.556 12:12:41 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:35:38.556 12:12:41 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.556 12:12:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:38.556 INFO: Setting log level to 20 00:35:38.556 INFO: Setting log level to 20 00:35:38.556 INFO: Log level set to 20 00:35:38.556 INFO: Log level set to 20 00:35:38.556 INFO: Requests: 00:35:38.556 { 00:35:38.556 "jsonrpc": "2.0", 00:35:38.556 "method": "framework_start_init", 00:35:38.556 "id": 1 00:35:38.556 } 00:35:38.556 00:35:38.556 INFO: Requests: 00:35:38.556 { 00:35:38.556 "jsonrpc": "2.0", 00:35:38.556 "method": "framework_start_init", 00:35:38.556 "id": 1 00:35:38.556 } 00:35:38.556 00:35:38.556 [2024-10-11 12:12:41.225473] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:35:38.556 INFO: response: 00:35:38.556 { 00:35:38.556 "jsonrpc": "2.0", 00:35:38.556 "id": 1, 00:35:38.556 "result": true 00:35:38.556 } 00:35:38.556 00:35:38.556 INFO: response: 00:35:38.556 { 00:35:38.556 "jsonrpc": "2.0", 00:35:38.556 "id": 1, 00:35:38.556 "result": true 00:35:38.556 } 00:35:38.556 00:35:38.556 12:12:41 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.556 12:12:41 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:38.556 12:12:41 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.556 12:12:41 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:35:38.556 INFO: Setting log level to 40 00:35:38.556 INFO: Setting log level to 40 00:35:38.556 INFO: Setting log level to 40 00:35:38.556 [2024-10-11 12:12:41.239037] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:38.556 12:12:41 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.556 12:12:41 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:35:38.556 12:12:41 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:38.556 12:12:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:38.817 12:12:41 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:35:38.817 12:12:41 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.817 12:12:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:39.078 Nvme0n1 00:35:39.078 12:12:41 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.078 12:12:41 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:35:39.078 12:12:41 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.078 12:12:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:39.078 12:12:41 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.078 12:12:41 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:35:39.078 12:12:41 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.078 12:12:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:39.078 12:12:41 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.078 12:12:41 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:39.078 12:12:41 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.078 12:12:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:39.078 [2024-10-11 12:12:41.630181] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:39.078 12:12:41 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.078 12:12:41 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:35:39.078 12:12:41 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.078 12:12:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:39.078 [ 00:35:39.078 { 00:35:39.078 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:35:39.078 "subtype": "Discovery", 00:35:39.078 "listen_addresses": [], 00:35:39.078 "allow_any_host": true, 00:35:39.078 "hosts": [] 00:35:39.078 }, 00:35:39.078 { 00:35:39.078 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:39.078 "subtype": "NVMe", 00:35:39.078 "listen_addresses": [ 00:35:39.078 { 00:35:39.078 "trtype": "TCP", 00:35:39.078 "adrfam": "IPv4", 00:35:39.078 "traddr": "10.0.0.2", 00:35:39.078 "trsvcid": "4420" 00:35:39.078 } 00:35:39.078 ], 00:35:39.078 "allow_any_host": true, 00:35:39.078 "hosts": [], 00:35:39.078 "serial_number": 
"SPDK00000000000001", 00:35:39.078 "model_number": "SPDK bdev Controller", 00:35:39.078 "max_namespaces": 1, 00:35:39.078 "min_cntlid": 1, 00:35:39.078 "max_cntlid": 65519, 00:35:39.078 "namespaces": [ 00:35:39.078 { 00:35:39.078 "nsid": 1, 00:35:39.078 "bdev_name": "Nvme0n1", 00:35:39.078 "name": "Nvme0n1", 00:35:39.078 "nguid": "3634473052605494002538450000002B", 00:35:39.078 "uuid": "36344730-5260-5494-0025-38450000002b" 00:35:39.078 } 00:35:39.078 ] 00:35:39.078 } 00:35:39.078 ] 00:35:39.078 12:12:41 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.078 12:12:41 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:39.078 12:12:41 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:35:39.078 12:12:41 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:35:39.339 12:12:41 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605494 00:35:39.339 12:12:41 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:39.339 12:12:41 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:35:39.339 12:12:41 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:35:39.599 12:12:42 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:35:39.599 12:12:42 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605494 '!=' S64GNE0R605494 ']' 00:35:39.599 12:12:42 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:35:39.599 12:12:42 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:39.599 12:12:42 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.600 12:12:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:39.600 12:12:42 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.600 12:12:42 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:35:39.600 12:12:42 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:35:39.600 12:12:42 nvmf_identify_passthru -- nvmf/common.sh@514 -- # nvmfcleanup 00:35:39.600 12:12:42 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:35:39.600 12:12:42 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:39.600 12:12:42 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:35:39.600 12:12:42 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:39.600 12:12:42 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:39.600 rmmod nvme_tcp 00:35:39.600 rmmod nvme_fabrics 00:35:39.600 rmmod nvme_keyring 00:35:39.600 12:12:42 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:39.600 12:12:42 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:35:39.600 12:12:42 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:35:39.600 12:12:42 nvmf_identify_passthru -- nvmf/common.sh@515 -- # '[' -n 
2224911 ']' 00:35:39.600 12:12:42 nvmf_identify_passthru -- nvmf/common.sh@516 -- # killprocess 2224911 00:35:39.600 12:12:42 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 2224911 ']' 00:35:39.600 12:12:42 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 2224911 00:35:39.600 12:12:42 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:35:39.600 12:12:42 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:39.600 12:12:42 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2224911 00:35:39.600 12:12:42 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:39.600 12:12:42 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:39.600 12:12:42 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2224911' 00:35:39.600 killing process with pid 2224911 00:35:39.600 12:12:42 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 2224911 00:35:39.600 12:12:42 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 2224911 00:35:39.860 12:12:42 nvmf_identify_passthru -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:35:39.860 12:12:42 nvmf_identify_passthru -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:35:39.860 12:12:42 nvmf_identify_passthru -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:35:39.860 12:12:42 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:35:39.860 12:12:42 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-save 00:35:39.860 12:12:42 nvmf_identify_passthru -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:35:39.860 12:12:42 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-restore 00:35:39.860 12:12:42 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:39.860 12:12:42 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:39.860 12:12:42 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:39.860 12:12:42 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:39.860 12:12:42 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:42.405 12:12:44 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:42.405 00:35:42.405 real 0m13.384s 00:35:42.405 user 0m10.671s 00:35:42.405 sys 0m6.644s 00:35:42.405 12:12:44 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:42.405 12:12:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:42.405 ************************************ 00:35:42.405 END TEST nvmf_identify_passthru 00:35:42.405 ************************************ 00:35:42.405 12:12:44 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:42.405 12:12:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:42.405 12:12:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:42.405 12:12:44 -- common/autotest_common.sh@10 -- # set +x 00:35:42.405 ************************************ 00:35:42.405 START TEST nvmf_dif 00:35:42.405 ************************************ 00:35:42.405 12:12:44 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:42.405 * Looking for test storage... 
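Stripped of the harness plumbing, the identify-passthru test that just finished is a short RPC sequence: enable the passthru identify handler before framework init, create the TCP transport, attach the local NVMe controller, expose it through a subsystem with a TCP listener, then run spdk_nvme_identify against both the PCIe device and the fabric path and compare serial and model numbers. A hedged sketch of that sequence with scripts/rpc.py standing in for the test's rpc_cmd wrapper (method names, flags and addresses are as they appear in the trace; driving rpc.py directly over its default /var/tmp/spdk.sock socket is this sketch's assumption):

  RPC=./scripts/rpc.py
  # nvmf_tgt was started with --wait-for-rpc, so the first two calls land before framework init
  $RPC nvmf_set_config --passthru-identify-ctrlr
  $RPC framework_start_init
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Both identifies should report the same serial (S64GNE0R605494 in this run) and model
  ./build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 | grep 'Serial Number:'
  ./build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' | grep 'Serial Number:'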
00:35:42.405 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:42.405 12:12:44 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:42.405 12:12:44 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:35:42.405 12:12:44 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:42.405 12:12:44 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:42.405 12:12:44 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:42.405 12:12:44 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:42.405 12:12:44 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:42.405 12:12:44 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:35:42.405 12:12:44 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:35:42.405 12:12:44 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:35:42.405 12:12:44 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:35:42.405 12:12:44 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:35:42.405 12:12:44 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:35:42.405 12:12:44 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:35:42.405 12:12:44 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:42.405 12:12:44 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:35:42.405 12:12:44 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:35:42.405 12:12:44 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:42.405 12:12:44 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:42.405 12:12:44 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:35:42.405 12:12:44 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:35:42.405 12:12:44 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:42.405 12:12:44 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:35:42.405 12:12:44 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:35:42.405 12:12:44 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:35:42.406 12:12:44 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:35:42.406 12:12:44 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:42.406 12:12:44 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:35:42.406 12:12:44 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:35:42.406 12:12:44 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:42.406 12:12:44 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:42.406 12:12:44 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:35:42.406 12:12:44 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:42.406 12:12:44 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:42.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:42.406 --rc genhtml_branch_coverage=1 00:35:42.406 --rc genhtml_function_coverage=1 00:35:42.406 --rc genhtml_legend=1 00:35:42.406 --rc geninfo_all_blocks=1 00:35:42.406 --rc geninfo_unexecuted_blocks=1 00:35:42.406 00:35:42.406 ' 00:35:42.406 12:12:44 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:42.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:42.406 --rc genhtml_branch_coverage=1 00:35:42.406 --rc genhtml_function_coverage=1 00:35:42.406 --rc genhtml_legend=1 00:35:42.406 --rc geninfo_all_blocks=1 00:35:42.406 --rc geninfo_unexecuted_blocks=1 00:35:42.406 00:35:42.406 ' 00:35:42.406 12:12:44 nvmf_dif -- common/autotest_common.sh@1705 -- # 
export 'LCOV=lcov 00:35:42.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:42.406 --rc genhtml_branch_coverage=1 00:35:42.406 --rc genhtml_function_coverage=1 00:35:42.406 --rc genhtml_legend=1 00:35:42.406 --rc geninfo_all_blocks=1 00:35:42.406 --rc geninfo_unexecuted_blocks=1 00:35:42.406 00:35:42.406 ' 00:35:42.406 12:12:44 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:42.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:42.406 --rc genhtml_branch_coverage=1 00:35:42.406 --rc genhtml_function_coverage=1 00:35:42.406 --rc genhtml_legend=1 00:35:42.406 --rc geninfo_all_blocks=1 00:35:42.406 --rc geninfo_unexecuted_blocks=1 00:35:42.406 00:35:42.406 ' 00:35:42.406 12:12:44 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:42.406 12:12:44 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:35:42.406 12:12:44 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:42.406 12:12:44 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:42.406 12:12:44 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:42.406 12:12:44 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:42.406 12:12:44 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:42.406 12:12:44 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:42.406 12:12:44 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:42.406 12:12:44 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:42.406 12:12:44 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:42.406 12:12:44 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:42.406 12:12:44 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:42.406 12:12:44 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:42.406 12:12:44 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:42.406 12:12:44 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:42.406 12:12:44 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:42.406 12:12:44 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:42.406 12:12:44 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:42.406 12:12:44 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:35:42.406 12:12:44 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:42.406 12:12:44 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:42.406 12:12:44 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:42.406 12:12:44 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:42.406 12:12:44 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:42.406 12:12:44 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:42.406 12:12:44 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:35:42.406 12:12:44 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:42.406 12:12:44 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:35:42.406 12:12:44 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:42.406 12:12:44 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:42.406 12:12:44 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:42.406 12:12:44 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:42.406 12:12:44 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:42.406 12:12:44 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:42.406 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:42.406 12:12:44 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:42.406 12:12:44 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:42.406 12:12:44 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:42.406 12:12:44 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:35:42.406 12:12:44 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:35:42.406 12:12:44 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:35:42.406 12:12:44 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:35:42.406 12:12:44 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:35:42.406 12:12:44 nvmf_dif -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:35:42.406 12:12:44 nvmf_dif -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:42.406 12:12:44 nvmf_dif -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:42.406 12:12:44 nvmf_dif -- nvmf/common.sh@436 -- # local -g is_hw=no 00:35:42.406 12:12:44 nvmf_dif -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:42.406 12:12:44 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:42.406 12:12:44 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:42.406 12:12:44 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:42.406 12:12:44 nvmf_dif -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:42.406 12:12:44 nvmf_dif -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:35:42.406 12:12:44 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:35:42.406 12:12:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:50.546 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:50.546 
12:12:51 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:50.546 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:50.546 Found net devices under 0000:31:00.0: cvl_0_0 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:50.546 Found net devices under 0000:31:00.1: cvl_0_1 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@440 -- # is_hw=yes 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:50.546 12:12:51 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:50.547 12:12:52 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:50.547 12:12:52 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:50.547 12:12:52 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:50.547 12:12:52 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:50.547 12:12:52 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:50.547 12:12:52 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:50.547 12:12:52 nvmf_dif -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:50.547 12:12:52 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:50.547 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:50.547 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.614 ms 00:35:50.547 00:35:50.547 --- 10.0.0.2 ping statistics --- 00:35:50.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:50.547 rtt min/avg/max/mdev = 0.614/0.614/0.614/0.000 ms 00:35:50.547 12:12:52 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:50.547 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:50.547 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:35:50.547 00:35:50.547 --- 10.0.0.1 ping statistics --- 00:35:50.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:50.547 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:35:50.547 12:12:52 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:50.547 12:12:52 nvmf_dif -- nvmf/common.sh@448 -- # return 0 00:35:50.547 12:12:52 nvmf_dif -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:35:50.547 12:12:52 nvmf_dif -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:53.092 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:35:53.092 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:35:53.092 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:35:53.092 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:35:53.092 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:35:53.092 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:35:53.092 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:35:53.092 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:35:53.092 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:35:53.092 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:35:53.092 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:35:53.092 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:35:53.092 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:35:53.092 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:35:53.092 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:35:53.092 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:35:53.092 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:35:53.354 12:12:56 nvmf_dif -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:53.354 12:12:56 nvmf_dif -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:53.354 12:12:56 nvmf_dif -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:53.354 12:12:56 nvmf_dif -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:53.354 12:12:56 nvmf_dif -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:53.354 12:12:56 nvmf_dif -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:53.616 12:12:56 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:35:53.616 12:12:56 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:35:53.616 12:12:56 nvmf_dif -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:35:53.616 12:12:56 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:53.616 12:12:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:53.616 12:12:56 nvmf_dif -- nvmf/common.sh@507 -- # nvmfpid=2231087 00:35:53.616 12:12:56 nvmf_dif -- nvmf/common.sh@508 -- # waitforlisten 2231087 00:35:53.616 12:12:56 nvmf_dif -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:53.616 12:12:56 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 2231087 ']' 00:35:53.616 12:12:56 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:53.616 12:12:56 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:53.616 12:12:56 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:35:53.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:53.616 12:12:56 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:53.616 12:12:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:53.616 [2024-10-11 12:12:56.151250] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:35:53.616 [2024-10-11 12:12:56.151300] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:53.616 [2024-10-11 12:12:56.235233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:53.616 [2024-10-11 12:12:56.271384] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:53.616 [2024-10-11 12:12:56.271414] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:53.616 [2024-10-11 12:12:56.271422] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:53.616 [2024-10-11 12:12:56.271428] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:53.616 [2024-10-11 12:12:56.271434] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:53.616 [2024-10-11 12:12:56.272044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:54.557 12:12:56 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:54.557 12:12:56 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:35:54.557 12:12:56 nvmf_dif -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:35:54.557 12:12:56 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:54.557 12:12:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:54.557 12:12:56 nvmf_dif -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:54.557 12:12:56 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:35:54.557 12:12:56 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:35:54.557 12:12:56 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.557 12:12:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:54.557 [2024-10-11 12:12:56.980485] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:54.557 12:12:56 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.557 12:12:56 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:35:54.557 12:12:56 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:54.557 12:12:56 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:54.557 12:12:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:54.557 ************************************ 00:35:54.557 START TEST fio_dif_1_default 00:35:54.557 ************************************ 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:54.557 bdev_null0 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:54.557 [2024-10-11 12:12:57.068924] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # config=() 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # local subsystem config 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:54.557 { 00:35:54.557 "params": { 00:35:54.557 "name": "Nvme$subsystem", 00:35:54.557 "trtype": "$TEST_TRANSPORT", 00:35:54.557 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:54.557 "adrfam": "ipv4", 00:35:54.557 "trsvcid": "$NVMF_PORT", 00:35:54.557 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:54.557 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:54.557 "hdgst": ${hdgst:-false}, 00:35:54.557 
"ddgst": ${ddgst:-false} 00:35:54.557 }, 00:35:54.557 "method": "bdev_nvme_attach_controller" 00:35:54.557 } 00:35:54.557 EOF 00:35:54.557 )") 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # cat 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # jq . 
00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@583 -- # IFS=, 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:35:54.557 "params": { 00:35:54.557 "name": "Nvme0", 00:35:54.557 "trtype": "tcp", 00:35:54.557 "traddr": "10.0.0.2", 00:35:54.557 "adrfam": "ipv4", 00:35:54.557 "trsvcid": "4420", 00:35:54.557 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:54.557 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:54.557 "hdgst": false, 00:35:54.557 "ddgst": false 00:35:54.557 }, 00:35:54.557 "method": "bdev_nvme_attach_controller" 00:35:54.557 }' 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:54.557 12:12:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:54.818 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:54.818 fio-3.35 00:35:54.818 Starting 1 thread 00:36:07.048 00:36:07.048 filename0: (groupid=0, jobs=1): err= 0: pid=2231665: Fri Oct 11 12:13:08 2024 00:36:07.048 read: IOPS=98, BW=393KiB/s (402kB/s)(3936KiB/10020msec) 00:36:07.048 slat (nsec): min=5648, max=75209, avg=6548.62, stdev=2758.04 00:36:07.048 clat (usec): min=797, max=44879, avg=40713.51, stdev=3627.65 00:36:07.048 lat (usec): min=802, max=44918, avg=40720.06, stdev=3627.81 00:36:07.048 clat percentiles (usec): 00:36:07.048 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:36:07.048 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:07.048 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:07.048 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:36:07.048 | 99.99th=[44827] 00:36:07.048 bw ( KiB/s): min= 384, max= 416, per=99.79%, avg=392.00, stdev=14.22, samples=20 00:36:07.048 iops : min= 96, max= 104, avg=98.00, stdev= 3.55, samples=20 00:36:07.048 lat (usec) : 1000=0.81% 00:36:07.048 lat (msec) : 50=99.19% 00:36:07.048 cpu : usr=93.93%, sys=5.83%, ctx=14, majf=0, minf=295 00:36:07.048 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:07.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.048 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.048 issued rwts: total=984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.048 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:07.048 00:36:07.048 Run 
status group 0 (all jobs): 00:36:07.048 READ: bw=393KiB/s (402kB/s), 393KiB/s-393KiB/s (402kB/s-402kB/s), io=3936KiB (4030kB), run=10020-10020msec 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.048 00:36:07.048 real 0m11.290s 00:36:07.048 user 0m18.418s 00:36:07.048 sys 0m1.033s 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:07.048 ************************************ 00:36:07.048 END TEST fio_dif_1_default 00:36:07.048 ************************************ 00:36:07.048 12:13:08 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:36:07.048 12:13:08 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:07.048 12:13:08 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:07.048 12:13:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:07.048 ************************************ 00:36:07.048 START TEST fio_dif_1_multi_subsystems 00:36:07.048 ************************************ 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:07.048 bdev_null0 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:07.048 [2024-10-11 12:13:08.443287] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:07.048 bdev_null1 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # config=() 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # local subsystem config 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:07.048 { 00:36:07.048 "params": { 00:36:07.048 "name": "Nvme$subsystem", 00:36:07.048 "trtype": "$TEST_TRANSPORT", 00:36:07.048 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:07.048 "adrfam": "ipv4", 00:36:07.048 "trsvcid": "$NVMF_PORT", 00:36:07.048 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:07.048 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:07.048 "hdgst": ${hdgst:-false}, 00:36:07.048 "ddgst": ${ddgst:-false} 00:36:07.048 }, 00:36:07.048 "method": "bdev_nvme_attach_controller" 00:36:07.048 } 00:36:07.048 EOF 00:36:07.048 )") 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:36:07.048 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:07.049 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:36:07.049 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:07.049 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:07.049 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:36:07.049 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:07.049 12:13:08 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:36:07.049 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:36:07.049 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:07.049 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:07.049 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:36:07.049 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:07.049 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:07.049 { 00:36:07.049 "params": { 00:36:07.049 "name": "Nvme$subsystem", 00:36:07.049 "trtype": "$TEST_TRANSPORT", 00:36:07.049 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:07.049 "adrfam": "ipv4", 00:36:07.049 "trsvcid": "$NVMF_PORT", 00:36:07.049 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:07.049 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:07.049 "hdgst": ${hdgst:-false}, 00:36:07.049 "ddgst": ${ddgst:-false} 00:36:07.049 }, 00:36:07.049 "method": "bdev_nvme_attach_controller" 00:36:07.049 } 00:36:07.049 EOF 00:36:07.049 )") 00:36:07.049 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:36:07.049 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:07.049 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:36:07.049 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # jq . 00:36:07.049 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@583 -- # IFS=, 00:36:07.049 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:36:07.049 "params": { 00:36:07.049 "name": "Nvme0", 00:36:07.049 "trtype": "tcp", 00:36:07.049 "traddr": "10.0.0.2", 00:36:07.049 "adrfam": "ipv4", 00:36:07.049 "trsvcid": "4420", 00:36:07.049 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:07.049 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:07.049 "hdgst": false, 00:36:07.049 "ddgst": false 00:36:07.049 }, 00:36:07.049 "method": "bdev_nvme_attach_controller" 00:36:07.049 },{ 00:36:07.049 "params": { 00:36:07.049 "name": "Nvme1", 00:36:07.049 "trtype": "tcp", 00:36:07.049 "traddr": "10.0.0.2", 00:36:07.049 "adrfam": "ipv4", 00:36:07.049 "trsvcid": "4420", 00:36:07.049 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:07.049 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:07.049 "hdgst": false, 00:36:07.049 "ddgst": false 00:36:07.049 }, 00:36:07.049 "method": "bdev_nvme_attach_controller" 00:36:07.049 }' 00:36:07.049 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:07.049 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:07.049 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:07.049 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:07.049 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:07.049 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:07.049 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # 
asan_lib= 00:36:07.049 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:07.049 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:07.049 12:13:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:07.049 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:07.049 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:07.049 fio-3.35 00:36:07.049 Starting 2 threads 00:36:17.118 00:36:17.118 filename0: (groupid=0, jobs=1): err= 0: pid=2233916: Fri Oct 11 12:13:19 2024 00:36:17.118 read: IOPS=190, BW=763KiB/s (781kB/s)(7632KiB/10001msec) 00:36:17.119 slat (nsec): min=5713, max=29358, avg=6654.50, stdev=1682.15 00:36:17.119 clat (usec): min=554, max=42604, avg=20947.80, stdev=20249.06 00:36:17.119 lat (usec): min=560, max=42627, avg=20954.46, stdev=20249.04 00:36:17.119 clat percentiles (usec): 00:36:17.119 | 1.00th=[ 594], 5.00th=[ 668], 10.00th=[ 693], 20.00th=[ 717], 00:36:17.119 | 30.00th=[ 725], 40.00th=[ 824], 50.00th=[ 1958], 60.00th=[41157], 00:36:17.119 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:17.119 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42730], 99.95th=[42730], 00:36:17.119 | 99.99th=[42730] 00:36:17.119 bw ( KiB/s): min= 736, max= 768, per=50.40%, avg=764.63, stdev=10.09, samples=19 00:36:17.119 iops : min= 184, max= 192, avg=191.16, stdev= 2.52, samples=19 00:36:17.119 lat (usec) : 750=36.01%, 1000=13.89% 00:36:17.119 lat (msec) : 2=0.21%, 50=49.90% 00:36:17.119 cpu : usr=95.75%, sys=4.05%, ctx=14, majf=0, minf=104 00:36:17.119 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:17.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.119 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.119 issued rwts: total=1908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:17.119 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:17.119 filename1: (groupid=0, jobs=1): err= 0: pid=2233917: Fri Oct 11 12:13:19 2024 00:36:17.119 read: IOPS=188, BW=756KiB/s (774kB/s)(7584KiB/10037msec) 00:36:17.119 slat (nsec): min=5691, max=28749, avg=6488.93, stdev=1365.58 00:36:17.119 clat (usec): min=523, max=42723, avg=21156.85, stdev=20161.71 00:36:17.119 lat (usec): min=531, max=42751, avg=21163.34, stdev=20161.68 00:36:17.119 clat percentiles (usec): 00:36:17.119 | 1.00th=[ 594], 5.00th=[ 791], 10.00th=[ 816], 20.00th=[ 840], 00:36:17.119 | 30.00th=[ 848], 40.00th=[ 865], 50.00th=[41157], 60.00th=[41157], 00:36:17.119 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:17.119 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42730], 99.95th=[42730], 00:36:17.119 | 99.99th=[42730] 00:36:17.119 bw ( KiB/s): min= 672, max= 768, per=49.87%, avg=756.80, stdev=28.00, samples=20 00:36:17.119 iops : min= 168, max= 192, avg=189.20, stdev= 7.00, samples=20 00:36:17.119 lat (usec) : 750=2.80%, 1000=46.78% 00:36:17.119 lat (msec) : 50=50.42% 00:36:17.119 cpu : usr=95.90%, sys=3.90%, ctx=14, majf=0, minf=163 00:36:17.119 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:17.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.119 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.119 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:17.119 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:17.119 00:36:17.119 Run status group 0 (all jobs): 00:36:17.119 READ: bw=1516KiB/s (1552kB/s), 756KiB/s-763KiB/s (774kB/s-781kB/s), io=14.9MiB (15.6MB), run=10001-10037msec 00:36:17.119 12:13:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:36:17.119 12:13:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:36:17.119 12:13:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:36:17.119 12:13:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:17.119 12:13:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:36:17.119 12:13:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:17.119 12:13:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.119 12:13:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:17.119 12:13:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.119 12:13:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:17.119 12:13:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.119 12:13:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:17.119 12:13:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.119 12:13:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:36:17.119 12:13:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:17.119 12:13:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:36:17.119 12:13:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:17.119 12:13:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.119 12:13:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:17.119 12:13:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.119 12:13:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:17.119 12:13:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.119 12:13:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:17.119 12:13:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.119 00:36:17.119 real 0m11.305s 00:36:17.119 user 0m37.726s 00:36:17.119 sys 0m1.151s 00:36:17.119 12:13:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:17.119 12:13:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:17.119 ************************************ 00:36:17.119 END TEST fio_dif_1_multi_subsystems 00:36:17.119 ************************************ 00:36:17.119 12:13:19 nvmf_dif -- target/dif.sh@143 
-- # run_test fio_dif_rand_params fio_dif_rand_params 00:36:17.119 12:13:19 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:17.119 12:13:19 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:17.119 12:13:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:17.119 ************************************ 00:36:17.119 START TEST fio_dif_rand_params 00:36:17.119 ************************************ 00:36:17.119 12:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:36:17.119 12:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:36:17.119 12:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:36:17.119 12:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:36:17.119 12:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:36:17.119 12:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:36:17.119 12:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:36:17.119 12:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:36:17.119 12:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:36:17.119 12:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:17.119 12:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:17.119 12:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:17.119 12:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:17.119 12:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:17.119 12:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.119 12:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:17.119 bdev_null0 00:36:17.119 12:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.119 12:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:17.119 12:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.119 12:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:17.119 12:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.119 12:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:17.119 12:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.119 12:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:17.119 12:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.119 12:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:17.119 12:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.119 12:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:17.381 [2024-10-11 12:13:19.827808] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:36:17.381 12:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.381 12:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:36:17.381 12:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:36:17.381 12:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:17.381 12:13:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:36:17.381 12:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:17.381 12:13:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:36:17.381 12:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:17.381 12:13:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:17.381 12:13:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:17.381 { 00:36:17.381 "params": { 00:36:17.381 "name": "Nvme$subsystem", 00:36:17.381 "trtype": "$TEST_TRANSPORT", 00:36:17.381 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:17.381 "adrfam": "ipv4", 00:36:17.381 "trsvcid": "$NVMF_PORT", 00:36:17.381 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:17.381 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:17.381 "hdgst": ${hdgst:-false}, 00:36:17.381 "ddgst": ${ddgst:-false} 00:36:17.381 }, 00:36:17.381 "method": "bdev_nvme_attach_controller" 00:36:17.381 } 00:36:17.381 EOF 00:36:17.381 )") 00:36:17.381 12:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:17.381 12:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:17.381 12:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:17.381 12:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:17.381 12:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:17.381 12:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:17.381 12:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:17.381 12:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:36:17.381 12:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:17.381 12:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:17.381 12:13:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:36:17.381 12:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:17.381 12:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:17.381 12:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:36:17.381 12:13:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:17.381 12:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:17.381 12:13:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # 
jq . 00:36:17.381 12:13:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:36:17.381 12:13:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:36:17.381 "params": { 00:36:17.381 "name": "Nvme0", 00:36:17.381 "trtype": "tcp", 00:36:17.381 "traddr": "10.0.0.2", 00:36:17.381 "adrfam": "ipv4", 00:36:17.381 "trsvcid": "4420", 00:36:17.381 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:17.381 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:17.381 "hdgst": false, 00:36:17.381 "ddgst": false 00:36:17.381 }, 00:36:17.381 "method": "bdev_nvme_attach_controller" 00:36:17.381 }' 00:36:17.381 12:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:17.381 12:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:17.381 12:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:17.381 12:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:17.381 12:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:17.381 12:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:17.381 12:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:17.381 12:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:17.381 12:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:17.381 12:13:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:17.642 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:17.642 ... 
00:36:17.642 fio-3.35 00:36:17.642 Starting 3 threads 00:36:24.226 00:36:24.226 filename0: (groupid=0, jobs=1): err= 0: pid=2236142: Fri Oct 11 12:13:25 2024 00:36:24.226 read: IOPS=366, BW=45.8MiB/s (48.0MB/s)(231MiB/5045msec) 00:36:24.226 slat (nsec): min=5859, max=56402, avg=8479.47, stdev=2069.10 00:36:24.226 clat (usec): min=4068, max=48818, avg=8152.29, stdev=3701.14 00:36:24.226 lat (usec): min=4076, max=48824, avg=8160.77, stdev=3701.14 00:36:24.226 clat percentiles (usec): 00:36:24.226 | 1.00th=[ 4883], 5.00th=[ 5538], 10.00th=[ 5932], 20.00th=[ 6652], 00:36:24.226 | 30.00th=[ 7111], 40.00th=[ 7373], 50.00th=[ 7701], 60.00th=[ 8094], 00:36:24.226 | 70.00th=[ 8717], 80.00th=[ 9372], 90.00th=[ 9896], 95.00th=[10290], 00:36:24.226 | 99.00th=[11469], 99.50th=[46400], 99.90th=[48497], 99.95th=[49021], 00:36:24.226 | 99.99th=[49021] 00:36:24.226 bw ( KiB/s): min=42496, max=53248, per=41.60%, avg=47257.60, stdev=3479.26, samples=10 00:36:24.226 iops : min= 332, max= 416, avg=369.20, stdev=27.18, samples=10 00:36:24.226 lat (msec) : 10=90.91%, 20=8.33%, 50=0.76% 00:36:24.226 cpu : usr=94.33%, sys=5.45%, ctx=7, majf=0, minf=128 00:36:24.226 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:24.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:24.226 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:24.226 issued rwts: total=1849,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:24.226 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:24.226 filename0: (groupid=0, jobs=1): err= 0: pid=2236143: Fri Oct 11 12:13:25 2024 00:36:24.226 read: IOPS=332, BW=41.6MiB/s (43.6MB/s)(208MiB/5014msec) 00:36:24.226 slat (nsec): min=8255, max=48818, avg=9293.29, stdev=2084.39 00:36:24.226 clat (usec): min=4686, max=88324, avg=9010.52, stdev=6212.59 00:36:24.226 lat (usec): min=4695, max=88334, avg=9019.81, stdev=6212.74 00:36:24.226 clat percentiles (usec): 00:36:24.226 | 1.00th=[ 5538], 5.00th=[ 5997], 10.00th=[ 6325], 20.00th=[ 7046], 00:36:24.226 | 30.00th=[ 7373], 40.00th=[ 7635], 50.00th=[ 7963], 60.00th=[ 8455], 00:36:24.226 | 70.00th=[ 8979], 80.00th=[ 9634], 90.00th=[10290], 95.00th=[10814], 00:36:24.226 | 99.00th=[47449], 99.50th=[48497], 99.90th=[88605], 99.95th=[88605], 00:36:24.226 | 99.99th=[88605] 00:36:24.226 bw ( KiB/s): min=34304, max=48384, per=37.50%, avg=42598.40, stdev=4497.97, samples=10 00:36:24.226 iops : min= 268, max= 378, avg=332.80, stdev=35.14, samples=10 00:36:24.226 lat (msec) : 10=85.12%, 20=12.84%, 50=1.92%, 100=0.12% 00:36:24.226 cpu : usr=91.96%, sys=6.80%, ctx=267, majf=0, minf=78 00:36:24.226 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:24.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:24.226 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:24.226 issued rwts: total=1667,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:24.226 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:24.226 filename0: (groupid=0, jobs=1): err= 0: pid=2236144: Fri Oct 11 12:13:25 2024 00:36:24.226 read: IOPS=190, BW=23.8MiB/s (25.0MB/s)(120MiB/5046msec) 00:36:24.226 slat (nsec): min=5746, max=35655, avg=8421.89, stdev=1879.73 00:36:24.226 clat (usec): min=5063, max=91285, avg=15633.12, stdev=16987.96 00:36:24.226 lat (usec): min=5069, max=91294, avg=15641.54, stdev=16987.94 00:36:24.226 clat percentiles (usec): 00:36:24.226 | 1.00th=[ 5669], 5.00th=[ 6390], 10.00th=[ 6849], 20.00th=[ 7242], 
00:36:24.226 | 30.00th=[ 7570], 40.00th=[ 7898], 50.00th=[ 8160], 60.00th=[ 8586], 00:36:24.226 | 70.00th=[ 9110], 80.00th=[10290], 90.00th=[48497], 95.00th=[49021], 00:36:24.226 | 99.00th=[87557], 99.50th=[89654], 99.90th=[91751], 99.95th=[91751], 00:36:24.226 | 99.99th=[91751] 00:36:24.226 bw ( KiB/s): min=20480, max=30208, per=21.66%, avg=24601.60, stdev=3845.59, samples=10 00:36:24.226 iops : min= 160, max= 236, avg=192.20, stdev=30.04, samples=10 00:36:24.226 lat (msec) : 10=78.90%, 20=3.33%, 50=15.80%, 100=1.98% 00:36:24.226 cpu : usr=95.52%, sys=4.24%, ctx=9, majf=0, minf=78 00:36:24.226 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:24.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:24.226 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:24.226 issued rwts: total=962,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:24.226 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:24.226 00:36:24.226 Run status group 0 (all jobs): 00:36:24.226 READ: bw=111MiB/s (116MB/s), 23.8MiB/s-45.8MiB/s (25.0MB/s-48.0MB/s), io=560MiB (587MB), run=5014-5046msec 00:36:24.226 12:13:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:36:24.226 12:13:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:24.226 12:13:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:24.226 12:13:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:24.226 12:13:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:24.226 12:13:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:24.226 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.226 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:24.226 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.226 12:13:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:24.226 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.226 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:24.226 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.226 12:13:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:36:24.226 12:13:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:36:24.226 12:13:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:36:24.226 12:13:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:36:24.226 12:13:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:36:24.226 12:13:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:36:24.226 12:13:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:36:24.226 12:13:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:24.226 12:13:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:24.226 12:13:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:24.226 12:13:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:24.226 12:13:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:36:24.226 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.226 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:24.226 bdev_null0 00:36:24.226 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.226 12:13:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:24.226 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.226 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:24.226 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.226 12:13:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:24.227 [2024-10-11 12:13:26.116312] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:24.227 bdev_null1 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:24.227 bdev_null2 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:24.227 { 00:36:24.227 "params": { 00:36:24.227 "name": "Nvme$subsystem", 00:36:24.227 "trtype": "$TEST_TRANSPORT", 00:36:24.227 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:24.227 "adrfam": "ipv4", 00:36:24.227 "trsvcid": "$NVMF_PORT", 00:36:24.227 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:24.227 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:24.227 "hdgst": ${hdgst:-false}, 00:36:24.227 "ddgst": ${ddgst:-false} 00:36:24.227 }, 00:36:24.227 "method": "bdev_nvme_attach_controller" 00:36:24.227 } 00:36:24.227 EOF 00:36:24.227 )") 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:24.227 { 00:36:24.227 "params": { 00:36:24.227 "name": "Nvme$subsystem", 00:36:24.227 "trtype": "$TEST_TRANSPORT", 00:36:24.227 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:24.227 "adrfam": "ipv4", 00:36:24.227 "trsvcid": "$NVMF_PORT", 00:36:24.227 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:24.227 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:24.227 "hdgst": ${hdgst:-false}, 00:36:24.227 "ddgst": ${ddgst:-false} 00:36:24.227 }, 00:36:24.227 "method": "bdev_nvme_attach_controller" 00:36:24.227 } 00:36:24.227 EOF 00:36:24.227 )") 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # 
cat 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:24.227 { 00:36:24.227 "params": { 00:36:24.227 "name": "Nvme$subsystem", 00:36:24.227 "trtype": "$TEST_TRANSPORT", 00:36:24.227 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:24.227 "adrfam": "ipv4", 00:36:24.227 "trsvcid": "$NVMF_PORT", 00:36:24.227 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:24.227 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:24.227 "hdgst": ${hdgst:-false}, 00:36:24.227 "ddgst": ${ddgst:-false} 00:36:24.227 }, 00:36:24.227 "method": "bdev_nvme_attach_controller" 00:36:24.227 } 00:36:24.227 EOF 00:36:24.227 )") 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:36:24.227 12:13:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:36:24.227 "params": { 00:36:24.227 "name": "Nvme0", 00:36:24.227 "trtype": "tcp", 00:36:24.227 "traddr": "10.0.0.2", 00:36:24.227 "adrfam": "ipv4", 00:36:24.227 "trsvcid": "4420", 00:36:24.227 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:24.227 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:24.227 "hdgst": false, 00:36:24.227 "ddgst": false 00:36:24.227 }, 00:36:24.227 "method": "bdev_nvme_attach_controller" 00:36:24.227 },{ 00:36:24.227 "params": { 00:36:24.227 "name": "Nvme1", 00:36:24.227 "trtype": "tcp", 00:36:24.227 "traddr": "10.0.0.2", 00:36:24.227 "adrfam": "ipv4", 00:36:24.227 "trsvcid": "4420", 00:36:24.227 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:24.227 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:24.227 "hdgst": false, 00:36:24.227 "ddgst": false 00:36:24.227 }, 00:36:24.227 "method": "bdev_nvme_attach_controller" 00:36:24.227 },{ 00:36:24.228 "params": { 00:36:24.228 "name": "Nvme2", 00:36:24.228 "trtype": "tcp", 00:36:24.228 "traddr": "10.0.0.2", 00:36:24.228 "adrfam": "ipv4", 00:36:24.228 "trsvcid": "4420", 00:36:24.228 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:36:24.228 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:36:24.228 "hdgst": false, 00:36:24.228 "ddgst": false 00:36:24.228 }, 00:36:24.228 "method": "bdev_nvme_attach_controller" 00:36:24.228 }' 00:36:24.228 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:24.228 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:24.228 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:24.228 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:24.228 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:24.228 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:24.228 12:13:26 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:36:24.228 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:24.228 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:24.228 12:13:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:24.228 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:24.228 ... 00:36:24.228 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:24.228 ... 00:36:24.228 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:24.228 ... 00:36:24.228 fio-3.35 00:36:24.228 Starting 24 threads 00:36:36.458 00:36:36.458 filename0: (groupid=0, jobs=1): err= 0: pid=2237635: Fri Oct 11 12:13:37 2024 00:36:36.458 read: IOPS=787, BW=3148KiB/s (3224kB/s)(30.8MiB/10032msec) 00:36:36.458 slat (usec): min=5, max=101, avg=10.86, stdev=10.06 00:36:36.458 clat (usec): min=1668, max=44961, avg=20240.46, stdev=5592.24 00:36:36.458 lat (usec): min=1678, max=44968, avg=20251.32, stdev=5593.86 00:36:36.458 clat percentiles (usec): 00:36:36.458 | 1.00th=[ 2024], 5.00th=[13698], 10.00th=[15139], 20.00th=[15926], 00:36:36.458 | 30.00th=[16712], 40.00th=[17957], 50.00th=[19792], 60.00th=[23200], 00:36:36.458 | 70.00th=[23725], 80.00th=[23987], 90.00th=[25297], 95.00th=[28967], 00:36:36.458 | 99.00th=[35390], 99.50th=[36963], 99.90th=[41681], 99.95th=[44827], 00:36:36.458 | 99.99th=[44827] 00:36:36.458 bw ( KiB/s): min= 2794, max= 4576, per=4.86%, avg=3153.80, stdev=387.48, samples=20 00:36:36.458 iops : min= 698, max= 1144, avg=788.40, stdev=96.91, samples=20 00:36:36.458 lat (msec) : 2=0.90%, 4=1.13%, 10=0.68%, 20=47.80%, 50=49.49% 00:36:36.458 cpu : usr=98.94%, sys=0.77%, ctx=14, majf=0, minf=59 00:36:36.458 IO depths : 1=1.0%, 2=2.0%, 4=8.5%, 8=76.2%, 16=12.3%, 32=0.0%, >=64=0.0% 00:36:36.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:36.458 complete : 0=0.0%, 4=89.7%, 8=5.4%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:36.458 issued rwts: total=7896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:36.458 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:36.458 filename0: (groupid=0, jobs=1): err= 0: pid=2237636: Fri Oct 11 12:13:37 2024 00:36:36.458 read: IOPS=665, BW=2662KiB/s (2725kB/s)(26.0MiB/10003msec) 00:36:36.458 slat (nsec): min=5837, max=82492, avg=17552.35, stdev=13306.36 00:36:36.458 clat (usec): min=13230, max=26918, avg=23894.03, stdev=816.40 00:36:36.458 lat (usec): min=13254, max=26967, avg=23911.58, stdev=815.21 00:36:36.458 clat percentiles (usec): 00:36:36.458 | 1.00th=[22414], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:36:36.458 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:36.458 | 70.00th=[24249], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:36.458 | 99.00th=[25297], 99.50th=[25297], 99.90th=[26346], 99.95th=[26346], 00:36:36.458 | 99.99th=[26870] 00:36:36.458 bw ( KiB/s): min= 2554, max= 2693, per=4.10%, avg=2660.68, stdev=54.26, samples=19 00:36:36.458 iops : min= 638, max= 673, avg=665.11, stdev=13.60, samples=19 00:36:36.458 lat (msec) : 20=0.51%, 50=99.49% 00:36:36.458 cpu : usr=98.86%, sys=0.83%, ctx=71, majf=0, 
minf=41 00:36:36.458 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:36.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:36.458 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:36.458 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:36.458 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:36.458 filename0: (groupid=0, jobs=1): err= 0: pid=2237637: Fri Oct 11 12:13:37 2024 00:36:36.458 read: IOPS=665, BW=2661KiB/s (2725kB/s)(26.0MiB/10004msec) 00:36:36.458 slat (nsec): min=5844, max=78032, avg=21846.73, stdev=12702.29 00:36:36.458 clat (usec): min=13287, max=27762, avg=23844.67, stdev=812.15 00:36:36.458 lat (usec): min=13308, max=27773, avg=23866.52, stdev=812.03 00:36:36.458 clat percentiles (usec): 00:36:36.458 | 1.00th=[22414], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:36:36.458 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:36.458 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:36.458 | 99.00th=[25297], 99.50th=[25297], 99.90th=[25822], 99.95th=[26870], 00:36:36.458 | 99.99th=[27657] 00:36:36.458 bw ( KiB/s): min= 2554, max= 2693, per=4.10%, avg=2660.68, stdev=54.26, samples=19 00:36:36.459 iops : min= 638, max= 673, avg=665.11, stdev=13.60, samples=19 00:36:36.459 lat (msec) : 20=0.51%, 50=99.49% 00:36:36.459 cpu : usr=98.99%, sys=0.72%, ctx=22, majf=0, minf=31 00:36:36.459 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:36.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:36.459 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:36.459 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:36.459 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:36.459 filename0: (groupid=0, jobs=1): err= 0: pid=2237638: Fri Oct 11 12:13:37 2024 00:36:36.459 read: IOPS=678, BW=2714KiB/s (2779kB/s)(26.5MiB/10016msec) 00:36:36.459 slat (nsec): min=5830, max=76471, avg=16796.54, stdev=11496.03 00:36:36.459 clat (usec): min=8914, max=41366, avg=23453.02, stdev=4257.29 00:36:36.459 lat (usec): min=8925, max=41373, avg=23469.82, stdev=4259.23 00:36:36.459 clat percentiles (usec): 00:36:36.459 | 1.00th=[13435], 5.00th=[15664], 10.00th=[17695], 20.00th=[20841], 00:36:36.459 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:36.459 | 70.00th=[24249], 80.00th=[24511], 90.00th=[27657], 95.00th=[31327], 00:36:36.459 | 99.00th=[38011], 99.50th=[39584], 99.90th=[40109], 99.95th=[41157], 00:36:36.459 | 99.99th=[41157] 00:36:36.459 bw ( KiB/s): min= 2560, max= 2864, per=4.19%, avg=2715.30, stdev=81.37, samples=20 00:36:36.459 iops : min= 640, max= 716, avg=678.80, stdev=20.37, samples=20 00:36:36.459 lat (msec) : 10=0.06%, 20=17.28%, 50=82.66% 00:36:36.459 cpu : usr=98.59%, sys=1.02%, ctx=103, majf=0, minf=51 00:36:36.459 IO depths : 1=2.7%, 2=5.6%, 4=14.5%, 8=66.8%, 16=10.4%, 32=0.0%, >=64=0.0% 00:36:36.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:36.459 complete : 0=0.0%, 4=91.4%, 8=3.5%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:36.459 issued rwts: total=6795,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:36.459 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:36.459 filename0: (groupid=0, jobs=1): err= 0: pid=2237639: Fri Oct 11 12:13:37 2024 00:36:36.459 read: IOPS=665, BW=2661KiB/s (2725kB/s)(26.0MiB/10006msec) 00:36:36.459 slat 
(nsec): min=5656, max=95850, avg=20221.77, stdev=14027.59 00:36:36.459 clat (usec): min=9098, max=31838, avg=23885.76, stdev=1141.01 00:36:36.459 lat (usec): min=9105, max=31873, avg=23905.98, stdev=1140.33 00:36:36.459 clat percentiles (usec): 00:36:36.459 | 1.00th=[22676], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:36:36.459 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:36.459 | 70.00th=[24249], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:36.459 | 99.00th=[25560], 99.50th=[25822], 99.90th=[31851], 99.95th=[31851], 00:36:36.459 | 99.99th=[31851] 00:36:36.459 bw ( KiB/s): min= 2560, max= 2688, per=4.09%, avg=2653.95, stdev=57.11, samples=19 00:36:36.459 iops : min= 640, max= 672, avg=663.42, stdev=14.27, samples=19 00:36:36.459 lat (msec) : 10=0.09%, 20=0.48%, 50=99.43% 00:36:36.459 cpu : usr=98.45%, sys=1.10%, ctx=145, majf=0, minf=52 00:36:36.459 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:36.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:36.459 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:36.459 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:36.459 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:36.459 filename0: (groupid=0, jobs=1): err= 0: pid=2237640: Fri Oct 11 12:13:37 2024 00:36:36.459 read: IOPS=725, BW=2902KiB/s (2972kB/s)(28.4MiB/10022msec) 00:36:36.459 slat (usec): min=5, max=107, avg=14.96, stdev=13.23 00:36:36.459 clat (usec): min=9126, max=39107, avg=21926.47, stdev=4719.46 00:36:36.459 lat (usec): min=9134, max=39115, avg=21941.43, stdev=4723.02 00:36:36.459 clat percentiles (usec): 00:36:36.459 | 1.00th=[11731], 5.00th=[14746], 10.00th=[15664], 20.00th=[17171], 00:36:36.459 | 30.00th=[19268], 40.00th=[22676], 50.00th=[23462], 60.00th=[23725], 00:36:36.459 | 70.00th=[23987], 80.00th=[24249], 90.00th=[25035], 95.00th=[30278], 00:36:36.459 | 99.00th=[37487], 99.50th=[38536], 99.90th=[39060], 99.95th=[39060], 00:36:36.459 | 99.99th=[39060] 00:36:36.459 bw ( KiB/s): min= 2560, max= 3216, per=4.48%, avg=2904.45, stdev=183.74, samples=20 00:36:36.459 iops : min= 640, max= 804, avg=726.10, stdev=45.93, samples=20 00:36:36.459 lat (msec) : 10=0.37%, 20=32.44%, 50=67.19% 00:36:36.459 cpu : usr=98.89%, sys=0.82%, ctx=16, majf=0, minf=55 00:36:36.459 IO depths : 1=2.2%, 2=4.4%, 4=12.5%, 8=69.8%, 16=11.0%, 32=0.0%, >=64=0.0% 00:36:36.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:36.459 complete : 0=0.0%, 4=90.9%, 8=4.2%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:36.459 issued rwts: total=7272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:36.459 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:36.459 filename0: (groupid=0, jobs=1): err= 0: pid=2237641: Fri Oct 11 12:13:37 2024 00:36:36.459 read: IOPS=666, BW=2667KiB/s (2731kB/s)(26.1MiB/10007msec) 00:36:36.459 slat (usec): min=5, max=105, avg=26.07, stdev=15.88 00:36:36.459 clat (usec): min=6083, max=38618, avg=23750.77, stdev=1591.41 00:36:36.459 lat (usec): min=6115, max=38627, avg=23776.85, stdev=1592.42 00:36:36.459 clat percentiles (usec): 00:36:36.459 | 1.00th=[16319], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:36:36.459 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:36.459 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:36.459 | 99.00th=[29754], 99.50th=[30802], 99.90th=[35914], 99.95th=[37487], 00:36:36.459 | 
99.99th=[38536] 00:36:36.459 bw ( KiB/s): min= 2560, max= 2816, per=4.11%, avg=2667.47, stdev=64.10, samples=19 00:36:36.459 iops : min= 640, max= 704, avg=666.84, stdev=16.02, samples=19 00:36:36.459 lat (msec) : 10=0.03%, 20=2.55%, 50=97.42% 00:36:36.459 cpu : usr=98.68%, sys=1.02%, ctx=19, majf=0, minf=42 00:36:36.459 IO depths : 1=5.5%, 2=11.3%, 4=24.2%, 8=52.0%, 16=7.0%, 32=0.0%, >=64=0.0% 00:36:36.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:36.459 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:36.459 issued rwts: total=6672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:36.459 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:36.459 filename0: (groupid=0, jobs=1): err= 0: pid=2237642: Fri Oct 11 12:13:37 2024 00:36:36.459 read: IOPS=668, BW=2673KiB/s (2737kB/s)(26.1MiB/10002msec) 00:36:36.459 slat (usec): min=5, max=102, avg=19.25, stdev=13.77 00:36:36.459 clat (usec): min=7320, max=45333, avg=23796.80, stdev=2463.58 00:36:36.459 lat (usec): min=7327, max=45350, avg=23816.05, stdev=2464.89 00:36:36.459 clat percentiles (usec): 00:36:36.459 | 1.00th=[14877], 5.00th=[20841], 10.00th=[23200], 20.00th=[23462], 00:36:36.459 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:36.459 | 70.00th=[24249], 80.00th=[24249], 90.00th=[24773], 95.00th=[25035], 00:36:36.459 | 99.00th=[31327], 99.50th=[33162], 99.90th=[45351], 99.95th=[45351], 00:36:36.459 | 99.99th=[45351] 00:36:36.459 bw ( KiB/s): min= 2432, max= 2810, per=4.10%, avg=2659.32, stdev=88.17, samples=19 00:36:36.459 iops : min= 608, max= 702, avg=664.79, stdev=21.99, samples=19 00:36:36.459 lat (msec) : 10=0.48%, 20=4.01%, 50=95.51% 00:36:36.459 cpu : usr=98.89%, sys=0.80%, ctx=16, majf=0, minf=42 00:36:36.459 IO depths : 1=4.8%, 2=10.0%, 4=21.9%, 8=55.3%, 16=8.0%, 32=0.0%, >=64=0.0% 00:36:36.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:36.459 complete : 0=0.0%, 4=93.4%, 8=1.1%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:36.459 issued rwts: total=6684,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:36.459 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:36.459 filename1: (groupid=0, jobs=1): err= 0: pid=2237643: Fri Oct 11 12:13:37 2024 00:36:36.459 read: IOPS=687, BW=2750KiB/s (2815kB/s)(26.9MiB/10012msec) 00:36:36.459 slat (nsec): min=5817, max=73314, avg=17396.31, stdev=12546.18 00:36:36.459 clat (usec): min=1746, max=40537, avg=23117.75, stdev=3665.34 00:36:36.459 lat (usec): min=1785, max=40559, avg=23135.15, stdev=3666.47 00:36:36.459 clat percentiles (usec): 00:36:36.459 | 1.00th=[ 2147], 5.00th=[16581], 10.00th=[22938], 20.00th=[23462], 00:36:36.459 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:36.459 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:36.459 | 99.00th=[25822], 99.50th=[34866], 99.90th=[40633], 99.95th=[40633], 00:36:36.459 | 99.99th=[40633] 00:36:36.459 bw ( KiB/s): min= 2560, max= 3888, per=4.25%, avg=2755.58, stdev=290.80, samples=19 00:36:36.459 iops : min= 640, max= 972, avg=688.84, stdev=72.69, samples=19 00:36:36.459 lat (msec) : 2=0.36%, 4=1.24%, 10=0.49%, 20=5.29%, 50=92.62% 00:36:36.459 cpu : usr=99.08%, sys=0.64%, ctx=9, majf=0, minf=51 00:36:36.459 IO depths : 1=5.6%, 2=11.3%, 4=23.3%, 8=52.8%, 16=7.0%, 32=0.0%, >=64=0.0% 00:36:36.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:36.459 complete : 0=0.0%, 4=93.7%, 8=0.5%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:36:36.459 issued rwts: total=6882,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:36.459 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:36.459 filename1: (groupid=0, jobs=1): err= 0: pid=2237644: Fri Oct 11 12:13:37 2024 00:36:36.459 read: IOPS=663, BW=2654KiB/s (2717kB/s)(25.9MiB/10003msec) 00:36:36.459 slat (nsec): min=5687, max=76566, avg=15310.99, stdev=11296.00 00:36:36.459 clat (usec): min=3472, max=68257, avg=24048.33, stdev=2466.65 00:36:36.459 lat (usec): min=3479, max=68273, avg=24063.64, stdev=2466.94 00:36:36.459 clat percentiles (usec): 00:36:36.459 | 1.00th=[14877], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725], 00:36:36.459 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:36:36.459 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:36:36.459 | 99.00th=[31327], 99.50th=[33424], 99.90th=[54789], 99.95th=[54789], 00:36:36.459 | 99.99th=[68682] 00:36:36.459 bw ( KiB/s): min= 2436, max= 2688, per=4.07%, avg=2638.74, stdev=62.22, samples=19 00:36:36.459 iops : min= 609, max= 672, avg=659.63, stdev=15.52, samples=19 00:36:36.459 lat (msec) : 4=0.06%, 10=0.33%, 20=1.40%, 50=97.97%, 100=0.24% 00:36:36.459 cpu : usr=98.97%, sys=0.73%, ctx=13, majf=0, minf=36 00:36:36.459 IO depths : 1=0.2%, 2=0.5%, 4=3.8%, 8=78.0%, 16=17.5%, 32=0.0%, >=64=0.0% 00:36:36.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:36.459 complete : 0=0.0%, 4=89.6%, 8=7.9%, 16=2.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:36.459 issued rwts: total=6636,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:36.459 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:36.459 filename1: (groupid=0, jobs=1): err= 0: pid=2237645: Fri Oct 11 12:13:37 2024 00:36:36.459 read: IOPS=669, BW=2680KiB/s (2744kB/s)(26.2MiB/10007msec) 00:36:36.459 slat (nsec): min=5814, max=80013, avg=18441.63, stdev=11990.96 00:36:36.459 clat (usec): min=8975, max=34244, avg=23714.45, stdev=1554.18 00:36:36.459 lat (usec): min=8987, max=34250, avg=23732.89, stdev=1554.63 00:36:36.459 clat percentiles (usec): 00:36:36.459 | 1.00th=[14877], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:36:36.459 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:36.459 | 70.00th=[24249], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:36.459 | 99.00th=[25560], 99.50th=[25822], 99.90th=[32113], 99.95th=[34341], 00:36:36.459 | 99.99th=[34341] 00:36:36.460 bw ( KiB/s): min= 2560, max= 2816, per=4.13%, avg=2680.95, stdev=79.51, samples=19 00:36:36.460 iops : min= 640, max= 704, avg=670.21, stdev=19.88, samples=19 00:36:36.460 lat (msec) : 10=0.04%, 20=2.85%, 50=97.11% 00:36:36.460 cpu : usr=98.73%, sys=0.84%, ctx=65, majf=0, minf=56 00:36:36.460 IO depths : 1=6.1%, 2=12.1%, 4=24.5%, 8=50.9%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:36.460 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:36.460 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:36.460 issued rwts: total=6704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:36.460 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:36.460 filename1: (groupid=0, jobs=1): err= 0: pid=2237646: Fri Oct 11 12:13:37 2024 00:36:36.460 read: IOPS=677, BW=2712KiB/s (2777kB/s)(26.5MiB/10003msec) 00:36:36.460 slat (usec): min=5, max=101, avg=16.38, stdev=13.37 00:36:36.460 clat (usec): min=7681, max=40987, avg=23524.76, stdev=3558.83 00:36:36.460 lat (usec): min=7687, max=41006, avg=23541.14, stdev=3559.25 00:36:36.460 clat percentiles (usec): 
00:36:36.460 | 1.00th=[13435], 5.00th=[17171], 10.00th=[19268], 20.00th=[21103], 00:36:36.460 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:36.460 | 70.00th=[24249], 80.00th=[24511], 90.00th=[27132], 95.00th=[29754], 00:36:36.460 | 99.00th=[34866], 99.50th=[36963], 99.90th=[41157], 99.95th=[41157], 00:36:36.460 | 99.99th=[41157] 00:36:36.460 bw ( KiB/s): min= 2501, max= 2832, per=4.16%, avg=2698.58, stdev=73.86, samples=19 00:36:36.460 iops : min= 625, max= 708, avg=674.58, stdev=18.52, samples=19 00:36:36.460 lat (msec) : 10=0.10%, 20=13.87%, 50=86.02% 00:36:36.460 cpu : usr=98.45%, sys=1.07%, ctx=116, majf=0, minf=26 00:36:36.460 IO depths : 1=0.3%, 2=0.7%, 4=4.0%, 8=79.4%, 16=15.7%, 32=0.0%, >=64=0.0% 00:36:36.460 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:36.460 complete : 0=0.0%, 4=89.4%, 8=8.2%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:36.460 issued rwts: total=6782,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:36.460 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:36.460 filename1: (groupid=0, jobs=1): err= 0: pid=2237647: Fri Oct 11 12:13:37 2024 00:36:36.460 read: IOPS=663, BW=2655KiB/s (2719kB/s)(25.9MiB/10002msec) 00:36:36.460 slat (nsec): min=5673, max=88533, avg=19144.78, stdev=14575.61 00:36:36.460 clat (usec): min=2007, max=45565, avg=24020.63, stdev=2114.51 00:36:36.460 lat (usec): min=2013, max=45591, avg=24039.78, stdev=2114.97 00:36:36.460 clat percentiles (usec): 00:36:36.460 | 1.00th=[17957], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725], 00:36:36.460 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:36.460 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:36:36.460 | 99.00th=[30802], 99.50th=[33817], 99.90th=[45351], 99.95th=[45351], 00:36:36.460 | 99.99th=[45351] 00:36:36.460 bw ( KiB/s): min= 2480, max= 2688, per=4.07%, avg=2640.79, stdev=56.56, samples=19 00:36:36.460 iops : min= 620, max= 672, avg=660.16, stdev=14.12, samples=19 00:36:36.460 lat (msec) : 4=0.14%, 10=0.30%, 20=1.37%, 50=98.19% 00:36:36.460 cpu : usr=98.79%, sys=0.83%, ctx=89, majf=0, minf=33 00:36:36.460 IO depths : 1=0.3%, 2=0.5%, 4=2.4%, 8=79.8%, 16=17.0%, 32=0.0%, >=64=0.0% 00:36:36.460 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:36.460 complete : 0=0.0%, 4=89.6%, 8=9.2%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:36.460 issued rwts: total=6640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:36.460 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:36.460 filename1: (groupid=0, jobs=1): err= 0: pid=2237648: Fri Oct 11 12:13:37 2024 00:36:36.460 read: IOPS=665, BW=2661KiB/s (2725kB/s)(26.0MiB/10006msec) 00:36:36.460 slat (usec): min=5, max=106, avg=25.28, stdev=16.15 00:36:36.460 clat (usec): min=9725, max=33512, avg=23826.54, stdev=1163.57 00:36:36.460 lat (usec): min=9731, max=33536, avg=23851.82, stdev=1163.06 00:36:36.460 clat percentiles (usec): 00:36:36.460 | 1.00th=[22676], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:36:36.460 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:36.460 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:36.460 | 99.00th=[25297], 99.50th=[25560], 99.90th=[33424], 99.95th=[33424], 00:36:36.460 | 99.99th=[33424] 00:36:36.460 bw ( KiB/s): min= 2560, max= 2688, per=4.09%, avg=2653.68, stdev=57.55, samples=19 00:36:36.460 iops : min= 640, max= 672, avg=663.37, stdev=14.36, samples=19 00:36:36.460 lat (msec) : 10=0.24%, 20=0.27%, 
50=99.49% 00:36:36.460 cpu : usr=98.67%, sys=0.84%, ctx=133, majf=0, minf=37 00:36:36.460 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:36.460 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:36.460 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:36.460 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:36.460 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:36.460 filename1: (groupid=0, jobs=1): err= 0: pid=2237649: Fri Oct 11 12:13:37 2024 00:36:36.460 read: IOPS=665, BW=2661KiB/s (2725kB/s)(26.0MiB/10004msec) 00:36:36.460 slat (nsec): min=5854, max=77173, avg=20852.49, stdev=12626.58 00:36:36.460 clat (usec): min=13263, max=27851, avg=23856.89, stdev=810.98 00:36:36.460 lat (usec): min=13288, max=27878, avg=23877.74, stdev=810.68 00:36:36.460 clat percentiles (usec): 00:36:36.460 | 1.00th=[22414], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:36:36.460 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:36.460 | 70.00th=[24249], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:36.460 | 99.00th=[25297], 99.50th=[25297], 99.90th=[25822], 99.95th=[27395], 00:36:36.460 | 99.99th=[27919] 00:36:36.460 bw ( KiB/s): min= 2554, max= 2688, per=4.10%, avg=2660.42, stdev=54.11, samples=19 00:36:36.460 iops : min= 638, max= 672, avg=665.05, stdev=13.57, samples=19 00:36:36.460 lat (msec) : 20=0.51%, 50=99.49% 00:36:36.460 cpu : usr=97.73%, sys=1.50%, ctx=245, majf=0, minf=31 00:36:36.460 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:36.460 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:36.460 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:36.460 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:36.460 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:36.460 filename1: (groupid=0, jobs=1): err= 0: pid=2237650: Fri Oct 11 12:13:37 2024 00:36:36.460 read: IOPS=683, BW=2733KiB/s (2799kB/s)(26.7MiB/10016msec) 00:36:36.460 slat (usec): min=5, max=105, avg=20.51, stdev=16.26 00:36:36.460 clat (usec): min=8929, max=38308, avg=23233.75, stdev=3136.43 00:36:36.460 lat (usec): min=8952, max=38314, avg=23254.26, stdev=3138.62 00:36:36.460 clat percentiles (usec): 00:36:36.460 | 1.00th=[14222], 5.00th=[16581], 10.00th=[17957], 20.00th=[23200], 00:36:36.460 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:36.460 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[25297], 00:36:36.460 | 99.00th=[32637], 99.50th=[33817], 99.90th=[38011], 99.95th=[38536], 00:36:36.460 | 99.99th=[38536] 00:36:36.460 bw ( KiB/s): min= 2560, max= 3328, per=4.21%, avg=2730.90, stdev=172.03, samples=20 00:36:36.460 iops : min= 640, max= 832, avg=682.70, stdev=43.01, samples=20 00:36:36.460 lat (msec) : 10=0.03%, 20=12.65%, 50=87.32% 00:36:36.460 cpu : usr=98.82%, sys=0.81%, ctx=64, majf=0, minf=50 00:36:36.460 IO depths : 1=4.1%, 2=9.1%, 4=22.0%, 8=56.3%, 16=8.5%, 32=0.0%, >=64=0.0% 00:36:36.460 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:36.460 complete : 0=0.0%, 4=93.5%, 8=0.8%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:36.460 issued rwts: total=6844,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:36.460 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:36.460 filename2: (groupid=0, jobs=1): err= 0: pid=2237651: Fri Oct 11 12:13:37 2024 00:36:36.460 read: IOPS=667, 
BW=2671KiB/s (2735kB/s)(26.1MiB/10016msec) 00:36:36.460 slat (nsec): min=5457, max=76378, avg=15175.03, stdev=11563.24 00:36:36.460 clat (usec): min=10482, max=26100, avg=23838.22, stdev=1265.01 00:36:36.460 lat (usec): min=10527, max=26107, avg=23853.39, stdev=1264.32 00:36:36.460 clat percentiles (usec): 00:36:36.460 | 1.00th=[16712], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:36:36.460 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:36.460 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:36:36.460 | 99.00th=[25560], 99.50th=[25822], 99.90th=[26084], 99.95th=[26084], 00:36:36.460 | 99.99th=[26084] 00:36:36.460 bw ( KiB/s): min= 2560, max= 2816, per=4.11%, avg=2668.50, stdev=62.56, samples=20 00:36:36.460 iops : min= 640, max= 704, avg=667.10, stdev=15.63, samples=20 00:36:36.460 lat (msec) : 20=1.44%, 50=98.56% 00:36:36.460 cpu : usr=98.70%, sys=0.81%, ctx=135, majf=0, minf=25 00:36:36.460 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:36.460 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:36.460 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:36.460 issued rwts: total=6688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:36.460 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:36.460 filename2: (groupid=0, jobs=1): err= 0: pid=2237652: Fri Oct 11 12:13:37 2024 00:36:36.460 read: IOPS=686, BW=2747KiB/s (2813kB/s)(26.8MiB/10002msec) 00:36:36.460 slat (nsec): min=5760, max=96475, avg=17844.49, stdev=14284.16 00:36:36.460 clat (usec): min=5255, max=45422, avg=23175.37, stdev=4678.60 00:36:36.460 lat (usec): min=5261, max=45444, avg=23193.22, stdev=4679.32 00:36:36.460 clat percentiles (usec): 00:36:36.461 | 1.00th=[12387], 5.00th=[15401], 10.00th=[16712], 20.00th=[20055], 00:36:36.461 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23725], 60.00th=[23987], 00:36:36.461 | 70.00th=[23987], 80.00th=[24511], 90.00th=[26608], 95.00th=[32113], 00:36:36.461 | 99.00th=[39060], 99.50th=[39584], 99.90th=[45351], 99.95th=[45351], 00:36:36.461 | 99.99th=[45351] 00:36:36.461 bw ( KiB/s): min= 2528, max= 2960, per=4.22%, avg=2735.11, stdev=103.01, samples=19 00:36:36.461 iops : min= 632, max= 740, avg=683.74, stdev=25.76, samples=19 00:36:36.461 lat (msec) : 10=0.52%, 20=19.27%, 50=80.20% 00:36:36.461 cpu : usr=98.50%, sys=1.08%, ctx=150, majf=0, minf=35 00:36:36.461 IO depths : 1=1.0%, 2=3.5%, 4=12.6%, 8=70.0%, 16=12.9%, 32=0.0%, >=64=0.0% 00:36:36.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:36.461 complete : 0=0.0%, 4=91.2%, 8=4.4%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:36.461 issued rwts: total=6870,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:36.461 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:36.461 filename2: (groupid=0, jobs=1): err= 0: pid=2237653: Fri Oct 11 12:13:37 2024 00:36:36.461 read: IOPS=666, BW=2667KiB/s (2731kB/s)(26.1MiB/10013msec) 00:36:36.461 slat (nsec): min=5871, max=77119, avg=21765.30, stdev=13105.26 00:36:36.461 clat (usec): min=8708, max=32731, avg=23800.11, stdev=1401.44 00:36:36.461 lat (usec): min=8720, max=32752, avg=23821.87, stdev=1401.36 00:36:36.461 clat percentiles (usec): 00:36:36.461 | 1.00th=[17957], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:36:36.461 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:36.461 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:36.461 | 99.00th=[25560], 
99.50th=[28705], 99.90th=[32637], 99.95th=[32637], 00:36:36.461 | 99.99th=[32637] 00:36:36.461 bw ( KiB/s): min= 2560, max= 2840, per=4.11%, avg=2663.30, stdev=69.91, samples=20 00:36:36.461 iops : min= 640, max= 710, avg=665.80, stdev=17.47, samples=20 00:36:36.461 lat (msec) : 10=0.18%, 20=1.17%, 50=98.65% 00:36:36.461 cpu : usr=98.97%, sys=0.71%, ctx=36, majf=0, minf=39 00:36:36.461 IO depths : 1=6.2%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:36.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:36.461 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:36.461 issued rwts: total=6675,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:36.461 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:36.461 filename2: (groupid=0, jobs=1): err= 0: pid=2237654: Fri Oct 11 12:13:37 2024 00:36:36.461 read: IOPS=668, BW=2675KiB/s (2740kB/s)(26.1MiB/10002msec) 00:36:36.461 slat (usec): min=5, max=108, avg=25.30, stdev=15.69 00:36:36.461 clat (usec): min=7650, max=58844, avg=23675.86, stdev=2073.21 00:36:36.461 lat (usec): min=7669, max=58862, avg=23701.16, stdev=2074.04 00:36:36.461 clat percentiles (usec): 00:36:36.461 | 1.00th=[14615], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:36:36.461 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:36.461 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:36.461 | 99.00th=[25560], 99.50th=[28443], 99.90th=[45351], 99.95th=[45351], 00:36:36.461 | 99.99th=[58983] 00:36:36.461 bw ( KiB/s): min= 2528, max= 2864, per=4.10%, avg=2661.84, stdev=77.88, samples=19 00:36:36.461 iops : min= 632, max= 716, avg=665.42, stdev=19.46, samples=19 00:36:36.461 lat (msec) : 10=0.48%, 20=1.99%, 50=97.50%, 100=0.03% 00:36:36.461 cpu : usr=99.01%, sys=0.66%, ctx=58, majf=0, minf=30 00:36:36.461 IO depths : 1=5.8%, 2=11.6%, 4=23.4%, 8=52.3%, 16=7.0%, 32=0.0%, >=64=0.0% 00:36:36.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:36.461 complete : 0=0.0%, 4=93.7%, 8=0.7%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:36.461 issued rwts: total=6690,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:36.461 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:36.461 filename2: (groupid=0, jobs=1): err= 0: pid=2237655: Fri Oct 11 12:13:37 2024 00:36:36.461 read: IOPS=667, BW=2671KiB/s (2735kB/s)(26.1MiB/10016msec) 00:36:36.461 slat (nsec): min=5852, max=69345, avg=14522.98, stdev=9038.56 00:36:36.461 clat (usec): min=10508, max=26215, avg=23838.64, stdev=1263.71 00:36:36.461 lat (usec): min=10524, max=26224, avg=23853.16, stdev=1263.48 00:36:36.461 clat percentiles (usec): 00:36:36.461 | 1.00th=[16712], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:36:36.461 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:36.461 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[24773], 00:36:36.461 | 99.00th=[25560], 99.50th=[25822], 99.90th=[26084], 99.95th=[26084], 00:36:36.461 | 99.99th=[26346] 00:36:36.461 bw ( KiB/s): min= 2560, max= 2816, per=4.11%, avg=2668.50, stdev=62.56, samples=20 00:36:36.461 iops : min= 640, max= 704, avg=667.10, stdev=15.63, samples=20 00:36:36.461 lat (msec) : 20=1.44%, 50=98.56% 00:36:36.461 cpu : usr=98.80%, sys=0.82%, ctx=70, majf=0, minf=29 00:36:36.461 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:36.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:36.461 complete : 0=0.0%, 4=94.1%, 
8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:36.461 issued rwts: total=6688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:36.461 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:36.461 filename2: (groupid=0, jobs=1): err= 0: pid=2237656: Fri Oct 11 12:13:37 2024 00:36:36.461 read: IOPS=665, BW=2662KiB/s (2725kB/s)(26.0MiB/10003msec) 00:36:36.461 slat (nsec): min=5835, max=68572, avg=18582.60, stdev=10660.97 00:36:36.461 clat (usec): min=7566, max=41464, avg=23877.90, stdev=1583.71 00:36:36.461 lat (usec): min=7572, max=41482, avg=23896.48, stdev=1584.11 00:36:36.461 clat percentiles (usec): 00:36:36.461 | 1.00th=[17695], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:36:36.461 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:36.461 | 70.00th=[24249], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:36.461 | 99.00th=[25822], 99.50th=[30278], 99.90th=[41157], 99.95th=[41681], 00:36:36.461 | 99.99th=[41681] 00:36:36.461 bw ( KiB/s): min= 2560, max= 2688, per=4.09%, avg=2653.68, stdev=57.55, samples=19 00:36:36.461 iops : min= 640, max= 672, avg=663.37, stdev=14.36, samples=19 00:36:36.461 lat (msec) : 10=0.24%, 20=1.05%, 50=98.71% 00:36:36.461 cpu : usr=98.91%, sys=0.75%, ctx=93, majf=0, minf=41 00:36:36.461 IO depths : 1=5.8%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.7%, 32=0.0%, >=64=0.0% 00:36:36.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:36.461 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:36.461 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:36.461 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:36.461 filename2: (groupid=0, jobs=1): err= 0: pid=2237657: Fri Oct 11 12:13:37 2024 00:36:36.461 read: IOPS=664, BW=2659KiB/s (2723kB/s)(26.0MiB/10012msec) 00:36:36.461 slat (nsec): min=5812, max=68515, avg=16565.49, stdev=9580.28 00:36:36.461 clat (usec): min=16618, max=33078, avg=23913.41, stdev=918.39 00:36:36.461 lat (usec): min=16625, max=33105, avg=23929.98, stdev=918.35 00:36:36.461 clat percentiles (usec): 00:36:36.461 | 1.00th=[22676], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:36:36.461 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:36.461 | 70.00th=[24249], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:36.461 | 99.00th=[25560], 99.50th=[25822], 99.90th=[32900], 99.95th=[33162], 00:36:36.461 | 99.99th=[33162] 00:36:36.461 bw ( KiB/s): min= 2560, max= 2688, per=4.09%, avg=2653.37, stdev=57.37, samples=19 00:36:36.461 iops : min= 640, max= 672, avg=663.26, stdev=14.30, samples=19 00:36:36.461 lat (msec) : 20=0.72%, 50=99.28% 00:36:36.461 cpu : usr=98.86%, sys=0.85%, ctx=10, majf=0, minf=34 00:36:36.461 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:36.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:36.461 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:36.461 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:36.461 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:36.461 filename2: (groupid=0, jobs=1): err= 0: pid=2237658: Fri Oct 11 12:13:37 2024 00:36:36.461 read: IOPS=665, BW=2662KiB/s (2725kB/s)(26.0MiB/10003msec) 00:36:36.461 slat (nsec): min=5981, max=89421, avg=24115.81, stdev=13198.52 00:36:36.461 clat (usec): min=7524, max=46072, avg=23824.94, stdev=1688.61 00:36:36.461 lat (usec): min=7537, max=46090, avg=23849.06, stdev=1688.36 00:36:36.461 clat 
percentiles (usec): 00:36:36.461 | 1.00th=[22414], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:36:36.461 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:36.461 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:36.461 | 99.00th=[25560], 99.50th=[25560], 99.90th=[45876], 99.95th=[45876], 00:36:36.461 | 99.99th=[45876] 00:36:36.461 bw ( KiB/s): min= 2436, max= 2688, per=4.08%, avg=2647.16, stdev=73.57, samples=19 00:36:36.461 iops : min= 609, max= 672, avg=661.74, stdev=18.37, samples=19 00:36:36.461 lat (msec) : 10=0.48%, 20=0.24%, 50=99.28% 00:36:36.461 cpu : usr=99.10%, sys=0.61%, ctx=18, majf=0, minf=27 00:36:36.461 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:36.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:36.461 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:36.461 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:36.461 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:36.461 00:36:36.461 Run status group 0 (all jobs): 00:36:36.461 READ: bw=63.3MiB/s (66.4MB/s), 2654KiB/s-3148KiB/s (2717kB/s-3224kB/s), io=635MiB (666MB), run=10002-10032msec 00:36:36.461 12:13:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:36:36.461 12:13:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:36.461 12:13:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:36.461 12:13:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:36.461 12:13:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:36.461 12:13:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:36.461 12:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.461 12:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:36.461 12:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.461 12:13:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:36.461 12:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.461 12:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:36.461 12:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.461 12:13:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:36.461 12:13:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:36.461 12:13:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:36.461 12:13:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:36.461 12:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.461 12:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@10 -- # set +x 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:36.462 bdev_null0 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
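The subsystem setup here repeats the same RPC recipe used earlier in the test: create a 64 MB null bdev with 512-byte blocks and 16 bytes of metadata at the requested DIF type, create an NVMe-oF subsystem, attach the bdev as a namespace, and add a TCP listener on 10.0.0.2:4420. A minimal shell sketch of that recipe follows; it only restates the rpc_cmd calls visible in the log, with SPDK's scripts/rpc.py standing in for the harness's rpc_cmd wrapper, and the subsystem index and DIF type taken as parameters.

    # Sketch of the per-subsystem setup performed by the test (values from the log).
    create_subsystem() {
        local id=$1 dif_type=$2
        # 64 MB null bdev, 512-byte blocks, 16-byte metadata, requested DIF type
        rpc.py bdev_null_create "bdev_null${id}" 64 512 --md-size 16 --dif-type "${dif_type}"
        rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode${id}" \
            --serial-number "53313233-${id}" --allow-any-host
        rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode${id}" "bdev_null${id}"
        rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode${id}" \
            -t tcp -a 10.0.0.2 -s 4420
    }

    # This pass of the test uses NULL_DIF=1 and two subsystems:
    create_subsystem 0 1
    create_subsystem 1 1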
00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:36.462 [2024-10-11 12:13:37.890296] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:36.462 bdev_null1 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for 
subsystem in "${@:-1}" 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:36.462 { 00:36:36.462 "params": { 00:36:36.462 "name": "Nvme$subsystem", 00:36:36.462 "trtype": "$TEST_TRANSPORT", 00:36:36.462 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:36.462 "adrfam": "ipv4", 00:36:36.462 "trsvcid": "$NVMF_PORT", 00:36:36.462 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:36.462 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:36.462 "hdgst": ${hdgst:-false}, 00:36:36.462 "ddgst": ${ddgst:-false} 00:36:36.462 }, 00:36:36.462 "method": "bdev_nvme_attach_controller" 00:36:36.462 } 00:36:36.462 EOF 00:36:36.462 )") 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:36.462 { 00:36:36.462 "params": { 00:36:36.462 "name": "Nvme$subsystem", 00:36:36.462 "trtype": "$TEST_TRANSPORT", 00:36:36.462 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:36.462 "adrfam": "ipv4", 00:36:36.462 "trsvcid": "$NVMF_PORT", 00:36:36.462 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:36.462 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:36.462 "hdgst": ${hdgst:-false}, 00:36:36.462 "ddgst": ${ddgst:-false} 00:36:36.462 }, 00:36:36.462 "method": "bdev_nvme_attach_controller" 00:36:36.462 } 00:36:36.462 EOF 00:36:36.462 )") 00:36:36.462 12:13:37 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:36.462 12:13:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:36:36.463 12:13:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:36:36.463 12:13:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:36:36.463 "params": { 00:36:36.463 "name": "Nvme0", 00:36:36.463 "trtype": "tcp", 00:36:36.463 "traddr": "10.0.0.2", 00:36:36.463 "adrfam": "ipv4", 00:36:36.463 "trsvcid": "4420", 00:36:36.463 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:36.463 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:36.463 "hdgst": false, 00:36:36.463 "ddgst": false 00:36:36.463 }, 00:36:36.463 "method": "bdev_nvme_attach_controller" 00:36:36.463 },{ 00:36:36.463 "params": { 00:36:36.463 "name": "Nvme1", 00:36:36.463 "trtype": "tcp", 00:36:36.463 "traddr": "10.0.0.2", 00:36:36.463 "adrfam": "ipv4", 00:36:36.463 "trsvcid": "4420", 00:36:36.463 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:36.463 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:36.463 "hdgst": false, 00:36:36.463 "ddgst": false 00:36:36.463 }, 00:36:36.463 "method": "bdev_nvme_attach_controller" 00:36:36.463 }' 00:36:36.463 12:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:36.463 12:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:36.463 12:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:36.463 12:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:36.463 12:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:36.463 12:13:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:36.463 12:13:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:36.463 12:13:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:36.463 12:13:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:36.463 12:13:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:36.463 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:36.463 ... 00:36:36.463 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:36.463 ... 
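For reference, the fio run being assembled in the trace above reduces to the following shape: the SPDK bdev fio plugin is LD_PRELOADed and fio is handed a generated SPDK JSON config plus a generated job file through /dev/fd descriptors, which avoids writing temp files. This is a minimal sketch reconstructed from the log, not the dif.sh helpers themselves: the outer "subsystems"/"bdev" JSON wrapper, the job-file body, and the Nvme0n1 bdev name are assumptions; only the attach-controller parameters and the plugin/fio paths are taken from the trace.

#!/usr/bin/env bash
# Minimal sketch of the fio invocation above; paths come from the log.
# Assumed (not shown verbatim in the trace): the outer "subsystems" JSON
# wrapper, the job-file contents, and the Nvme0n1 bdev name.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
PLUGIN=$SPDK/build/fio/spdk_bdev

# SPDK JSON config: one bdev_nvme_attach_controller per target subsystem,
# parameters copied from the printf'd config in the trace.
json_conf='{"subsystems":[{"subsystem":"bdev","config":[
  {"method":"bdev_nvme_attach_controller","params":{
    "name":"Nvme0","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4",
    "trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode0",
    "hostnqn":"nqn.2016-06.io.spdk:host0","hdgst":false,"ddgst":false}}]}]}'

# Job file matching the run above: randread, bs=8k,16k,128k, 2 jobs, iodepth=8.
job_file=$'[global]\nioengine=spdk_bdev\nthread=1\nrw=randread\nbs=8k,16k,128k\nnumjobs=2\niodepth=8\nruntime=5\ntime_based=1\n\n[filename0]\nfilename=Nvme0n1'

# fio reads both the JSON config and the job file through /dev/fd, exactly as
# the /dev/fd/62 and /dev/fd/61 arguments in the trace suggest.
LD_PRELOAD=$PLUGIN /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf <(printf '%s\n' "$json_conf") <(printf '%s\n' "$job_file")
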
00:36:36.463 fio-3.35 00:36:36.463 Starting 4 threads 00:36:41.747 00:36:41.747 filename0: (groupid=0, jobs=1): err= 0: pid=2240007: Fri Oct 11 12:13:44 2024 00:36:41.747 read: IOPS=2907, BW=22.7MiB/s (23.8MB/s)(114MiB/5002msec) 00:36:41.747 slat (usec): min=5, max=107, avg= 8.95, stdev= 3.21 00:36:41.747 clat (usec): min=1733, max=4956, avg=2726.66, stdev=189.66 00:36:41.747 lat (usec): min=1756, max=4965, avg=2735.61, stdev=189.74 00:36:41.747 clat percentiles (usec): 00:36:41.747 | 1.00th=[ 2343], 5.00th=[ 2507], 10.00th=[ 2606], 20.00th=[ 2671], 00:36:41.747 | 30.00th=[ 2671], 40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2704], 00:36:41.747 | 70.00th=[ 2737], 80.00th=[ 2769], 90.00th=[ 2933], 95.00th=[ 2966], 00:36:41.747 | 99.00th=[ 3490], 99.50th=[ 3785], 99.90th=[ 4424], 99.95th=[ 4621], 00:36:41.747 | 99.99th=[ 4948] 00:36:41.747 bw ( KiB/s): min=23088, max=23360, per=24.69%, avg=23249.78, stdev=96.48, samples=9 00:36:41.747 iops : min= 2886, max= 2920, avg=2906.22, stdev=12.06, samples=9 00:36:41.747 lat (msec) : 2=0.06%, 4=99.64%, 10=0.30% 00:36:41.747 cpu : usr=96.76%, sys=2.96%, ctx=7, majf=0, minf=99 00:36:41.747 IO depths : 1=0.1%, 2=0.2%, 4=73.8%, 8=25.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:41.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:41.747 complete : 0=0.0%, 4=90.9%, 8=9.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:41.747 issued rwts: total=14544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:41.747 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:41.747 filename0: (groupid=0, jobs=1): err= 0: pid=2240008: Fri Oct 11 12:13:44 2024 00:36:41.747 read: IOPS=2921, BW=22.8MiB/s (23.9MB/s)(114MiB/5003msec) 00:36:41.747 slat (nsec): min=5657, max=50834, avg=8193.85, stdev=2831.18 00:36:41.747 clat (usec): min=1497, max=4549, avg=2715.20, stdev=152.21 00:36:41.747 lat (usec): min=1513, max=4561, avg=2723.40, stdev=152.21 00:36:41.747 clat percentiles (usec): 00:36:41.747 | 1.00th=[ 2311], 5.00th=[ 2540], 10.00th=[ 2638], 20.00th=[ 2671], 00:36:41.747 | 30.00th=[ 2671], 40.00th=[ 2704], 50.00th=[ 2704], 60.00th=[ 2704], 00:36:41.747 | 70.00th=[ 2704], 80.00th=[ 2737], 90.00th=[ 2900], 95.00th=[ 2933], 00:36:41.747 | 99.00th=[ 3228], 99.50th=[ 3458], 99.90th=[ 4113], 99.95th=[ 4228], 00:36:41.747 | 99.99th=[ 4555] 00:36:41.747 bw ( KiB/s): min=23217, max=23600, per=24.83%, avg=23379.67, stdev=124.74, samples=9 00:36:41.747 iops : min= 2902, max= 2950, avg=2922.44, stdev=15.61, samples=9 00:36:41.747 lat (msec) : 2=0.27%, 4=99.58%, 10=0.14% 00:36:41.747 cpu : usr=96.46%, sys=3.26%, ctx=6, majf=0, minf=44 00:36:41.747 IO depths : 1=0.1%, 2=0.1%, 4=72.5%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:41.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:41.747 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:41.747 issued rwts: total=14618,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:41.747 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:41.747 filename1: (groupid=0, jobs=1): err= 0: pid=2240010: Fri Oct 11 12:13:44 2024 00:36:41.747 read: IOPS=2933, BW=22.9MiB/s (24.0MB/s)(115MiB/5001msec) 00:36:41.747 slat (nsec): min=5639, max=71193, avg=9221.62, stdev=3451.52 00:36:41.747 clat (usec): min=970, max=3769, avg=2706.67, stdev=153.58 00:36:41.747 lat (usec): min=978, max=3778, avg=2715.89, stdev=153.52 00:36:41.747 clat percentiles (usec): 00:36:41.747 | 1.00th=[ 2212], 5.00th=[ 2507], 10.00th=[ 2573], 20.00th=[ 2671], 00:36:41.747 | 30.00th=[ 2671], 40.00th=[ 2704], 
50.00th=[ 2704], 60.00th=[ 2704], 00:36:41.747 | 70.00th=[ 2704], 80.00th=[ 2737], 90.00th=[ 2900], 95.00th=[ 2966], 00:36:41.747 | 99.00th=[ 3195], 99.50th=[ 3425], 99.90th=[ 3687], 99.95th=[ 3720], 00:36:41.747 | 99.99th=[ 3752] 00:36:41.747 bw ( KiB/s): min=23312, max=23600, per=24.93%, avg=23473.78, stdev=93.45, samples=9 00:36:41.747 iops : min= 2914, max= 2950, avg=2934.22, stdev=11.68, samples=9 00:36:41.747 lat (usec) : 1000=0.02% 00:36:41.747 lat (msec) : 2=0.20%, 4=99.78% 00:36:41.747 cpu : usr=96.76%, sys=2.96%, ctx=12, majf=0, minf=93 00:36:41.747 IO depths : 1=0.1%, 2=0.1%, 4=65.6%, 8=34.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:41.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:41.747 complete : 0=0.0%, 4=97.7%, 8=2.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:41.747 issued rwts: total=14671,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:41.747 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:41.747 filename1: (groupid=0, jobs=1): err= 0: pid=2240011: Fri Oct 11 12:13:44 2024 00:36:41.747 read: IOPS=3008, BW=23.5MiB/s (24.6MB/s)(118MiB/5002msec) 00:36:41.747 slat (nsec): min=8228, max=88378, avg=9528.38, stdev=2674.98 00:36:41.747 clat (usec): min=1197, max=4323, avg=2633.25, stdev=289.43 00:36:41.747 lat (usec): min=1207, max=4343, avg=2642.77, stdev=289.38 00:36:41.747 clat percentiles (usec): 00:36:41.747 | 1.00th=[ 1958], 5.00th=[ 2147], 10.00th=[ 2212], 20.00th=[ 2474], 00:36:41.747 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2704], 00:36:41.747 | 70.00th=[ 2704], 80.00th=[ 2704], 90.00th=[ 2737], 95.00th=[ 3163], 00:36:41.747 | 99.00th=[ 3621], 99.50th=[ 3654], 99.90th=[ 3916], 99.95th=[ 4015], 00:36:41.747 | 99.99th=[ 4293] 00:36:41.747 bw ( KiB/s): min=23888, max=24352, per=25.62%, avg=24119.11, stdev=185.92, samples=9 00:36:41.747 iops : min= 2986, max= 3044, avg=3014.89, stdev=23.24, samples=9 00:36:41.748 lat (msec) : 2=1.87%, 4=98.06%, 10=0.07% 00:36:41.748 cpu : usr=94.62%, sys=3.56%, ctx=190, majf=0, minf=71 00:36:41.748 IO depths : 1=0.1%, 2=0.5%, 4=71.5%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:41.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:41.748 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:41.748 issued rwts: total=15049,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:41.748 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:41.748 00:36:41.748 Run status group 0 (all jobs): 00:36:41.748 READ: bw=91.9MiB/s (96.4MB/s), 22.7MiB/s-23.5MiB/s (23.8MB/s-24.6MB/s), io=460MiB (482MB), run=5001-5003msec 00:36:41.748 12:13:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:36:41.748 12:13:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:41.748 12:13:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:41.748 12:13:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:41.748 12:13:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:41.748 12:13:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:41.748 12:13:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.748 12:13:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:41.748 12:13:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.748 12:13:44 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:41.748 12:13:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.748 12:13:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:41.748 12:13:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.748 12:13:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:41.748 12:13:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:41.748 12:13:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:41.748 12:13:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:41.748 12:13:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.748 12:13:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:41.748 12:13:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.748 12:13:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:41.748 12:13:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.748 12:13:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:41.748 12:13:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.748 00:36:41.748 real 0m24.468s 00:36:41.748 user 5m16.651s 00:36:41.748 sys 0m4.641s 00:36:41.748 12:13:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:41.748 12:13:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:41.748 ************************************ 00:36:41.748 END TEST fio_dif_rand_params 00:36:41.748 ************************************ 00:36:41.748 12:13:44 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:36:41.748 12:13:44 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:41.748 12:13:44 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:41.748 12:13:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:41.748 ************************************ 00:36:41.748 START TEST fio_dif_digest 00:36:41.748 ************************************ 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- 
target/dif.sh@28 -- # local sub 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:41.748 bdev_null0 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:41.748 [2024-10-11 12:13:44.378421] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # config=() 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # local subsystem config 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:41.748 { 00:36:41.748 "params": { 00:36:41.748 "name": "Nvme$subsystem", 00:36:41.748 "trtype": "$TEST_TRANSPORT", 00:36:41.748 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:41.748 "adrfam": "ipv4", 00:36:41.748 "trsvcid": "$NVMF_PORT", 00:36:41.748 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:41.748 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:36:41.748 "hdgst": ${hdgst:-false}, 00:36:41.748 "ddgst": ${ddgst:-false} 00:36:41.748 }, 00:36:41.748 "method": "bdev_nvme_attach_controller" 00:36:41.748 } 00:36:41.748 EOF 00:36:41.748 )") 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # cat 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # jq . 
00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@583 -- # IFS=, 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:36:41.748 "params": { 00:36:41.748 "name": "Nvme0", 00:36:41.748 "trtype": "tcp", 00:36:41.748 "traddr": "10.0.0.2", 00:36:41.748 "adrfam": "ipv4", 00:36:41.748 "trsvcid": "4420", 00:36:41.748 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:41.748 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:41.748 "hdgst": true, 00:36:41.748 "ddgst": true 00:36:41.748 }, 00:36:41.748 "method": "bdev_nvme_attach_controller" 00:36:41.748 }' 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:41.748 12:13:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:42.031 12:13:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:42.031 12:13:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:42.031 12:13:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:42.031 12:13:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:42.299 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:42.299 ... 
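The digest-test target setup traced above boils down to four RPCs against the running target. A minimal sketch, assuming nvmf_tgt is already up, the TCP transport was created earlier in the suite, and rpc.py talks to the default /var/tmp/spdk.sock socket; the arguments themselves are the ones visible in the rpc_cmd trace.

#!/usr/bin/env bash
# Target-side RPC sequence for the fio_dif_digest test, as traced above.
# Assumes nvmf_tgt is running and the tcp transport already exists.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# 64 MiB null bdev, 512-byte blocks, 16 bytes of metadata, DIF type 3.
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3

# Expose it through an NVMe/TCP subsystem listening on 10.0.0.2:4420.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420

On the initiator side, the generated JSON then attaches with "hdgst": true and "ddgst": true, which is what enables the header and data digests this test exercises.
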
00:36:42.299 fio-3.35 00:36:42.299 Starting 3 threads 00:36:54.524 00:36:54.524 filename0: (groupid=0, jobs=1): err= 0: pid=2241350: Fri Oct 11 12:13:55 2024 00:36:54.524 read: IOPS=348, BW=43.5MiB/s (45.6MB/s)(437MiB/10048msec) 00:36:54.524 slat (nsec): min=5917, max=31495, avg=8069.65, stdev=1689.50 00:36:54.524 clat (usec): min=5616, max=91205, avg=8595.53, stdev=3189.43 00:36:54.524 lat (usec): min=5622, max=91211, avg=8603.60, stdev=3189.36 00:36:54.524 clat percentiles (usec): 00:36:54.524 | 1.00th=[ 6259], 5.00th=[ 6718], 10.00th=[ 6980], 20.00th=[ 7308], 00:36:54.524 | 30.00th=[ 7635], 40.00th=[ 7898], 50.00th=[ 8225], 60.00th=[ 8717], 00:36:54.524 | 70.00th=[ 9110], 80.00th=[ 9634], 90.00th=[10159], 95.00th=[10421], 00:36:54.524 | 99.00th=[11338], 99.50th=[11994], 99.90th=[51119], 99.95th=[90702], 00:36:54.524 | 99.99th=[90702] 00:36:54.524 bw ( KiB/s): min=39168, max=48128, per=41.95%, avg=44744.30, stdev=2284.06, samples=20 00:36:54.524 iops : min= 306, max= 376, avg=349.55, stdev=17.84, samples=20 00:36:54.524 lat (msec) : 10=88.11%, 20=11.55%, 50=0.20%, 100=0.14% 00:36:54.524 cpu : usr=94.32%, sys=5.45%, ctx=9, majf=0, minf=171 00:36:54.524 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:54.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:54.524 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:54.524 issued rwts: total=3498,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:54.524 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:54.524 filename0: (groupid=0, jobs=1): err= 0: pid=2241351: Fri Oct 11 12:13:55 2024 00:36:54.524 read: IOPS=143, BW=18.0MiB/s (18.9MB/s)(181MiB/10046msec) 00:36:54.524 slat (nsec): min=5931, max=34469, avg=7066.47, stdev=1576.73 00:36:54.524 clat (msec): min=6, max=131, avg=20.82, stdev=20.05 00:36:54.524 lat (msec): min=6, max=131, avg=20.82, stdev=20.05 00:36:54.524 clat percentiles (msec): 00:36:54.524 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 10], 00:36:54.524 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 11], 00:36:54.524 | 70.00th=[ 12], 80.00th=[ 51], 90.00th=[ 52], 95.00th=[ 53], 00:36:54.524 | 99.00th=[ 92], 99.50th=[ 93], 99.90th=[ 94], 99.95th=[ 132], 00:36:54.524 | 99.99th=[ 132] 00:36:54.524 bw ( KiB/s): min= 8448, max=26880, per=17.32%, avg=18470.40, stdev=4730.10, samples=20 00:36:54.524 iops : min= 66, max= 210, avg=144.30, stdev=36.95, samples=20 00:36:54.524 lat (msec) : 10=35.92%, 20=40.00%, 50=3.25%, 100=20.76%, 250=0.07% 00:36:54.524 cpu : usr=95.65%, sys=4.13%, ctx=21, majf=0, minf=123 00:36:54.524 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:54.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:54.524 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:54.524 issued rwts: total=1445,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:54.524 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:54.524 filename0: (groupid=0, jobs=1): err= 0: pid=2241352: Fri Oct 11 12:13:55 2024 00:36:54.524 read: IOPS=341, BW=42.7MiB/s (44.7MB/s)(429MiB/10045msec) 00:36:54.524 slat (nsec): min=5845, max=31145, avg=6813.94, stdev=741.86 00:36:54.524 clat (usec): min=4494, max=51426, avg=8767.99, stdev=2058.75 00:36:54.524 lat (usec): min=4501, max=51457, avg=8774.80, stdev=2059.00 00:36:54.524 clat percentiles (usec): 00:36:54.524 | 1.00th=[ 6128], 5.00th=[ 6849], 10.00th=[ 7177], 20.00th=[ 7504], 00:36:54.524 | 30.00th=[ 7767], 
40.00th=[ 8094], 50.00th=[ 8455], 60.00th=[ 8979], 00:36:54.524 | 70.00th=[ 9503], 80.00th=[10028], 90.00th=[10683], 95.00th=[11076], 00:36:54.524 | 99.00th=[11863], 99.50th=[12125], 99.90th=[46924], 99.95th=[51643], 00:36:54.524 | 99.99th=[51643] 00:36:54.524 bw ( KiB/s): min=38912, max=47872, per=41.13%, avg=43865.60, stdev=2232.18, samples=20 00:36:54.524 iops : min= 304, max= 374, avg=342.70, stdev=17.44, samples=20 00:36:54.524 lat (msec) : 10=79.12%, 20=20.73%, 50=0.06%, 100=0.09% 00:36:54.524 cpu : usr=94.42%, sys=5.36%, ctx=19, majf=0, minf=111 00:36:54.524 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:54.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:54.524 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:54.524 issued rwts: total=3429,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:54.524 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:54.524 00:36:54.524 Run status group 0 (all jobs): 00:36:54.524 READ: bw=104MiB/s (109MB/s), 18.0MiB/s-43.5MiB/s (18.9MB/s-45.6MB/s), io=1047MiB (1097MB), run=10045-10048msec 00:36:54.524 12:13:55 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:36:54.524 12:13:55 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:36:54.524 12:13:55 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:36:54.524 12:13:55 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:54.524 12:13:55 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:36:54.524 12:13:55 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:54.524 12:13:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:54.524 12:13:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:54.524 12:13:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:54.524 12:13:55 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:54.524 12:13:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:54.524 12:13:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:54.524 12:13:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:54.524 00:36:54.524 real 0m11.095s 00:36:54.525 user 0m42.146s 00:36:54.525 sys 0m1.830s 00:36:54.525 12:13:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:54.525 12:13:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:54.525 ************************************ 00:36:54.525 END TEST fio_dif_digest 00:36:54.525 ************************************ 00:36:54.525 12:13:55 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:36:54.525 12:13:55 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:36:54.525 12:13:55 nvmf_dif -- nvmf/common.sh@514 -- # nvmfcleanup 00:36:54.525 12:13:55 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:36:54.525 12:13:55 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:54.525 12:13:55 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:36:54.525 12:13:55 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:54.525 12:13:55 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:54.525 rmmod nvme_tcp 00:36:54.525 rmmod nvme_fabrics 00:36:54.525 rmmod nvme_keyring 00:36:54.525 12:13:55 nvmf_dif -- nvmf/common.sh@127 -- # modprobe 
-v -r nvme-fabrics 00:36:54.525 12:13:55 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:36:54.525 12:13:55 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:36:54.525 12:13:55 nvmf_dif -- nvmf/common.sh@515 -- # '[' -n 2231087 ']' 00:36:54.525 12:13:55 nvmf_dif -- nvmf/common.sh@516 -- # killprocess 2231087 00:36:54.525 12:13:55 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 2231087 ']' 00:36:54.525 12:13:55 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 2231087 00:36:54.525 12:13:55 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:36:54.525 12:13:55 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:54.525 12:13:55 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2231087 00:36:54.525 12:13:55 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:54.525 12:13:55 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:54.525 12:13:55 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2231087' 00:36:54.525 killing process with pid 2231087 00:36:54.525 12:13:55 nvmf_dif -- common/autotest_common.sh@969 -- # kill 2231087 00:36:54.525 12:13:55 nvmf_dif -- common/autotest_common.sh@974 -- # wait 2231087 00:36:54.525 12:13:55 nvmf_dif -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:36:54.525 12:13:55 nvmf_dif -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:56.440 Waiting for block devices as requested 00:36:56.702 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:56.702 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:56.702 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:56.962 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:56.962 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:56.962 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:57.223 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:57.223 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:57.223 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:36:57.484 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:57.484 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:57.744 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:57.744 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:57.744 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:58.004 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:58.004 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:58.004 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:58.265 12:14:00 nvmf_dif -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:36:58.265 12:14:00 nvmf_dif -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:36:58.265 12:14:00 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:36:58.265 12:14:00 nvmf_dif -- nvmf/common.sh@789 -- # iptables-save 00:36:58.265 12:14:00 nvmf_dif -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:36:58.265 12:14:00 nvmf_dif -- nvmf/common.sh@789 -- # iptables-restore 00:36:58.265 12:14:00 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:58.265 12:14:00 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:58.265 12:14:00 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:58.265 12:14:00 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:58.265 12:14:00 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:00.811 12:14:02 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:00.811 00:37:00.811 real 1m18.357s 
00:37:00.811 user 7m57.135s 00:37:00.811 sys 0m22.305s 00:37:00.811 12:14:02 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:00.811 12:14:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:00.811 ************************************ 00:37:00.811 END TEST nvmf_dif 00:37:00.811 ************************************ 00:37:00.811 12:14:03 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:00.811 12:14:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:00.811 12:14:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:00.811 12:14:03 -- common/autotest_common.sh@10 -- # set +x 00:37:00.811 ************************************ 00:37:00.811 START TEST nvmf_abort_qd_sizes 00:37:00.811 ************************************ 00:37:00.811 12:14:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:00.811 * Looking for test storage... 00:37:00.811 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:00.811 12:14:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:00.811 12:14:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:37:00.811 12:14:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:00.811 12:14:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:00.811 12:14:03 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:00.811 12:14:03 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:00.811 12:14:03 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:00.811 12:14:03 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:37:00.811 12:14:03 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:37:00.811 12:14:03 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:37:00.811 12:14:03 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:37:00.811 12:14:03 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:37:00.811 12:14:03 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:37:00.811 12:14:03 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:37:00.811 12:14:03 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:00.811 12:14:03 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:37:00.811 12:14:03 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:37:00.811 12:14:03 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:00.811 12:14:03 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:00.811 12:14:03 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:37:00.811 12:14:03 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:37:00.811 12:14:03 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:00.811 12:14:03 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:37:00.811 12:14:03 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:37:00.811 12:14:03 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:37:00.811 12:14:03 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:37:00.811 12:14:03 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:00.811 12:14:03 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:37:00.811 12:14:03 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:37:00.811 12:14:03 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:00.811 12:14:03 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:00.811 12:14:03 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:37:00.811 12:14:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:00.811 12:14:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:00.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:00.811 --rc genhtml_branch_coverage=1 00:37:00.811 --rc genhtml_function_coverage=1 00:37:00.811 --rc genhtml_legend=1 00:37:00.811 --rc geninfo_all_blocks=1 00:37:00.811 --rc geninfo_unexecuted_blocks=1 00:37:00.811 00:37:00.811 ' 00:37:00.811 12:14:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:00.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:00.811 --rc genhtml_branch_coverage=1 00:37:00.811 --rc genhtml_function_coverage=1 00:37:00.811 --rc genhtml_legend=1 00:37:00.811 --rc geninfo_all_blocks=1 00:37:00.811 --rc geninfo_unexecuted_blocks=1 00:37:00.811 00:37:00.811 ' 00:37:00.811 12:14:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:00.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:00.811 --rc genhtml_branch_coverage=1 00:37:00.811 --rc genhtml_function_coverage=1 00:37:00.812 --rc genhtml_legend=1 00:37:00.812 --rc geninfo_all_blocks=1 00:37:00.812 --rc geninfo_unexecuted_blocks=1 00:37:00.812 00:37:00.812 ' 00:37:00.812 12:14:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:00.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:00.812 --rc genhtml_branch_coverage=1 00:37:00.812 --rc genhtml_function_coverage=1 00:37:00.812 --rc genhtml_legend=1 00:37:00.812 --rc geninfo_all_blocks=1 00:37:00.812 --rc geninfo_unexecuted_blocks=1 00:37:00.812 00:37:00.812 ' 00:37:00.812 12:14:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:00.812 12:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:37:00.812 12:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:00.812 12:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:00.812 12:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:00.812 12:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:00.812 12:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:37:00.812 12:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:00.812 12:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:00.812 12:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:00.812 12:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:00.812 12:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:00.812 12:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:00.812 12:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:00.812 12:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:00.812 12:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:00.812 12:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:00.812 12:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:00.812 12:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:00.812 12:14:03 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:37:00.812 12:14:03 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:00.812 12:14:03 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:00.812 12:14:03 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:00.812 12:14:03 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:00.812 12:14:03 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:00.812 12:14:03 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:00.812 12:14:03 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:37:00.812 12:14:03 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:00.812 12:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:37:00.812 12:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:00.812 12:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:00.812 12:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:00.812 12:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:00.812 12:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:00.812 12:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:00.812 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:00.812 12:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:00.812 12:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:00.812 12:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:00.812 12:14:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:37:00.812 12:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:37:00.812 12:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:00.812 12:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # prepare_net_devs 00:37:00.812 12:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # local -g is_hw=no 00:37:00.812 12:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # remove_spdk_ns 00:37:00.812 12:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:00.812 12:14:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:00.812 12:14:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:00.812 12:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:37:00.812 12:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:37:00.812 12:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:37:00.812 12:14:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:37:08.954 Found 0000:31:00.0 (0x8086 - 0x159b) 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:37:08.954 Found 0000:31:00.1 (0x8086 - 0x159b) 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:37:08.954 Found net devices under 0000:31:00.0: cvl_0_0 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:37:08.954 Found net devices under 0000:31:00.1: cvl_0_1 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # is_hw=yes 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:08.954 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:08.955 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:08.955 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:08.955 12:14:10 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:08.955 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:08.955 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:08.955 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:08.955 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:08.955 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:08.955 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:08.955 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:08.955 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:08.955 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:08.955 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:08.955 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:08.955 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:08.955 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:08.955 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:08.955 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:08.955 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.593 ms 00:37:08.955 00:37:08.955 --- 10.0.0.2 ping statistics --- 00:37:08.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:08.955 rtt min/avg/max/mdev = 0.593/0.593/0.593/0.000 ms 00:37:08.955 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:08.955 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:08.955 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:37:08.955 00:37:08.955 --- 10.0.0.1 ping statistics --- 00:37:08.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:08.955 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:37:08.955 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:08.955 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # return 0 00:37:08.955 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:37:08.955 12:14:10 nvmf_abort_qd_sizes -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:11.501 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:11.762 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:11.762 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:11.762 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:11.762 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:11.762 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:11.762 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:11.762 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:11.762 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:11.762 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:11.762 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:11.762 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:11.762 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:11.762 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:11.762 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:11.762 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:11.762 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:37:12.334 12:14:14 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:12.334 12:14:14 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:37:12.334 12:14:14 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:37:12.334 12:14:14 nvmf_abort_qd_sizes -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:12.334 12:14:14 nvmf_abort_qd_sizes -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:37:12.334 12:14:14 nvmf_abort_qd_sizes -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:37:12.334 12:14:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:37:12.334 12:14:14 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:12.334 12:14:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:12.334 12:14:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:12.334 12:14:14 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # nvmfpid=2250962 00:37:12.334 12:14:14 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # waitforlisten 2250962 00:37:12.335 12:14:14 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:37:12.335 12:14:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 2250962 ']' 00:37:12.335 12:14:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:12.335 12:14:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:12.335 12:14:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
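A note on the topology behind the connectivity checks above: nvmf_tcp_init takes the two E810 ports found earlier, moves cvl_0_0 into a fresh cvl_0_0_ns_spdk network namespace as the target side (10.0.0.2), leaves cvl_0_1 in the root namespace as the initiator side (10.0.0.1), opens TCP port 4420 in iptables, and then launches nvmf_tgt inside that namespace. A condensed sketch of the commands taken from this trace follows (interface names and addresses are specific to this run; paths are shortened from this run's workspace):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # initiator side -> target address
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator address
    # nvmfappstart then runs the target application inside that namespace:
    ip netns exec cvl_0_0_ns_spdk ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf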
00:37:12.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:12.335 12:14:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:12.335 12:14:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:12.335 [2024-10-11 12:14:14.891832] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:37:12.335 [2024-10-11 12:14:14.891881] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:12.335 [2024-10-11 12:14:14.977958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:12.335 [2024-10-11 12:14:15.016381] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:12.335 [2024-10-11 12:14:15.016411] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:12.335 [2024-10-11 12:14:15.016419] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:12.335 [2024-10-11 12:14:15.016426] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:12.335 [2024-10-11 12:14:15.016432] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:12.335 [2024-10-11 12:14:15.018091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:12.335 [2024-10-11 12:14:15.018183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:12.335 [2024-10-11 12:14:15.018315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:12.335 [2024-10-11 12:14:15.018316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:13.274 12:14:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:13.274 12:14:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:37:13.274 12:14:15 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:13.274 12:14:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:13.274 12:14:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:13.274 12:14:15 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:13.274 12:14:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:37:13.274 12:14:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:37:13.274 12:14:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:37:13.274 12:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:37:13.274 12:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:37:13.274 12:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:37:13.274 12:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:37:13.274 12:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:37:13.274 12:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:37:13.274 12:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:37:13.274 
12:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:37:13.274 12:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:37:13.274 12:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:37:13.274 12:14:15 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:37:13.274 12:14:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:37:13.274 12:14:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:37:13.274 12:14:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:37:13.274 12:14:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:13.274 12:14:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:13.274 12:14:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:13.274 ************************************ 00:37:13.274 START TEST spdk_target_abort 00:37:13.274 ************************************ 00:37:13.274 12:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:37:13.274 12:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:37:13.274 12:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:37:13.274 12:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:13.274 12:14:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:13.534 spdk_targetn1 00:37:13.534 12:14:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:13.534 12:14:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:13.534 12:14:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:13.534 12:14:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:13.534 [2024-10-11 12:14:16.094311] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:13.534 12:14:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:13.534 12:14:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:37:13.534 12:14:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:13.534 12:14:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:13.534 12:14:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:13.534 12:14:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:37:13.534 12:14:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:13.534 12:14:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:13.534 12:14:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:13.534 12:14:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:37:13.534 12:14:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:13.534 12:14:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:13.534 [2024-10-11 12:14:16.141313] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:13.534 12:14:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:13.534 12:14:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:37:13.534 12:14:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:13.534 12:14:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:13.534 12:14:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:37:13.534 12:14:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:13.534 12:14:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:13.534 12:14:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:13.534 12:14:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:13.534 12:14:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:13.534 12:14:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:13.534 12:14:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:13.534 12:14:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:13.534 12:14:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:13.534 12:14:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:13.534 12:14:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:37:13.534 12:14:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:13.534 12:14:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:13.534 12:14:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:13.534 12:14:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:13.534 12:14:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:13.534 12:14:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:13.794 [2024-10-11 12:14:16.318581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 
cid:189 nsid:1 lba:504 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:37:13.794 [2024-10-11 12:14:16.318617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0040 p:1 m:0 dnr:0 00:37:13.794 [2024-10-11 12:14:16.358655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:1784 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:37:13.794 [2024-10-11 12:14:16.358683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00e0 p:1 m:0 dnr:0 00:37:13.794 [2024-10-11 12:14:16.366673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2008 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:37:13.794 [2024-10-11 12:14:16.366694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00fd p:1 m:0 dnr:0 00:37:13.794 [2024-10-11 12:14:16.412669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:3616 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:37:13.794 [2024-10-11 12:14:16.412699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00c5 p:0 m:0 dnr:0 00:37:13.794 [2024-10-11 12:14:16.420719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:3856 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:37:13.794 [2024-10-11 12:14:16.420745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00e4 p:0 m:0 dnr:0 00:37:17.087 Initializing NVMe Controllers 00:37:17.087 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:17.087 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:17.087 Initialization complete. Launching workers. 
00:37:17.087 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11412, failed: 5 00:37:17.087 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2147, failed to submit 9270 00:37:17.087 success 740, unsuccessful 1407, failed 0 00:37:17.087 12:14:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:17.087 12:14:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:17.087 [2024-10-11 12:14:19.593141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:168 nsid:1 lba:296 len:8 PRP1 0x200004e5a000 PRP2 0x0 00:37:17.087 [2024-10-11 12:14:19.593185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:168 cdw0:0 sqhd:002f p:1 m:0 dnr:0 00:37:17.087 [2024-10-11 12:14:19.673090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:171 nsid:1 lba:2216 len:8 PRP1 0x200004e58000 PRP2 0x0 00:37:17.087 [2024-10-11 12:14:19.673117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:171 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:37:17.087 [2024-10-11 12:14:19.689022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:184 nsid:1 lba:2536 len:8 PRP1 0x200004e44000 PRP2 0x0 00:37:17.087 [2024-10-11 12:14:19.689044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:184 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:37:17.655 [2024-10-11 12:14:20.118782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:172 nsid:1 lba:12392 len:8 PRP1 0x200004e3e000 PRP2 0x0 00:37:17.655 [2024-10-11 12:14:20.118819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:172 cdw0:0 sqhd:0015 p:1 m:0 dnr:0 00:37:17.914 [2024-10-11 12:14:20.521333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:174 nsid:1 lba:21496 len:8 PRP1 0x200004e4c000 PRP2 0x0 00:37:17.914 [2024-10-11 12:14:20.521370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:174 cdw0:0 sqhd:0087 p:1 m:0 dnr:0 00:37:20.452 Initializing NVMe Controllers 00:37:20.452 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:20.452 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:20.452 Initialization complete. Launching workers. 
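The spdk_target_abort passes whose results are interleaved here are driven purely through JSON-RPC plus the abort example; rpc_cmd in the trace wraps scripts/rpc.py against the running nvmf_tgt. Condensed from the calls above (paths shortened; the PCI address 0000:65:00.0, the NQN, and the 10.0.0.2:4420 listener are this run's values):

    # Build the target: claim the local NVMe device as a bdev, add a TCP transport,
    # a subsystem, its namespace, and a listener.
    scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420

    # Then sweep the abort example over the queue depths exercised in this test:
    for qd in 4 24 64; do
        ./spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    done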
00:37:20.452 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8616, failed: 5 00:37:20.452 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1261, failed to submit 7360 00:37:20.452 success 313, unsuccessful 948, failed 0 00:37:20.452 12:14:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:20.452 12:14:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:21.021 [2024-10-11 12:14:23.616720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:181 nsid:1 lba:89504 len:8 PRP1 0x200004b18000 PRP2 0x0 00:37:21.021 [2024-10-11 12:14:23.616753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:181 cdw0:0 sqhd:0056 p:1 m:0 dnr:0 00:37:21.590 [2024-10-11 12:14:24.278684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:141 nsid:1 lba:166064 len:8 PRP1 0x200004ad0000 PRP2 0x0 00:37:21.590 [2024-10-11 12:14:24.278708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:141 cdw0:0 sqhd:00ba p:0 m:0 dnr:0 00:37:23.495 Initializing NVMe Controllers 00:37:23.495 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:23.495 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:23.495 Initialization complete. Launching workers. 00:37:23.495 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 43632, failed: 2 00:37:23.495 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2701, failed to submit 40933 00:37:23.495 success 605, unsuccessful 2096, failed 0 00:37:23.495 12:14:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:37:23.495 12:14:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.495 12:14:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:23.495 12:14:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:23.495 12:14:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:37:23.495 12:14:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.495 12:14:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:25.401 12:14:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:25.401 12:14:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2250962 00:37:25.401 12:14:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 2250962 ']' 00:37:25.401 12:14:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 2250962 00:37:25.401 12:14:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:37:25.401 12:14:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:25.401 12:14:27 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2250962 00:37:25.402 12:14:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:25.402 12:14:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:25.402 12:14:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2250962' 00:37:25.402 killing process with pid 2250962 00:37:25.402 12:14:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 2250962 00:37:25.402 12:14:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 2250962 00:37:25.402 00:37:25.402 real 0m12.111s 00:37:25.402 user 0m49.439s 00:37:25.402 sys 0m1.913s 00:37:25.402 12:14:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:25.402 12:14:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:25.402 ************************************ 00:37:25.402 END TEST spdk_target_abort 00:37:25.402 ************************************ 00:37:25.402 12:14:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:37:25.402 12:14:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:25.402 12:14:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:25.402 12:14:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:25.402 ************************************ 00:37:25.402 START TEST kernel_target_abort 00:37:25.402 ************************************ 00:37:25.402 12:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:37:25.402 12:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:37:25.402 12:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@767 -- # local ip 00:37:25.402 12:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:25.402 12:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:25.402 12:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:25.402 12:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:25.402 12:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:25.402 12:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:25.402 12:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:25.402 12:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:25.402 12:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:25.402 12:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:37:25.402 12:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:37:25.402 12:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:37:25.402 12:14:27 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:25.402 12:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:25.402 12:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:37:25.402 12:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # local block nvme 00:37:25.402 12:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:37:25.402 12:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # modprobe nvmet 00:37:25.402 12:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:37:25.402 12:14:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:28.705 Waiting for block devices as requested 00:37:28.966 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:28.966 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:28.966 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:29.226 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:29.226 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:29.226 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:29.487 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:29.487 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:29.487 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:29.747 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:29.747 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:29.747 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:30.008 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:30.008 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:30.008 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:30.268 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:30.268 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:30.530 12:14:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:37:30.530 12:14:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:37:30.530 12:14:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:37:30.530 12:14:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:37:30.530 12:14:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:37:30.530 12:14:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:37:30.530 12:14:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:37:30.530 12:14:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:37:30.530 12:14:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:37:30.530 No valid GPT data, bailing 00:37:30.530 12:14:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:37:30.791 12:14:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:37:30.791 12:14:33 
nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:37:30.791 12:14:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:37:30.791 12:14:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:37:30.791 12:14:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:30.791 12:14:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:30.791 12:14:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:37:30.791 12:14:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:37:30.791 12:14:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1 00:37:30.791 12:14:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:37:30.791 12:14:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:37:30.791 12:14:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:37:30.791 12:14:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo tcp 00:37:30.791 12:14:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 4420 00:37:30.791 12:14:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo ipv4 00:37:30.791 12:14:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:37:30.792 12:14:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:37:30.792 00:37:30.792 Discovery Log Number of Records 2, Generation counter 2 00:37:30.792 =====Discovery Log Entry 0====== 00:37:30.792 trtype: tcp 00:37:30.792 adrfam: ipv4 00:37:30.792 subtype: current discovery subsystem 00:37:30.792 treq: not specified, sq flow control disable supported 00:37:30.792 portid: 1 00:37:30.792 trsvcid: 4420 00:37:30.792 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:37:30.792 traddr: 10.0.0.1 00:37:30.792 eflags: none 00:37:30.792 sectype: none 00:37:30.792 =====Discovery Log Entry 1====== 00:37:30.792 trtype: tcp 00:37:30.792 adrfam: ipv4 00:37:30.792 subtype: nvme subsystem 00:37:30.792 treq: not specified, sq flow control disable supported 00:37:30.792 portid: 1 00:37:30.792 trsvcid: 4420 00:37:30.792 subnqn: nqn.2016-06.io.spdk:testnqn 00:37:30.792 traddr: 10.0.0.1 00:37:30.792 eflags: none 00:37:30.792 sectype: none 00:37:30.792 12:14:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:37:30.792 12:14:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:30.792 12:14:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:30.792 12:14:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:37:30.792 12:14:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:30.792 
12:14:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:30.792 12:14:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:30.792 12:14:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:30.792 12:14:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:30.792 12:14:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:30.792 12:14:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:30.792 12:14:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:30.792 12:14:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:30.792 12:14:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:30.792 12:14:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:37:30.792 12:14:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:30.792 12:14:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:37:30.792 12:14:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:30.792 12:14:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:30.792 12:14:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:30.792 12:14:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:34.283 Initializing NVMe Controllers 00:37:34.283 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:34.283 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:34.283 Initialization complete. Launching workers. 
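For the kernel_target_abort half, configure_kernel_target builds an equivalent target out of the in-kernel nvmet/nvmet_tcp stack through configfs, backed by the host's /dev/nvme0n1 and listening on 10.0.0.1:4420. xtrace only prints the echoed values, not the files they are redirected into, so the attribute paths in the sketch below follow the standard nvmet configfs layout and are inferred rather than copied from the log:

    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    modprobe nvmet
    mkdir "$subsys" "$subsys/namespaces/1" "$port"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # attribute names inferred
    echo 1 > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp  > "$port/addr_trtype"
    echo 4420 > "$port/addr_trsvcid"
    echo ipv4 > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"
    # The nvme discover call above should then list two records: the discovery
    # subsystem plus nqn.2016-06.io.spdk:testnqn, as shown in the trace.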
00:37:34.283 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67688, failed: 0 00:37:34.283 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 67688, failed to submit 0 00:37:34.283 success 0, unsuccessful 67688, failed 0 00:37:34.283 12:14:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:34.283 12:14:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:37.587 Initializing NVMe Controllers 00:37:37.587 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:37.587 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:37.587 Initialization complete. Launching workers. 00:37:37.587 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 115039, failed: 0 00:37:37.587 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 28974, failed to submit 86065 00:37:37.587 success 0, unsuccessful 28974, failed 0 00:37:37.587 12:14:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:37.587 12:14:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:40.130 Initializing NVMe Controllers 00:37:40.130 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:40.130 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:40.130 Initialization complete. Launching workers. 
00:37:40.130 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 146519, failed: 0 00:37:40.130 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36690, failed to submit 109829 00:37:40.130 success 0, unsuccessful 36690, failed 0 00:37:40.130 12:14:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:37:40.130 12:14:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:37:40.130 12:14:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # echo 0 00:37:40.130 12:14:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:40.130 12:14:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:40.130 12:14:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:37:40.130 12:14:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:40.130 12:14:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:37:40.130 12:14:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:37:40.130 12:14:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:44.336 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:44.337 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:44.337 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:44.337 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:44.337 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:44.337 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:44.337 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:44.337 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:44.337 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:44.337 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:44.337 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:44.337 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:44.337 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:44.337 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:44.337 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:44.337 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:45.720 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:37:45.980 00:37:45.980 real 0m20.463s 00:37:45.980 user 0m9.983s 00:37:45.980 sys 0m6.118s 00:37:45.980 12:14:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:45.980 12:14:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:45.980 ************************************ 00:37:45.981 END TEST kernel_target_abort 00:37:45.981 ************************************ 00:37:45.981 12:14:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:37:45.981 12:14:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:37:45.981 12:14:48 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # nvmfcleanup 00:37:45.981 12:14:48 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:37:45.981 12:14:48 nvmf_abort_qd_sizes -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:45.981 12:14:48 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:37:45.981 12:14:48 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:45.981 12:14:48 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:45.981 rmmod nvme_tcp 00:37:45.981 rmmod nvme_fabrics 00:37:45.981 rmmod nvme_keyring 00:37:45.981 12:14:48 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:45.981 12:14:48 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:37:45.981 12:14:48 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:37:45.981 12:14:48 nvmf_abort_qd_sizes -- nvmf/common.sh@515 -- # '[' -n 2250962 ']' 00:37:45.981 12:14:48 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # killprocess 2250962 00:37:45.981 12:14:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 2250962 ']' 00:37:45.981 12:14:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 2250962 00:37:45.981 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2250962) - No such process 00:37:45.981 12:14:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 2250962 is not found' 00:37:45.981 Process with pid 2250962 is not found 00:37:45.981 12:14:48 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:37:45.981 12:14:48 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:49.284 Waiting for block devices as requested 00:37:49.284 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:49.545 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:49.545 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:49.545 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:49.806 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:49.806 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:49.806 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:50.067 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:50.068 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:50.328 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:50.328 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:50.328 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:50.588 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:50.588 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:50.588 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:50.848 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:50.848 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:51.109 12:14:53 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:37:51.109 12:14:53 nvmf_abort_qd_sizes -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:37:51.109 12:14:53 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:37:51.109 12:14:53 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-save 00:37:51.109 12:14:53 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-restore 00:37:51.109 12:14:53 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:37:51.109 12:14:53 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:51.109 12:14:53 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:51.109 12:14:53 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:51.109 12:14:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 
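Once the kernel-target passes finish, clean_kernel_target and nvmftestfini undo the setup; a condensed sketch of the teardown steps around this point in the trace (the file the `echo 0` writes to is inferred to be the namespace enable attribute, and remove_spdk_ns runs with xtrace disabled, so its namespace deletion is not shown):

    # clean_kernel_target: disable and dismantle the configfs target, then unload nvmet.
    echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    modprobe -r nvmet_tcp nvmet

    # nvmftestfini: unload the initiator modules, drop the SPDK iptables rule,
    # and remove the test namespace (followed by the address flush just below).
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore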
00:37:51.109 12:14:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:53.655 12:14:55 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:53.655 00:37:53.655 real 0m52.690s 00:37:53.655 user 1m5.007s 00:37:53.655 sys 0m19.152s 00:37:53.655 12:14:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:53.655 12:14:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:53.655 ************************************ 00:37:53.655 END TEST nvmf_abort_qd_sizes 00:37:53.655 ************************************ 00:37:53.655 12:14:55 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:53.655 12:14:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:53.655 12:14:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:53.655 12:14:55 -- common/autotest_common.sh@10 -- # set +x 00:37:53.655 ************************************ 00:37:53.655 START TEST keyring_file 00:37:53.655 ************************************ 00:37:53.655 12:14:55 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:53.655 * Looking for test storage... 00:37:53.655 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:53.655 12:14:55 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:53.655 12:14:55 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:37:53.655 12:14:55 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:53.655 12:14:56 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:53.655 12:14:56 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:53.655 12:14:56 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:53.655 12:14:56 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:53.655 12:14:56 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:37:53.655 12:14:56 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:37:53.655 12:14:56 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:37:53.655 12:14:56 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:37:53.655 12:14:56 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:37:53.655 12:14:56 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:37:53.655 12:14:56 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:37:53.656 12:14:56 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:53.656 12:14:56 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:37:53.656 12:14:56 keyring_file -- scripts/common.sh@345 -- # : 1 00:37:53.656 12:14:56 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:53.656 12:14:56 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:53.656 12:14:56 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:37:53.656 12:14:56 keyring_file -- scripts/common.sh@353 -- # local d=1 00:37:53.656 12:14:56 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:53.656 12:14:56 keyring_file -- scripts/common.sh@355 -- # echo 1 00:37:53.656 12:14:56 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:37:53.656 12:14:56 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:37:53.656 12:14:56 keyring_file -- scripts/common.sh@353 -- # local d=2 00:37:53.656 12:14:56 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:53.656 12:14:56 keyring_file -- scripts/common.sh@355 -- # echo 2 00:37:53.656 12:14:56 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:37:53.656 12:14:56 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:53.656 12:14:56 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:53.656 12:14:56 keyring_file -- scripts/common.sh@368 -- # return 0 00:37:53.656 12:14:56 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:53.656 12:14:56 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:53.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:53.656 --rc genhtml_branch_coverage=1 00:37:53.656 --rc genhtml_function_coverage=1 00:37:53.656 --rc genhtml_legend=1 00:37:53.656 --rc geninfo_all_blocks=1 00:37:53.656 --rc geninfo_unexecuted_blocks=1 00:37:53.656 00:37:53.656 ' 00:37:53.656 12:14:56 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:53.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:53.656 --rc genhtml_branch_coverage=1 00:37:53.656 --rc genhtml_function_coverage=1 00:37:53.656 --rc genhtml_legend=1 00:37:53.656 --rc geninfo_all_blocks=1 00:37:53.656 --rc geninfo_unexecuted_blocks=1 00:37:53.656 00:37:53.656 ' 00:37:53.656 12:14:56 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:53.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:53.656 --rc genhtml_branch_coverage=1 00:37:53.656 --rc genhtml_function_coverage=1 00:37:53.656 --rc genhtml_legend=1 00:37:53.656 --rc geninfo_all_blocks=1 00:37:53.656 --rc geninfo_unexecuted_blocks=1 00:37:53.656 00:37:53.656 ' 00:37:53.656 12:14:56 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:53.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:53.656 --rc genhtml_branch_coverage=1 00:37:53.656 --rc genhtml_function_coverage=1 00:37:53.656 --rc genhtml_legend=1 00:37:53.656 --rc geninfo_all_blocks=1 00:37:53.656 --rc geninfo_unexecuted_blocks=1 00:37:53.656 00:37:53.656 ' 00:37:53.656 12:14:56 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:53.656 12:14:56 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:53.656 12:14:56 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:37:53.656 12:14:56 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:53.656 12:14:56 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:53.656 12:14:56 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:53.656 12:14:56 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:53.656 12:14:56 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:53.656 
12:14:56 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:53.656 12:14:56 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:53.656 12:14:56 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:53.656 12:14:56 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:53.656 12:14:56 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:53.656 12:14:56 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:53.656 12:14:56 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:53.656 12:14:56 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:53.656 12:14:56 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:53.656 12:14:56 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:53.656 12:14:56 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:53.656 12:14:56 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:53.656 12:14:56 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:37:53.656 12:14:56 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:53.656 12:14:56 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:53.656 12:14:56 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:53.656 12:14:56 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:53.656 12:14:56 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:53.656 12:14:56 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:53.656 12:14:56 keyring_file -- paths/export.sh@5 -- # export PATH 00:37:53.656 12:14:56 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:53.656 12:14:56 keyring_file -- nvmf/common.sh@51 -- # : 0 
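While keyring_file sources nvmf/common.sh, it generates the NVMe host identity used by the rest of the test. Only the resulting values appear in the trace, so the hostid derivation shown here is an illustrative reading rather than a copy of the script:

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 in this run
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # keep the trailing UUID, e.g. 00539ede-7deb-ec11-9bc7-a4bf01928396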
00:37:53.656 12:14:56 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:53.656 12:14:56 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:53.656 12:14:56 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:53.656 12:14:56 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:53.656 12:14:56 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:53.656 12:14:56 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:53.656 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:53.656 12:14:56 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:53.656 12:14:56 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:53.656 12:14:56 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:53.656 12:14:56 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:53.656 12:14:56 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:53.656 12:14:56 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:53.656 12:14:56 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:37:53.656 12:14:56 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:37:53.656 12:14:56 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:37:53.656 12:14:56 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:53.656 12:14:56 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:53.656 12:14:56 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:53.656 12:14:56 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:53.656 12:14:56 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:53.656 12:14:56 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:53.656 12:14:56 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.hyz3ktoC0K 00:37:53.656 12:14:56 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:53.656 12:14:56 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:53.656 12:14:56 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:37:53.656 12:14:56 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:37:53.656 12:14:56 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:37:53.656 12:14:56 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:37:53.656 12:14:56 keyring_file -- nvmf/common.sh@731 -- # python - 00:37:53.656 12:14:56 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.hyz3ktoC0K 00:37:53.656 12:14:56 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.hyz3ktoC0K 00:37:53.656 12:14:56 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.hyz3ktoC0K 00:37:53.656 12:14:56 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:37:53.656 12:14:56 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:53.656 12:14:56 keyring_file -- keyring/common.sh@17 -- # name=key1 00:37:53.656 12:14:56 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:53.656 12:14:56 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:53.656 12:14:56 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:53.656 12:14:56 keyring_file -- keyring/common.sh@18 -- 
# path=/tmp/tmp.KMN3Y5G3QZ 00:37:53.656 12:14:56 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:53.656 12:14:56 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:53.656 12:14:56 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:37:53.656 12:14:56 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:37:53.656 12:14:56 keyring_file -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:37:53.656 12:14:56 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:37:53.656 12:14:56 keyring_file -- nvmf/common.sh@731 -- # python - 00:37:53.656 12:14:56 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.KMN3Y5G3QZ 00:37:53.656 12:14:56 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.KMN3Y5G3QZ 00:37:53.656 12:14:56 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.KMN3Y5G3QZ 00:37:53.656 12:14:56 keyring_file -- keyring/file.sh@30 -- # tgtpid=2261490 00:37:53.656 12:14:56 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2261490 00:37:53.656 12:14:56 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:53.656 12:14:56 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 2261490 ']' 00:37:53.656 12:14:56 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:53.656 12:14:56 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:53.656 12:14:56 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:53.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:53.657 12:14:56 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:53.657 12:14:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:53.657 [2024-10-11 12:14:56.247146] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
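
At this point in the run both PSK files have been written out by prep_key and spdk_tgt is starting up on /var/tmp/spdk.sock. Condensed from the trace above, the key-preparation step is: format the hex key into the NVMe TLS PSK interchange form (NVMeTLSkey-1:<hash id>:<base64>:), drop it into a mode-0600 temp file, and register that file by name over JSON-RPC. A stand-alone sketch follows; the CRC-32 byte order in the Python helper is an assumption on my part (format_key in test/nvmf/common.sh and NVMe TP 8006 are authoritative), while the key value, key name and RPC verb are the ones from the run.

# Sketch of prep_key / format_interchange_psk as seen in the trace above.
key_hex=00112233445566778899aabbccddeeff   # same test key used for key0
path=$(mktemp)                             # e.g. /tmp/tmp.hyz3ktoC0K in this run

# Interchange form: NVMeTLSkey-1:<hash id>:<base64(PSK || CRC-32)>:
# hash id 00 = no retained-key hash (digest 0 in the test);
# CRC-32 appended little-endian here -- assumption, verify against TP 8006.
python3 - "$key_hex" <<'EOF' > "$path"
import base64, sys, zlib
psk = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(psk).to_bytes(4, "little")
print("NVMeTLSkey-1:00:" + base64.b64encode(psk + crc).decode() + ":")
EOF

chmod 0600 "$path"   # keyring_file refuses group/other-accessible files (exercised further down)
scripts/rpc.py keyring_file_add_key key0 "$path"   # the test later issues this against -s /var/tmp/bperf.sock
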
00:37:53.657 [2024-10-11 12:14:56.247212] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2261490 ] 00:37:53.657 [2024-10-11 12:14:56.329772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:53.918 [2024-10-11 12:14:56.383701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:54.490 12:14:57 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:54.490 12:14:57 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:37:54.490 12:14:57 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:37:54.490 12:14:57 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:54.490 12:14:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:54.490 [2024-10-11 12:14:57.042161] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:54.490 null0 00:37:54.490 [2024-10-11 12:14:57.074203] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:54.490 [2024-10-11 12:14:57.074741] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:54.490 12:14:57 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:54.490 12:14:57 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:54.490 12:14:57 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:37:54.490 12:14:57 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:54.490 12:14:57 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:37:54.490 12:14:57 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:54.490 12:14:57 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:37:54.490 12:14:57 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:54.490 12:14:57 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:54.490 12:14:57 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:54.490 12:14:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:54.491 [2024-10-11 12:14:57.106266] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:37:54.491 request: 00:37:54.491 { 00:37:54.491 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:37:54.491 "secure_channel": false, 00:37:54.491 "listen_address": { 00:37:54.491 "trtype": "tcp", 00:37:54.491 "traddr": "127.0.0.1", 00:37:54.491 "trsvcid": "4420" 00:37:54.491 }, 00:37:54.491 "method": "nvmf_subsystem_add_listener", 00:37:54.491 "req_id": 1 00:37:54.491 } 00:37:54.491 Got JSON-RPC error response 00:37:54.491 response: 00:37:54.491 { 00:37:54.491 "code": -32602, 00:37:54.491 "message": "Invalid parameters" 00:37:54.491 } 00:37:54.491 12:14:57 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:37:54.491 12:14:57 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:37:54.491 12:14:57 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:54.491 12:14:57 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:54.491 12:14:57 keyring_file -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:54.491 12:14:57 keyring_file -- keyring/file.sh@47 -- # bperfpid=2261529 00:37:54.491 12:14:57 keyring_file -- keyring/file.sh@49 -- # waitforlisten 2261529 /var/tmp/bperf.sock 00:37:54.491 12:14:57 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:37:54.491 12:14:57 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 2261529 ']' 00:37:54.491 12:14:57 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:54.491 12:14:57 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:54.491 12:14:57 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:54.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:54.491 12:14:57 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:54.491 12:14:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:54.491 [2024-10-11 12:14:57.165535] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:37:54.491 [2024-10-11 12:14:57.165592] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2261529 ] 00:37:54.752 [2024-10-11 12:14:57.245753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:54.752 [2024-10-11 12:14:57.292035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:55.325 12:14:57 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:55.325 12:14:57 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:37:55.325 12:14:57 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.hyz3ktoC0K 00:37:55.325 12:14:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.hyz3ktoC0K 00:37:55.585 12:14:58 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.KMN3Y5G3QZ 00:37:55.585 12:14:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.KMN3Y5G3QZ 00:37:55.846 12:14:58 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:37:55.846 12:14:58 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:37:55.846 12:14:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:55.846 12:14:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:55.846 12:14:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:55.846 12:14:58 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.hyz3ktoC0K == \/\t\m\p\/\t\m\p\.\h\y\z\3\k\t\o\C\0\K ]] 00:37:55.846 12:14:58 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:37:55.846 12:14:58 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:37:55.846 12:14:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:55.846 12:14:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name 
== "key1")' 00:37:55.846 12:14:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:56.106 12:14:58 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.KMN3Y5G3QZ == \/\t\m\p\/\t\m\p\.\K\M\N\3\Y\5\G\3\Q\Z ]] 00:37:56.106 12:14:58 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:37:56.106 12:14:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:56.106 12:14:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:56.107 12:14:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:56.107 12:14:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:56.107 12:14:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:56.367 12:14:58 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:37:56.367 12:14:58 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:37:56.367 12:14:58 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:56.367 12:14:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:56.367 12:14:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:56.367 12:14:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:56.367 12:14:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:56.628 12:14:59 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:37:56.628 12:14:59 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:56.628 12:14:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:56.628 [2024-10-11 12:14:59.240705] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:56.628 nvme0n1 00:37:56.889 12:14:59 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:37:56.889 12:14:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:56.889 12:14:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:56.889 12:14:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:56.889 12:14:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:56.889 12:14:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:56.889 12:14:59 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:37:56.889 12:14:59 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:37:56.889 12:14:59 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:56.889 12:14:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:56.889 12:14:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:56.889 12:14:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:56.889 12:14:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | 
select(.name == "key1")' 00:37:57.149 12:14:59 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:37:57.149 12:14:59 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:57.149 Running I/O for 1 seconds... 00:37:58.533 20656.00 IOPS, 80.69 MiB/s 00:37:58.533 Latency(us) 00:37:58.533 [2024-10-11T10:15:01.236Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:58.533 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:37:58.533 nvme0n1 : 1.00 20708.54 80.89 0.00 0.00 6171.34 3659.09 18240.85 00:37:58.533 [2024-10-11T10:15:01.236Z] =================================================================================================================== 00:37:58.533 [2024-10-11T10:15:01.236Z] Total : 20708.54 80.89 0.00 0.00 6171.34 3659.09 18240.85 00:37:58.533 { 00:37:58.533 "results": [ 00:37:58.533 { 00:37:58.533 "job": "nvme0n1", 00:37:58.533 "core_mask": "0x2", 00:37:58.533 "workload": "randrw", 00:37:58.533 "percentage": 50, 00:37:58.533 "status": "finished", 00:37:58.533 "queue_depth": 128, 00:37:58.533 "io_size": 4096, 00:37:58.533 "runtime": 1.003644, 00:37:58.533 "iops": 20708.538087210207, 00:37:58.533 "mibps": 80.89272690316487, 00:37:58.533 "io_failed": 0, 00:37:58.533 "io_timeout": 0, 00:37:58.533 "avg_latency_us": 6171.343453938927, 00:37:58.533 "min_latency_us": 3659.0933333333332, 00:37:58.533 "max_latency_us": 18240.853333333333 00:37:58.533 } 00:37:58.533 ], 00:37:58.533 "core_count": 1 00:37:58.533 } 00:37:58.533 12:15:00 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:58.533 12:15:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:58.533 12:15:01 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:37:58.533 12:15:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:58.533 12:15:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:58.533 12:15:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:58.533 12:15:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:58.533 12:15:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:58.795 12:15:01 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:37:58.795 12:15:01 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:37:58.795 12:15:01 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:58.795 12:15:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:58.795 12:15:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:58.795 12:15:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:58.795 12:15:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:58.795 12:15:01 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:37:58.795 12:15:01 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:58.795 12:15:01 keyring_file -- common/autotest_common.sh@650 -- # local es=0 
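
The get_refcnt checks running through this part of the trace are thin wrappers around keyring_get_keys plus a jq filter (keyring/common.sh@10-12 above). For reference, the query behind a check like (( 1 == 1 )) is roughly the following, with /var/tmp/bperf.sock being the bdevperf RPC socket from this run:

# How many users does key0 have right now, according to bdevperf?
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
  | jq -r '.[] | select(.name == "key0") | .refcnt'
# In this run: 2 while nvme0n1 was attached with --psk key0, back to 1 after
# bdev_nvme_detach_controller; key1 stays at 1 because nothing references it.
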
00:37:58.795 12:15:01 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:58.795 12:15:01 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:37:58.795 12:15:01 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:58.795 12:15:01 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:37:58.795 12:15:01 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:58.795 12:15:01 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:58.795 12:15:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:59.056 [2024-10-11 12:15:01.576711] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:59.056 [2024-10-11 12:15:01.577379] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa68d40 (107): Transport endpoint is not connected 00:37:59.056 [2024-10-11 12:15:01.578376] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa68d40 (9): Bad file descriptor 00:37:59.056 [2024-10-11 12:15:01.579377] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:59.056 [2024-10-11 12:15:01.579390] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:59.056 [2024-10-11 12:15:01.579396] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:37:59.056 [2024-10-11 12:15:01.579403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:37:59.056 request: 00:37:59.056 { 00:37:59.056 "name": "nvme0", 00:37:59.056 "trtype": "tcp", 00:37:59.056 "traddr": "127.0.0.1", 00:37:59.056 "adrfam": "ipv4", 00:37:59.056 "trsvcid": "4420", 00:37:59.056 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:59.056 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:59.056 "prchk_reftag": false, 00:37:59.056 "prchk_guard": false, 00:37:59.056 "hdgst": false, 00:37:59.056 "ddgst": false, 00:37:59.056 "psk": "key1", 00:37:59.056 "allow_unrecognized_csi": false, 00:37:59.056 "method": "bdev_nvme_attach_controller", 00:37:59.056 "req_id": 1 00:37:59.056 } 00:37:59.056 Got JSON-RPC error response 00:37:59.056 response: 00:37:59.056 { 00:37:59.056 "code": -5, 00:37:59.056 "message": "Input/output error" 00:37:59.056 } 00:37:59.056 12:15:01 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:37:59.056 12:15:01 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:59.056 12:15:01 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:59.056 12:15:01 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:59.056 12:15:01 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:37:59.056 12:15:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:59.056 12:15:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:59.056 12:15:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:59.056 12:15:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:59.056 12:15:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:59.316 12:15:01 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:37:59.316 12:15:01 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:37:59.316 12:15:01 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:59.316 12:15:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:59.316 12:15:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:59.316 12:15:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:59.317 12:15:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:59.317 12:15:01 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:37:59.317 12:15:01 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:37:59.317 12:15:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:59.578 12:15:02 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:37:59.578 12:15:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:37:59.839 12:15:02 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:37:59.839 12:15:02 keyring_file -- keyring/file.sh@78 -- # jq length 00:37:59.839 12:15:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:59.839 12:15:02 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:37:59.839 12:15:02 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.hyz3ktoC0K 00:37:59.839 12:15:02 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.hyz3ktoC0K 00:37:59.839 12:15:02 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:37:59.839 12:15:02 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.hyz3ktoC0K 00:37:59.839 12:15:02 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:37:59.839 12:15:02 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:59.839 12:15:02 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:37:59.839 12:15:02 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:59.839 12:15:02 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.hyz3ktoC0K 00:37:59.839 12:15:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.hyz3ktoC0K 00:38:00.100 [2024-10-11 12:15:02.650174] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.hyz3ktoC0K': 0100660 00:38:00.100 [2024-10-11 12:15:02.650192] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:38:00.100 request: 00:38:00.100 { 00:38:00.100 "name": "key0", 00:38:00.100 "path": "/tmp/tmp.hyz3ktoC0K", 00:38:00.100 "method": "keyring_file_add_key", 00:38:00.100 "req_id": 1 00:38:00.100 } 00:38:00.100 Got JSON-RPC error response 00:38:00.100 response: 00:38:00.100 { 00:38:00.100 "code": -1, 00:38:00.100 "message": "Operation not permitted" 00:38:00.100 } 00:38:00.100 12:15:02 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:38:00.100 12:15:02 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:00.100 12:15:02 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:00.100 12:15:02 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:00.100 12:15:02 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.hyz3ktoC0K 00:38:00.100 12:15:02 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.hyz3ktoC0K 00:38:00.100 12:15:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.hyz3ktoC0K 00:38:00.361 12:15:02 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.hyz3ktoC0K 00:38:00.361 12:15:02 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:38:00.361 12:15:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:00.361 12:15:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:00.361 12:15:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:00.361 12:15:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:00.361 12:15:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:00.361 12:15:03 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:38:00.361 12:15:03 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:00.361 12:15:03 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:38:00.361 12:15:03 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:00.361 12:15:03 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:38:00.361 12:15:03 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:00.361 12:15:03 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:38:00.361 12:15:03 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:00.361 12:15:03 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:00.361 12:15:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:00.623 [2024-10-11 12:15:03.175513] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.hyz3ktoC0K': No such file or directory 00:38:00.623 [2024-10-11 12:15:03.175525] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:38:00.623 [2024-10-11 12:15:03.175538] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:38:00.623 [2024-10-11 12:15:03.175544] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:38:00.623 [2024-10-11 12:15:03.175550] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:38:00.623 [2024-10-11 12:15:03.175555] bdev_nvme.c:6438:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:38:00.623 request: 00:38:00.623 { 00:38:00.623 "name": "nvme0", 00:38:00.623 "trtype": "tcp", 00:38:00.623 "traddr": "127.0.0.1", 00:38:00.623 "adrfam": "ipv4", 00:38:00.623 "trsvcid": "4420", 00:38:00.623 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:00.623 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:00.623 "prchk_reftag": false, 00:38:00.623 "prchk_guard": false, 00:38:00.623 "hdgst": false, 00:38:00.623 "ddgst": false, 00:38:00.623 "psk": "key0", 00:38:00.623 "allow_unrecognized_csi": false, 00:38:00.623 "method": "bdev_nvme_attach_controller", 00:38:00.623 "req_id": 1 00:38:00.623 } 00:38:00.623 Got JSON-RPC error response 00:38:00.623 response: 00:38:00.623 { 00:38:00.623 "code": -19, 00:38:00.623 "message": "No such device" 00:38:00.623 } 00:38:00.623 12:15:03 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:38:00.623 12:15:03 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:00.623 12:15:03 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:00.623 12:15:03 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:00.623 12:15:03 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:38:00.623 12:15:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:00.883 12:15:03 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:00.883 12:15:03 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:38:00.883 12:15:03 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:00.883 12:15:03 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:00.883 12:15:03 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:00.883 12:15:03 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:00.883 12:15:03 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.FnEmzb4Dfi 00:38:00.883 12:15:03 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:00.883 12:15:03 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:00.883 12:15:03 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:38:00.883 12:15:03 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:38:00.883 12:15:03 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:38:00.883 12:15:03 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:38:00.883 12:15:03 keyring_file -- nvmf/common.sh@731 -- # python - 00:38:00.883 12:15:03 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.FnEmzb4Dfi 00:38:00.883 12:15:03 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.FnEmzb4Dfi 00:38:00.883 12:15:03 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.FnEmzb4Dfi 00:38:00.883 12:15:03 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FnEmzb4Dfi 00:38:00.883 12:15:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FnEmzb4Dfi 00:38:00.883 12:15:03 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:00.883 12:15:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:01.144 nvme0n1 00:38:01.144 12:15:03 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:38:01.144 12:15:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:01.144 12:15:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:01.145 12:15:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:01.145 12:15:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:01.145 12:15:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:01.406 12:15:03 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:38:01.406 12:15:03 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:38:01.406 12:15:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:01.666 12:15:04 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:38:01.666 12:15:04 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:38:01.666 12:15:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:01.666 12:15:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:38:01.666 12:15:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:01.927 12:15:04 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:38:01.927 12:15:04 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:38:01.927 12:15:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:01.927 12:15:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:01.927 12:15:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:01.927 12:15:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:01.927 12:15:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:01.927 12:15:04 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:38:01.927 12:15:04 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:01.927 12:15:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:02.188 12:15:04 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:38:02.188 12:15:04 keyring_file -- keyring/file.sh@105 -- # jq length 00:38:02.188 12:15:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:02.449 12:15:04 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:38:02.449 12:15:04 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FnEmzb4Dfi 00:38:02.449 12:15:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FnEmzb4Dfi 00:38:02.449 12:15:05 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.KMN3Y5G3QZ 00:38:02.449 12:15:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.KMN3Y5G3QZ 00:38:02.710 12:15:05 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:02.710 12:15:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:02.970 nvme0n1 00:38:02.970 12:15:05 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:38:02.970 12:15:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:38:03.232 12:15:05 keyring_file -- keyring/file.sh@113 -- # config='{ 00:38:03.232 "subsystems": [ 00:38:03.232 { 00:38:03.232 "subsystem": "keyring", 00:38:03.232 "config": [ 00:38:03.232 { 00:38:03.232 "method": "keyring_file_add_key", 00:38:03.232 "params": { 00:38:03.232 "name": "key0", 00:38:03.232 "path": "/tmp/tmp.FnEmzb4Dfi" 00:38:03.232 } 00:38:03.232 }, 00:38:03.232 { 00:38:03.232 "method": "keyring_file_add_key", 00:38:03.232 "params": { 00:38:03.232 "name": "key1", 00:38:03.232 "path": "/tmp/tmp.KMN3Y5G3QZ" 00:38:03.232 } 00:38:03.232 } 00:38:03.232 ] 
00:38:03.232 }, 00:38:03.232 { 00:38:03.232 "subsystem": "iobuf", 00:38:03.232 "config": [ 00:38:03.232 { 00:38:03.232 "method": "iobuf_set_options", 00:38:03.232 "params": { 00:38:03.232 "small_pool_count": 8192, 00:38:03.232 "large_pool_count": 1024, 00:38:03.232 "small_bufsize": 8192, 00:38:03.232 "large_bufsize": 135168 00:38:03.232 } 00:38:03.232 } 00:38:03.232 ] 00:38:03.232 }, 00:38:03.232 { 00:38:03.232 "subsystem": "sock", 00:38:03.232 "config": [ 00:38:03.232 { 00:38:03.232 "method": "sock_set_default_impl", 00:38:03.232 "params": { 00:38:03.232 "impl_name": "posix" 00:38:03.232 } 00:38:03.232 }, 00:38:03.232 { 00:38:03.233 "method": "sock_impl_set_options", 00:38:03.233 "params": { 00:38:03.233 "impl_name": "ssl", 00:38:03.233 "recv_buf_size": 4096, 00:38:03.233 "send_buf_size": 4096, 00:38:03.233 "enable_recv_pipe": true, 00:38:03.233 "enable_quickack": false, 00:38:03.233 "enable_placement_id": 0, 00:38:03.233 "enable_zerocopy_send_server": true, 00:38:03.233 "enable_zerocopy_send_client": false, 00:38:03.233 "zerocopy_threshold": 0, 00:38:03.233 "tls_version": 0, 00:38:03.233 "enable_ktls": false 00:38:03.233 } 00:38:03.233 }, 00:38:03.233 { 00:38:03.233 "method": "sock_impl_set_options", 00:38:03.233 "params": { 00:38:03.233 "impl_name": "posix", 00:38:03.233 "recv_buf_size": 2097152, 00:38:03.233 "send_buf_size": 2097152, 00:38:03.233 "enable_recv_pipe": true, 00:38:03.233 "enable_quickack": false, 00:38:03.233 "enable_placement_id": 0, 00:38:03.233 "enable_zerocopy_send_server": true, 00:38:03.233 "enable_zerocopy_send_client": false, 00:38:03.233 "zerocopy_threshold": 0, 00:38:03.233 "tls_version": 0, 00:38:03.233 "enable_ktls": false 00:38:03.233 } 00:38:03.233 } 00:38:03.233 ] 00:38:03.233 }, 00:38:03.233 { 00:38:03.233 "subsystem": "vmd", 00:38:03.233 "config": [] 00:38:03.233 }, 00:38:03.233 { 00:38:03.233 "subsystem": "accel", 00:38:03.233 "config": [ 00:38:03.233 { 00:38:03.233 "method": "accel_set_options", 00:38:03.233 "params": { 00:38:03.233 "small_cache_size": 128, 00:38:03.233 "large_cache_size": 16, 00:38:03.233 "task_count": 2048, 00:38:03.233 "sequence_count": 2048, 00:38:03.233 "buf_count": 2048 00:38:03.233 } 00:38:03.233 } 00:38:03.233 ] 00:38:03.233 }, 00:38:03.233 { 00:38:03.233 "subsystem": "bdev", 00:38:03.233 "config": [ 00:38:03.233 { 00:38:03.233 "method": "bdev_set_options", 00:38:03.233 "params": { 00:38:03.233 "bdev_io_pool_size": 65535, 00:38:03.233 "bdev_io_cache_size": 256, 00:38:03.233 "bdev_auto_examine": true, 00:38:03.233 "iobuf_small_cache_size": 128, 00:38:03.233 "iobuf_large_cache_size": 16 00:38:03.233 } 00:38:03.233 }, 00:38:03.233 { 00:38:03.233 "method": "bdev_raid_set_options", 00:38:03.233 "params": { 00:38:03.233 "process_window_size_kb": 1024, 00:38:03.233 "process_max_bandwidth_mb_sec": 0 00:38:03.233 } 00:38:03.233 }, 00:38:03.233 { 00:38:03.233 "method": "bdev_iscsi_set_options", 00:38:03.233 "params": { 00:38:03.233 "timeout_sec": 30 00:38:03.233 } 00:38:03.233 }, 00:38:03.233 { 00:38:03.233 "method": "bdev_nvme_set_options", 00:38:03.233 "params": { 00:38:03.233 "action_on_timeout": "none", 00:38:03.233 "timeout_us": 0, 00:38:03.233 "timeout_admin_us": 0, 00:38:03.233 "keep_alive_timeout_ms": 10000, 00:38:03.233 "arbitration_burst": 0, 00:38:03.233 "low_priority_weight": 0, 00:38:03.233 "medium_priority_weight": 0, 00:38:03.233 "high_priority_weight": 0, 00:38:03.233 "nvme_adminq_poll_period_us": 10000, 00:38:03.233 "nvme_ioq_poll_period_us": 0, 00:38:03.233 "io_queue_requests": 512, 00:38:03.233 "delay_cmd_submit": true, 
00:38:03.233 "transport_retry_count": 4, 00:38:03.233 "bdev_retry_count": 3, 00:38:03.233 "transport_ack_timeout": 0, 00:38:03.233 "ctrlr_loss_timeout_sec": 0, 00:38:03.233 "reconnect_delay_sec": 0, 00:38:03.233 "fast_io_fail_timeout_sec": 0, 00:38:03.233 "disable_auto_failback": false, 00:38:03.233 "generate_uuids": false, 00:38:03.233 "transport_tos": 0, 00:38:03.233 "nvme_error_stat": false, 00:38:03.233 "rdma_srq_size": 0, 00:38:03.233 "io_path_stat": false, 00:38:03.233 "allow_accel_sequence": false, 00:38:03.233 "rdma_max_cq_size": 0, 00:38:03.233 "rdma_cm_event_timeout_ms": 0, 00:38:03.233 "dhchap_digests": [ 00:38:03.233 "sha256", 00:38:03.233 "sha384", 00:38:03.233 "sha512" 00:38:03.233 ], 00:38:03.233 "dhchap_dhgroups": [ 00:38:03.233 "null", 00:38:03.233 "ffdhe2048", 00:38:03.233 "ffdhe3072", 00:38:03.233 "ffdhe4096", 00:38:03.233 "ffdhe6144", 00:38:03.233 "ffdhe8192" 00:38:03.233 ] 00:38:03.233 } 00:38:03.233 }, 00:38:03.233 { 00:38:03.233 "method": "bdev_nvme_attach_controller", 00:38:03.233 "params": { 00:38:03.233 "name": "nvme0", 00:38:03.233 "trtype": "TCP", 00:38:03.233 "adrfam": "IPv4", 00:38:03.233 "traddr": "127.0.0.1", 00:38:03.233 "trsvcid": "4420", 00:38:03.233 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:03.233 "prchk_reftag": false, 00:38:03.233 "prchk_guard": false, 00:38:03.233 "ctrlr_loss_timeout_sec": 0, 00:38:03.233 "reconnect_delay_sec": 0, 00:38:03.233 "fast_io_fail_timeout_sec": 0, 00:38:03.233 "psk": "key0", 00:38:03.233 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:03.233 "hdgst": false, 00:38:03.233 "ddgst": false, 00:38:03.233 "multipath": "multipath" 00:38:03.233 } 00:38:03.233 }, 00:38:03.233 { 00:38:03.233 "method": "bdev_nvme_set_hotplug", 00:38:03.233 "params": { 00:38:03.233 "period_us": 100000, 00:38:03.233 "enable": false 00:38:03.233 } 00:38:03.233 }, 00:38:03.233 { 00:38:03.233 "method": "bdev_wait_for_examine" 00:38:03.233 } 00:38:03.233 ] 00:38:03.233 }, 00:38:03.233 { 00:38:03.233 "subsystem": "nbd", 00:38:03.233 "config": [] 00:38:03.233 } 00:38:03.233 ] 00:38:03.233 }' 00:38:03.233 12:15:05 keyring_file -- keyring/file.sh@115 -- # killprocess 2261529 00:38:03.233 12:15:05 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 2261529 ']' 00:38:03.233 12:15:05 keyring_file -- common/autotest_common.sh@954 -- # kill -0 2261529 00:38:03.233 12:15:05 keyring_file -- common/autotest_common.sh@955 -- # uname 00:38:03.233 12:15:05 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:03.233 12:15:05 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2261529 00:38:03.233 12:15:05 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:03.233 12:15:05 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:03.233 12:15:05 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2261529' 00:38:03.233 killing process with pid 2261529 00:38:03.233 12:15:05 keyring_file -- common/autotest_common.sh@969 -- # kill 2261529 00:38:03.233 Received shutdown signal, test time was about 1.000000 seconds 00:38:03.233 00:38:03.233 Latency(us) 00:38:03.233 [2024-10-11T10:15:05.936Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:03.233 [2024-10-11T10:15:05.936Z] =================================================================================================================== 00:38:03.233 [2024-10-11T10:15:05.936Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:03.233 12:15:05 keyring_file -- 
common/autotest_common.sh@974 -- # wait 2261529 00:38:03.233 12:15:05 keyring_file -- keyring/file.sh@118 -- # bperfpid=2263446 00:38:03.233 12:15:05 keyring_file -- keyring/file.sh@120 -- # waitforlisten 2263446 /var/tmp/bperf.sock 00:38:03.233 12:15:05 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 2263446 ']' 00:38:03.233 12:15:05 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:03.233 12:15:05 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:03.233 12:15:05 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:38:03.233 12:15:05 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:03.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:03.233 12:15:05 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:03.233 12:15:05 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:38:03.233 "subsystems": [ 00:38:03.233 { 00:38:03.233 "subsystem": "keyring", 00:38:03.233 "config": [ 00:38:03.233 { 00:38:03.233 "method": "keyring_file_add_key", 00:38:03.233 "params": { 00:38:03.233 "name": "key0", 00:38:03.233 "path": "/tmp/tmp.FnEmzb4Dfi" 00:38:03.233 } 00:38:03.233 }, 00:38:03.233 { 00:38:03.233 "method": "keyring_file_add_key", 00:38:03.233 "params": { 00:38:03.233 "name": "key1", 00:38:03.233 "path": "/tmp/tmp.KMN3Y5G3QZ" 00:38:03.233 } 00:38:03.233 } 00:38:03.233 ] 00:38:03.233 }, 00:38:03.233 { 00:38:03.233 "subsystem": "iobuf", 00:38:03.233 "config": [ 00:38:03.233 { 00:38:03.233 "method": "iobuf_set_options", 00:38:03.233 "params": { 00:38:03.233 "small_pool_count": 8192, 00:38:03.233 "large_pool_count": 1024, 00:38:03.233 "small_bufsize": 8192, 00:38:03.233 "large_bufsize": 135168 00:38:03.233 } 00:38:03.233 } 00:38:03.233 ] 00:38:03.233 }, 00:38:03.233 { 00:38:03.233 "subsystem": "sock", 00:38:03.233 "config": [ 00:38:03.233 { 00:38:03.233 "method": "sock_set_default_impl", 00:38:03.233 "params": { 00:38:03.233 "impl_name": "posix" 00:38:03.233 } 00:38:03.233 }, 00:38:03.233 { 00:38:03.234 "method": "sock_impl_set_options", 00:38:03.234 "params": { 00:38:03.234 "impl_name": "ssl", 00:38:03.234 "recv_buf_size": 4096, 00:38:03.234 "send_buf_size": 4096, 00:38:03.234 "enable_recv_pipe": true, 00:38:03.234 "enable_quickack": false, 00:38:03.234 "enable_placement_id": 0, 00:38:03.234 "enable_zerocopy_send_server": true, 00:38:03.234 "enable_zerocopy_send_client": false, 00:38:03.234 "zerocopy_threshold": 0, 00:38:03.234 "tls_version": 0, 00:38:03.234 "enable_ktls": false 00:38:03.234 } 00:38:03.234 }, 00:38:03.234 { 00:38:03.234 "method": "sock_impl_set_options", 00:38:03.234 "params": { 00:38:03.234 "impl_name": "posix", 00:38:03.234 "recv_buf_size": 2097152, 00:38:03.234 "send_buf_size": 2097152, 00:38:03.234 "enable_recv_pipe": true, 00:38:03.234 "enable_quickack": false, 00:38:03.234 "enable_placement_id": 0, 00:38:03.234 "enable_zerocopy_send_server": true, 00:38:03.234 "enable_zerocopy_send_client": false, 00:38:03.234 "zerocopy_threshold": 0, 00:38:03.234 "tls_version": 0, 00:38:03.234 "enable_ktls": false 00:38:03.234 } 00:38:03.234 } 00:38:03.234 ] 00:38:03.234 }, 00:38:03.234 { 00:38:03.234 "subsystem": "vmd", 00:38:03.234 "config": [] 00:38:03.234 }, 00:38:03.234 { 00:38:03.234 "subsystem": "accel", 
00:38:03.234 "config": [ 00:38:03.234 { 00:38:03.234 "method": "accel_set_options", 00:38:03.234 "params": { 00:38:03.234 "small_cache_size": 128, 00:38:03.234 "large_cache_size": 16, 00:38:03.234 "task_count": 2048, 00:38:03.234 "sequence_count": 2048, 00:38:03.234 "buf_count": 2048 00:38:03.234 } 00:38:03.234 } 00:38:03.234 ] 00:38:03.234 }, 00:38:03.234 { 00:38:03.234 "subsystem": "bdev", 00:38:03.234 "config": [ 00:38:03.234 { 00:38:03.234 "method": "bdev_set_options", 00:38:03.234 "params": { 00:38:03.234 "bdev_io_pool_size": 65535, 00:38:03.234 "bdev_io_cache_size": 256, 00:38:03.234 "bdev_auto_examine": true, 00:38:03.234 "iobuf_small_cache_size": 128, 00:38:03.234 "iobuf_large_cache_size": 16 00:38:03.234 } 00:38:03.234 }, 00:38:03.234 { 00:38:03.234 "method": "bdev_raid_set_options", 00:38:03.234 "params": { 00:38:03.234 "process_window_size_kb": 1024, 00:38:03.234 "process_max_bandwidth_mb_sec": 0 00:38:03.234 } 00:38:03.234 }, 00:38:03.234 { 00:38:03.234 "method": "bdev_iscsi_set_options", 00:38:03.234 "params": { 00:38:03.234 "timeout_sec": 30 00:38:03.234 } 00:38:03.234 }, 00:38:03.234 { 00:38:03.234 "method": "bdev_nvme_set_options", 00:38:03.234 "params": { 00:38:03.234 "action_on_timeout": "none", 00:38:03.234 "timeout_us": 0, 00:38:03.234 "timeout_admin_us": 0, 00:38:03.234 "keep_alive_timeout_ms": 10000, 00:38:03.234 "arbitration_burst": 0, 00:38:03.234 "low_priority_weight": 0, 00:38:03.234 "medium_priority_weight": 0, 00:38:03.234 "high_priority_weight": 0, 00:38:03.234 "nvme_adminq_poll_period_us": 10000, 00:38:03.234 "nvme_ioq_poll_period_us": 0, 00:38:03.234 "io_queue_requests": 512, 00:38:03.234 "delay_cmd_submit": true, 00:38:03.234 "transport_retry_count": 4, 00:38:03.234 "bdev_retry_count": 3, 00:38:03.234 "transport_ack_timeout": 0, 00:38:03.234 "ctrlr_loss_timeout_sec": 0, 00:38:03.234 "reconnect_delay_sec": 0, 00:38:03.234 "fast_io_fail_timeout_sec": 0, 00:38:03.234 "disable_auto_failback": false, 00:38:03.234 "generate_uuids": false, 00:38:03.234 "transport_tos": 0, 00:38:03.234 "nvme_error_stat": false, 00:38:03.234 "rdma_srq_size": 0, 00:38:03.234 "io_path_stat": false, 00:38:03.234 12:15:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:03.234 "allow_accel_sequence": false, 00:38:03.234 "rdma_max_cq_size": 0, 00:38:03.234 "rdma_cm_event_timeout_ms": 0, 00:38:03.234 "dhchap_digests": [ 00:38:03.234 "sha256", 00:38:03.234 "sha384", 00:38:03.234 "sha512" 00:38:03.234 ], 00:38:03.234 "dhchap_dhgroups": [ 00:38:03.234 "null", 00:38:03.234 "ffdhe2048", 00:38:03.234 "ffdhe3072", 00:38:03.234 "ffdhe4096", 00:38:03.234 "ffdhe6144", 00:38:03.234 "ffdhe8192" 00:38:03.234 ] 00:38:03.234 } 00:38:03.234 }, 00:38:03.234 { 00:38:03.234 "method": "bdev_nvme_attach_controller", 00:38:03.234 "params": { 00:38:03.234 "name": "nvme0", 00:38:03.234 "trtype": "TCP", 00:38:03.234 "adrfam": "IPv4", 00:38:03.234 "traddr": "127.0.0.1", 00:38:03.234 "trsvcid": "4420", 00:38:03.234 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:03.234 "prchk_reftag": false, 00:38:03.234 "prchk_guard": false, 00:38:03.234 "ctrlr_loss_timeout_sec": 0, 00:38:03.234 "reconnect_delay_sec": 0, 00:38:03.234 "fast_io_fail_timeout_sec": 0, 00:38:03.234 "psk": "key0", 00:38:03.234 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:03.234 "hdgst": false, 00:38:03.234 "ddgst": false, 00:38:03.234 "multipath": "multipath" 00:38:03.234 } 00:38:03.234 }, 00:38:03.234 { 00:38:03.234 "method": "bdev_nvme_set_hotplug", 00:38:03.234 "params": { 00:38:03.234 "period_us": 100000, 00:38:03.234 "enable": false 
00:38:03.234 } 00:38:03.234 }, 00:38:03.234 { 00:38:03.234 "method": "bdev_wait_for_examine" 00:38:03.234 } 00:38:03.234 ] 00:38:03.234 }, 00:38:03.234 { 00:38:03.234 "subsystem": "nbd", 00:38:03.234 "config": [] 00:38:03.234 } 00:38:03.234 ] 00:38:03.234 }' 00:38:03.495 [2024-10-11 12:15:05.952782] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:38:03.495 [2024-10-11 12:15:05.952838] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2263446 ] 00:38:03.495 [2024-10-11 12:15:06.028677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:03.495 [2024-10-11 12:15:06.058090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:03.755 [2024-10-11 12:15:06.201051] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:04.326 12:15:06 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:04.326 12:15:06 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:38:04.326 12:15:06 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:38:04.326 12:15:06 keyring_file -- keyring/file.sh@121 -- # jq length 00:38:04.326 12:15:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:04.326 12:15:06 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:38:04.326 12:15:06 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:38:04.326 12:15:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:04.326 12:15:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:04.326 12:15:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:04.326 12:15:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:04.326 12:15:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:04.619 12:15:07 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:38:04.619 12:15:07 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:38:04.619 12:15:07 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:04.619 12:15:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:04.619 12:15:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:04.619 12:15:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:04.619 12:15:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:04.619 12:15:07 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:38:04.619 12:15:07 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:38:04.619 12:15:07 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:38:04.619 12:15:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:38:04.914 12:15:07 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:38:04.914 12:15:07 keyring_file -- keyring/file.sh@1 -- # cleanup 00:38:04.914 12:15:07 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.FnEmzb4Dfi 
/tmp/tmp.KMN3Y5G3QZ 00:38:04.914 12:15:07 keyring_file -- keyring/file.sh@20 -- # killprocess 2263446 00:38:04.914 12:15:07 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 2263446 ']' 00:38:04.914 12:15:07 keyring_file -- common/autotest_common.sh@954 -- # kill -0 2263446 00:38:04.914 12:15:07 keyring_file -- common/autotest_common.sh@955 -- # uname 00:38:04.914 12:15:07 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:04.914 12:15:07 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2263446 00:38:04.914 12:15:07 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:04.914 12:15:07 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:04.914 12:15:07 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2263446' 00:38:04.914 killing process with pid 2263446 00:38:04.914 12:15:07 keyring_file -- common/autotest_common.sh@969 -- # kill 2263446 00:38:04.914 Received shutdown signal, test time was about 1.000000 seconds 00:38:04.914 00:38:04.914 Latency(us) 00:38:04.914 [2024-10-11T10:15:07.617Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:04.914 [2024-10-11T10:15:07.617Z] =================================================================================================================== 00:38:04.914 [2024-10-11T10:15:07.617Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:38:04.914 12:15:07 keyring_file -- common/autotest_common.sh@974 -- # wait 2263446 00:38:04.914 12:15:07 keyring_file -- keyring/file.sh@21 -- # killprocess 2261490 00:38:04.914 12:15:07 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 2261490 ']' 00:38:04.914 12:15:07 keyring_file -- common/autotest_common.sh@954 -- # kill -0 2261490 00:38:04.914 12:15:07 keyring_file -- common/autotest_common.sh@955 -- # uname 00:38:04.914 12:15:07 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:04.914 12:15:07 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2261490 00:38:05.176 12:15:07 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:05.176 12:15:07 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:05.176 12:15:07 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2261490' 00:38:05.176 killing process with pid 2261490 00:38:05.176 12:15:07 keyring_file -- common/autotest_common.sh@969 -- # kill 2261490 00:38:05.176 12:15:07 keyring_file -- common/autotest_common.sh@974 -- # wait 2261490 00:38:05.176 00:38:05.176 real 0m11.995s 00:38:05.176 user 0m29.097s 00:38:05.176 sys 0m2.643s 00:38:05.176 12:15:07 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:05.176 12:15:07 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:05.176 ************************************ 00:38:05.176 END TEST keyring_file 00:38:05.176 ************************************ 00:38:05.438 12:15:07 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:38:05.438 12:15:07 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:05.438 12:15:07 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:38:05.438 12:15:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:05.438 12:15:07 -- common/autotest_common.sh@10 -- # set +x 
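
That closes out keyring_file: the second bdevperf instance was configured entirely from the JSON dumped by save_config (keyring keys, sock options, and the bdev_nvme_attach_controller with psk key0) and handed in over -c /dev/fd/63, so none of the per-key rpc.py setup had to be repeated before perform_tests. A rough stand-alone version of that pattern, using a regular file instead of the process substitution and the same paths and flags as the run above (first_bperfpid is just a placeholder for the first instance's pid):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# 1) Capture the live config of the first bdevperf (keys + attached controller included),
#    then stop it -- both instances would otherwise fight over /var/tmp/bperf.sock.
"$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock save_config > /tmp/bperf_config.json
kill "$first_bperfpid"   # placeholder: pid of the first bdevperf instance

# 2) Start a fresh bdevperf that replays the config; -z defers I/O until perform_tests.
"$SPDK/build/examples/bdevperf" -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z -c /tmp/bperf_config.json &

# 3) Wait for the RPC socket (crude stand-in for the harness's waitforlisten), then run the I/O.
while [ ! -S /var/tmp/bperf.sock ]; do sleep 0.1; done
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests
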
00:38:05.438 ************************************ 00:38:05.438 START TEST keyring_linux 00:38:05.438 ************************************ 00:38:05.438 12:15:07 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:05.438 Joined session keyring: 762101666 00:38:05.438 * Looking for test storage... 00:38:05.438 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:05.438 12:15:08 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:05.438 12:15:08 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:38:05.438 12:15:08 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:05.438 12:15:08 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:05.438 12:15:08 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:05.438 12:15:08 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:05.438 12:15:08 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:05.438 12:15:08 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:38:05.438 12:15:08 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:38:05.438 12:15:08 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:38:05.438 12:15:08 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:38:05.438 12:15:08 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:38:05.438 12:15:08 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:38:05.438 12:15:08 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:38:05.438 12:15:08 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:05.438 12:15:08 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:38:05.438 12:15:08 keyring_linux -- scripts/common.sh@345 -- # : 1 00:38:05.438 12:15:08 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:05.438 12:15:08 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:05.438 12:15:08 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:38:05.438 12:15:08 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:38:05.438 12:15:08 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:05.438 12:15:08 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:38:05.438 12:15:08 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:38:05.438 12:15:08 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:38:05.438 12:15:08 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:38:05.438 12:15:08 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:05.438 12:15:08 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:38:05.438 12:15:08 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:38:05.438 12:15:08 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:05.438 12:15:08 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:05.438 12:15:08 keyring_linux -- scripts/common.sh@368 -- # return 0 00:38:05.438 12:15:08 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:05.438 12:15:08 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:05.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:05.438 --rc genhtml_branch_coverage=1 00:38:05.438 --rc genhtml_function_coverage=1 00:38:05.438 --rc genhtml_legend=1 00:38:05.438 --rc geninfo_all_blocks=1 00:38:05.438 --rc geninfo_unexecuted_blocks=1 00:38:05.438 00:38:05.438 ' 00:38:05.438 12:15:08 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:05.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:05.438 --rc genhtml_branch_coverage=1 00:38:05.438 --rc genhtml_function_coverage=1 00:38:05.438 --rc genhtml_legend=1 00:38:05.438 --rc geninfo_all_blocks=1 00:38:05.438 --rc geninfo_unexecuted_blocks=1 00:38:05.438 00:38:05.438 ' 00:38:05.438 12:15:08 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:05.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:05.438 --rc genhtml_branch_coverage=1 00:38:05.438 --rc genhtml_function_coverage=1 00:38:05.438 --rc genhtml_legend=1 00:38:05.438 --rc geninfo_all_blocks=1 00:38:05.438 --rc geninfo_unexecuted_blocks=1 00:38:05.438 00:38:05.438 ' 00:38:05.438 12:15:08 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:05.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:05.438 --rc genhtml_branch_coverage=1 00:38:05.438 --rc genhtml_function_coverage=1 00:38:05.438 --rc genhtml_legend=1 00:38:05.438 --rc geninfo_all_blocks=1 00:38:05.438 --rc geninfo_unexecuted_blocks=1 00:38:05.438 00:38:05.438 ' 00:38:05.438 12:15:08 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:05.438 12:15:08 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:05.438 12:15:08 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:38:05.438 12:15:08 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:05.438 12:15:08 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:05.438 12:15:08 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:05.438 12:15:08 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:05.438 12:15:08 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:38:05.438 12:15:08 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:05.438 12:15:08 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:05.438 12:15:08 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:05.438 12:15:08 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:05.438 12:15:08 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:05.701 12:15:08 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:05.701 12:15:08 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:05.701 12:15:08 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:05.701 12:15:08 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:05.701 12:15:08 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:05.701 12:15:08 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:05.701 12:15:08 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:05.701 12:15:08 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:38:05.701 12:15:08 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:05.701 12:15:08 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:05.701 12:15:08 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:05.701 12:15:08 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:05.701 12:15:08 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:05.701 12:15:08 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:05.701 12:15:08 keyring_linux -- paths/export.sh@5 -- # export PATH 00:38:05.701 12:15:08 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
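The key material for this test is prepared in the trace that follows: keyring/common.sh formats each raw hex key into the NVMe TLS interchange representation, writes it to /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1 with mode 0600, and keyring/linux.sh then loads the same strings into the kernel session keyring. Condensed into manual steps, and using the PSK string printed later in this log, the equivalent would be roughly the following (printf stands in for the python helper the script actually pipes the key through, so this is a sketch rather than the script's exact code path):

    # Interchange-format PSK produced by format_interchange_psk for key0
    # (raw key 00112233445566778899aabbccddeeff, hash digest 0); the string
    # is the one printed further down in this trace.
    psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'

    # File-backed form, as prep_key writes it.
    printf '%s' "$psk" > /tmp/:spdk-test:key0
    chmod 0600 /tmp/:spdk-test:key0

    # Kernel-keyring form, as keyring/linux.sh loads it into the session
    # keyring before the TLS attach.
    keyctl add user :spdk-test:key0 "$psk" @s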
00:38:05.701 12:15:08 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:38:05.701 12:15:08 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:05.701 12:15:08 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:05.701 12:15:08 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:05.701 12:15:08 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:05.701 12:15:08 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:05.701 12:15:08 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:05.701 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:05.701 12:15:08 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:05.701 12:15:08 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:05.701 12:15:08 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:05.701 12:15:08 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:05.701 12:15:08 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:05.701 12:15:08 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:05.701 12:15:08 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:38:05.701 12:15:08 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:38:05.701 12:15:08 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:38:05.701 12:15:08 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:38:05.701 12:15:08 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:05.701 12:15:08 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:38:05.701 12:15:08 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:05.701 12:15:08 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:05.701 12:15:08 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:38:05.701 12:15:08 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:05.701 12:15:08 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:05.701 12:15:08 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:38:05.701 12:15:08 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:38:05.701 12:15:08 keyring_linux -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:38:05.701 12:15:08 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:38:05.701 12:15:08 keyring_linux -- nvmf/common.sh@731 -- # python - 00:38:05.701 12:15:08 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:38:05.701 12:15:08 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:38:05.701 /tmp/:spdk-test:key0 00:38:05.701 12:15:08 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:38:05.701 12:15:08 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:05.701 12:15:08 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:38:05.701 12:15:08 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:05.701 12:15:08 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:05.701 12:15:08 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:38:05.701 
12:15:08 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:05.701 12:15:08 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:05.701 12:15:08 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:38:05.701 12:15:08 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:38:05.701 12:15:08 keyring_linux -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:38:05.701 12:15:08 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:38:05.701 12:15:08 keyring_linux -- nvmf/common.sh@731 -- # python - 00:38:05.701 12:15:08 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:38:05.701 12:15:08 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:38:05.701 /tmp/:spdk-test:key1 00:38:05.701 12:15:08 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2263904 00:38:05.701 12:15:08 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2263904 00:38:05.701 12:15:08 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:05.701 12:15:08 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 2263904 ']' 00:38:05.701 12:15:08 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:05.701 12:15:08 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:05.701 12:15:08 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:05.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:05.701 12:15:08 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:05.701 12:15:08 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:05.701 [2024-10-11 12:15:08.312197] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:38:05.701 [2024-10-11 12:15:08.312255] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2263904 ] 00:38:05.701 [2024-10-11 12:15:08.388861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:05.963 [2024-10-11 12:15:08.419696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:06.536 12:15:09 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:06.536 12:15:09 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:38:06.536 12:15:09 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:38:06.536 12:15:09 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:06.536 12:15:09 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:06.536 [2024-10-11 12:15:09.088892] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:06.536 null0 00:38:06.536 [2024-10-11 12:15:09.120956] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:06.536 [2024-10-11 12:15:09.121309] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:06.536 12:15:09 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:06.536 12:15:09 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:38:06.536 182746256 00:38:06.536 12:15:09 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:38:06.536 272494518 00:38:06.536 12:15:09 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2264443 00:38:06.536 12:15:09 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2264443 /var/tmp/bperf.sock 00:38:06.536 12:15:09 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:38:06.536 12:15:09 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 2264443 ']' 00:38:06.536 12:15:09 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:06.536 12:15:09 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:06.536 12:15:09 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:06.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:06.536 12:15:09 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:06.536 12:15:09 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:06.536 [2024-10-11 12:15:09.209854] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:38:06.536 [2024-10-11 12:15:09.209899] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2264443 ] 00:38:06.796 [2024-10-11 12:15:09.285273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:06.796 [2024-10-11 12:15:09.315611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:07.367 12:15:09 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:07.367 12:15:09 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:38:07.367 12:15:09 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:38:07.367 12:15:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:38:07.628 12:15:10 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:38:07.628 12:15:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:07.888 12:15:10 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:07.888 12:15:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:07.888 [2024-10-11 12:15:10.532045] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:08.150 nvme0n1 00:38:08.150 12:15:10 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:38:08.150 12:15:10 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:38:08.150 12:15:10 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:38:08.150 12:15:10 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:38:08.150 12:15:10 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:38:08.150 12:15:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:08.150 12:15:10 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:38:08.150 12:15:10 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:38:08.150 12:15:10 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:38:08.150 12:15:10 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:38:08.150 12:15:10 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:08.150 12:15:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:08.150 12:15:10 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:38:08.410 12:15:10 keyring_linux -- keyring/linux.sh@25 -- # sn=182746256 00:38:08.410 12:15:10 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:38:08.410 12:15:10 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:38:08.410 12:15:10 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 182746256 == \1\8\2\7\4\6\2\5\6 ]] 00:38:08.410 12:15:10 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 182746256 00:38:08.410 12:15:10 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:38:08.411 12:15:10 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:08.411 Running I/O for 1 seconds... 00:38:09.795 24405.00 IOPS, 95.33 MiB/s 00:38:09.795 Latency(us) 00:38:09.795 [2024-10-11T10:15:12.498Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:09.795 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:38:09.795 nvme0n1 : 1.01 24405.48 95.33 0.00 0.00 5229.20 4369.07 9011.20 00:38:09.795 [2024-10-11T10:15:12.498Z] =================================================================================================================== 00:38:09.795 [2024-10-11T10:15:12.498Z] Total : 24405.48 95.33 0.00 0.00 5229.20 4369.07 9011.20 00:38:09.795 { 00:38:09.795 "results": [ 00:38:09.795 { 00:38:09.795 "job": "nvme0n1", 00:38:09.795 "core_mask": "0x2", 00:38:09.795 "workload": "randread", 00:38:09.795 "status": "finished", 00:38:09.795 "queue_depth": 128, 00:38:09.795 "io_size": 4096, 00:38:09.795 "runtime": 1.005225, 00:38:09.795 "iops": 24405.48135989455, 00:38:09.795 "mibps": 95.33391156208809, 00:38:09.795 "io_failed": 0, 00:38:09.795 "io_timeout": 0, 00:38:09.795 "avg_latency_us": 5229.201066318836, 00:38:09.795 "min_latency_us": 4369.066666666667, 00:38:09.795 "max_latency_us": 9011.2 00:38:09.795 } 00:38:09.795 ], 00:38:09.795 "core_count": 1 00:38:09.795 } 00:38:09.795 12:15:12 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:09.795 12:15:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:09.795 12:15:12 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:38:09.795 12:15:12 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:38:09.795 12:15:12 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:38:09.795 12:15:12 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:38:09.795 12:15:12 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:38:09.795 12:15:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:09.795 12:15:12 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:38:09.795 12:15:12 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:38:09.795 12:15:12 keyring_linux -- keyring/linux.sh@23 -- # return 00:38:09.795 12:15:12 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:09.795 12:15:12 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:38:09.795 12:15:12 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 
00:38:09.795 12:15:12 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:38:09.795 12:15:12 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:09.795 12:15:12 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:38:09.795 12:15:12 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:09.795 12:15:12 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:09.795 12:15:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:10.057 [2024-10-11 12:15:12.610316] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:10.057 [2024-10-11 12:15:12.610728] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe69750 (107): Transport endpoint is not connected 00:38:10.057 [2024-10-11 12:15:12.611724] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe69750 (9): Bad file descriptor 00:38:10.057 [2024-10-11 12:15:12.612725] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:38:10.057 [2024-10-11 12:15:12.612736] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:10.057 [2024-10-11 12:15:12.612742] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:38:10.057 [2024-10-11 12:15:12.612748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
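The errors above are the expected negative path: linux.sh repeats the attach under the NOT wrapper with :spdk-test:key1 in place of :spdk-test:key0, and the wrapper checks for a non-zero exit. The RPC being exercised is the one from the trace, reflowed here with line continuations only for readability:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1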
00:38:10.057 request: 00:38:10.057 { 00:38:10.057 "name": "nvme0", 00:38:10.057 "trtype": "tcp", 00:38:10.057 "traddr": "127.0.0.1", 00:38:10.057 "adrfam": "ipv4", 00:38:10.057 "trsvcid": "4420", 00:38:10.057 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:10.057 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:10.057 "prchk_reftag": false, 00:38:10.057 "prchk_guard": false, 00:38:10.057 "hdgst": false, 00:38:10.057 "ddgst": false, 00:38:10.057 "psk": ":spdk-test:key1", 00:38:10.057 "allow_unrecognized_csi": false, 00:38:10.057 "method": "bdev_nvme_attach_controller", 00:38:10.057 "req_id": 1 00:38:10.057 } 00:38:10.057 Got JSON-RPC error response 00:38:10.057 response: 00:38:10.057 { 00:38:10.057 "code": -5, 00:38:10.057 "message": "Input/output error" 00:38:10.057 } 00:38:10.057 12:15:12 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:38:10.057 12:15:12 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:10.057 12:15:12 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:10.057 12:15:12 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:10.057 12:15:12 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:38:10.057 12:15:12 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:38:10.057 12:15:12 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:38:10.057 12:15:12 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:38:10.057 12:15:12 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:38:10.057 12:15:12 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:38:10.057 12:15:12 keyring_linux -- keyring/linux.sh@33 -- # sn=182746256 00:38:10.057 12:15:12 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 182746256 00:38:10.057 1 links removed 00:38:10.057 12:15:12 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:38:10.057 12:15:12 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:38:10.057 12:15:12 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:38:10.057 12:15:12 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:38:10.057 12:15:12 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:38:10.057 12:15:12 keyring_linux -- keyring/linux.sh@33 -- # sn=272494518 00:38:10.057 12:15:12 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 272494518 00:38:10.057 1 links removed 00:38:10.057 12:15:12 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2264443 00:38:10.057 12:15:12 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 2264443 ']' 00:38:10.057 12:15:12 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 2264443 00:38:10.057 12:15:12 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:38:10.057 12:15:12 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:10.057 12:15:12 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2264443 00:38:10.057 12:15:12 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:10.057 12:15:12 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:10.057 12:15:12 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2264443' 00:38:10.057 killing process with pid 2264443 00:38:10.057 12:15:12 keyring_linux -- common/autotest_common.sh@969 -- # kill 2264443 00:38:10.057 Received shutdown signal, test time was about 1.000000 seconds 00:38:10.057 00:38:10.057 
Latency(us) 00:38:10.057 [2024-10-11T10:15:12.760Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:10.057 [2024-10-11T10:15:12.760Z] =================================================================================================================== 00:38:10.057 [2024-10-11T10:15:12.760Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:10.057 12:15:12 keyring_linux -- common/autotest_common.sh@974 -- # wait 2264443 00:38:10.318 12:15:12 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2263904 00:38:10.318 12:15:12 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 2263904 ']' 00:38:10.318 12:15:12 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 2263904 00:38:10.318 12:15:12 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:38:10.318 12:15:12 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:10.318 12:15:12 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2263904 00:38:10.318 12:15:12 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:10.318 12:15:12 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:10.318 12:15:12 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2263904' 00:38:10.318 killing process with pid 2263904 00:38:10.318 12:15:12 keyring_linux -- common/autotest_common.sh@969 -- # kill 2263904 00:38:10.318 12:15:12 keyring_linux -- common/autotest_common.sh@974 -- # wait 2263904 00:38:10.579 00:38:10.579 real 0m5.137s 00:38:10.579 user 0m9.506s 00:38:10.579 sys 0m1.465s 00:38:10.579 12:15:13 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:10.579 12:15:13 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:10.579 ************************************ 00:38:10.579 END TEST keyring_linux 00:38:10.579 ************************************ 00:38:10.579 12:15:13 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:38:10.579 12:15:13 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:38:10.579 12:15:13 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:38:10.579 12:15:13 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:38:10.579 12:15:13 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:38:10.579 12:15:13 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:38:10.579 12:15:13 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:38:10.579 12:15:13 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:38:10.579 12:15:13 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:38:10.579 12:15:13 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:38:10.579 12:15:13 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:38:10.579 12:15:13 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:38:10.579 12:15:13 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:38:10.579 12:15:13 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:38:10.579 12:15:13 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:38:10.579 12:15:13 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:38:10.579 12:15:13 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:38:10.579 12:15:13 -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:10.579 12:15:13 -- common/autotest_common.sh@10 -- # set +x 00:38:10.579 12:15:13 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:38:10.579 12:15:13 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:38:10.579 12:15:13 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:38:10.579 12:15:13 -- common/autotest_common.sh@10 -- # set +x 00:38:18.727 INFO: APP EXITING 
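For reference, the keyring cleanup earlier in this test (the "1 links removed" lines) resolves each test key to its serial number and unlinks it from the session keyring rather than removing it by name; condensed, the per-key sequence is the following sketch (the sn variable name is illustrative):

    # Resolve the serial number of the test key in the session keyring,
    # then drop it, as unlink_key does for :spdk-test:key0 and :spdk-test:key1;
    # keyctl prints "1 links removed" on success.
    sn=$(keyctl search @s user :spdk-test:key0)
    keyctl unlink "$sn"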
00:38:18.727 INFO: killing all VMs 00:38:18.727 INFO: killing vhost app 00:38:18.727 INFO: EXIT DONE 00:38:22.030 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:38:22.030 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:38:22.030 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:38:22.030 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:38:22.030 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:38:22.030 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:38:22.030 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:38:22.030 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:38:22.030 0000:65:00.0 (144d a80a): Already using the nvme driver 00:38:22.030 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:38:22.030 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:38:22.030 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:38:22.030 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:38:22.031 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:38:22.031 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:38:22.291 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:38:22.291 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:38:26.499 Cleaning 00:38:26.499 Removing: /var/run/dpdk/spdk0/config 00:38:26.499 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:38:26.500 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:38:26.500 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:38:26.500 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:38:26.500 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:38:26.500 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:38:26.500 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:38:26.500 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:38:26.500 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:38:26.500 Removing: /var/run/dpdk/spdk0/hugepage_info 00:38:26.500 Removing: /var/run/dpdk/spdk1/config 00:38:26.500 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:38:26.500 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:38:26.500 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:38:26.500 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:38:26.500 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:38:26.500 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:38:26.500 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:38:26.500 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:38:26.500 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:38:26.500 Removing: /var/run/dpdk/spdk1/hugepage_info 00:38:26.500 Removing: /var/run/dpdk/spdk2/config 00:38:26.500 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:38:26.500 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:38:26.500 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:38:26.500 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:38:26.500 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:38:26.500 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:38:26.500 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:38:26.500 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:38:26.500 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:38:26.500 Removing: /var/run/dpdk/spdk2/hugepage_info 00:38:26.500 Removing: /var/run/dpdk/spdk3/config 00:38:26.500 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:38:26.500 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:38:26.500 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:38:26.500 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:38:26.500 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:38:26.500 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:38:26.500 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:38:26.500 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:38:26.500 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:38:26.500 Removing: /var/run/dpdk/spdk3/hugepage_info 00:38:26.500 Removing: /var/run/dpdk/spdk4/config 00:38:26.500 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:38:26.500 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:38:26.500 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:38:26.500 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:38:26.500 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:38:26.500 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:38:26.500 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:38:26.500 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:38:26.500 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:38:26.500 Removing: /var/run/dpdk/spdk4/hugepage_info 00:38:26.500 Removing: /dev/shm/bdev_svc_trace.1 00:38:26.500 Removing: /dev/shm/nvmf_trace.0 00:38:26.500 Removing: /dev/shm/spdk_tgt_trace.pid1684360 00:38:26.500 Removing: /var/run/dpdk/spdk0 00:38:26.500 Removing: /var/run/dpdk/spdk1 00:38:26.500 Removing: /var/run/dpdk/spdk2 00:38:26.500 Removing: /var/run/dpdk/spdk3 00:38:26.500 Removing: /var/run/dpdk/spdk4 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1682873 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1684360 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1685208 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1686249 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1686589 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1687658 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1687970 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1688139 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1689269 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1690055 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1690447 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1690812 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1691164 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1691479 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1691701 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1692049 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1692439 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1693508 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1697096 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1697439 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1697726 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1697844 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1698259 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1698550 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1698921 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1699105 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1699343 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1699694 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1699895 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1700116 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1700565 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1700920 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1701319 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1706429 00:38:26.500 Removing: 
/var/run/dpdk/spdk_pid1711804 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1724081 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1724922 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1730119 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1730604 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1735894 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1743048 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1746335 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1759691 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1770873 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1772898 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1774093 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1795211 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1800244 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1857362 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1864048 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1871843 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1879126 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1879128 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1880132 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1881139 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1882148 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1882815 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1882872 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1883157 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1883368 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1883491 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1884496 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1885502 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1886509 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1887179 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1887184 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1887518 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1888954 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1890136 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1900090 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1934947 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1940423 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1942424 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1944682 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1944877 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1945133 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1945473 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1946194 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1948365 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1949725 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1950185 00:38:26.500 Removing: /var/run/dpdk/spdk_pid1953268 00:38:26.761 Removing: /var/run/dpdk/spdk_pid1953983 00:38:26.761 Removing: /var/run/dpdk/spdk_pid1954959 00:38:26.761 Removing: /var/run/dpdk/spdk_pid1960054 00:38:26.761 Removing: /var/run/dpdk/spdk_pid1966872 00:38:26.761 Removing: /var/run/dpdk/spdk_pid1966874 00:38:26.761 Removing: /var/run/dpdk/spdk_pid1966876 00:38:26.761 Removing: /var/run/dpdk/spdk_pid1971660 00:38:26.761 Removing: /var/run/dpdk/spdk_pid1982028 00:38:26.761 Removing: /var/run/dpdk/spdk_pid1986852 00:38:26.761 Removing: /var/run/dpdk/spdk_pid1994456 00:38:26.761 Removing: /var/run/dpdk/spdk_pid1995959 00:38:26.761 Removing: /var/run/dpdk/spdk_pid1997537 00:38:26.761 Removing: /var/run/dpdk/spdk_pid1999322 00:38:26.761 Removing: /var/run/dpdk/spdk_pid2005719 00:38:26.761 Removing: /var/run/dpdk/spdk_pid2010830 00:38:26.761 Removing: /var/run/dpdk/spdk_pid2020056 00:38:26.761 Removing: /var/run/dpdk/spdk_pid2020059 00:38:26.761 Removing: /var/run/dpdk/spdk_pid2025411 00:38:26.761 Removing: /var/run/dpdk/spdk_pid2025529 00:38:26.761 Removing: 
/var/run/dpdk/spdk_pid2025840 00:38:26.761 Removing: /var/run/dpdk/spdk_pid2026507 00:38:26.761 Removing: /var/run/dpdk/spdk_pid2026512 00:38:26.761 Removing: /var/run/dpdk/spdk_pid2032037 00:38:26.761 Removing: /var/run/dpdk/spdk_pid2032782 00:38:26.761 Removing: /var/run/dpdk/spdk_pid2038281 00:38:26.761 Removing: /var/run/dpdk/spdk_pid2041419 00:38:26.761 Removing: /var/run/dpdk/spdk_pid2048109 00:38:26.761 Removing: /var/run/dpdk/spdk_pid2054740 00:38:26.761 Removing: /var/run/dpdk/spdk_pid2065497 00:38:26.761 Removing: /var/run/dpdk/spdk_pid2074352 00:38:26.761 Removing: /var/run/dpdk/spdk_pid2074354 00:38:26.761 Removing: /var/run/dpdk/spdk_pid2097767 00:38:26.761 Removing: /var/run/dpdk/spdk_pid2098466 00:38:26.761 Removing: /var/run/dpdk/spdk_pid2099324 00:38:26.761 Removing: /var/run/dpdk/spdk_pid2099874 00:38:26.761 Removing: /var/run/dpdk/spdk_pid2100878 00:38:26.761 Removing: /var/run/dpdk/spdk_pid2101666 00:38:26.761 Removing: /var/run/dpdk/spdk_pid2102477 00:38:26.761 Removing: /var/run/dpdk/spdk_pid2103001 00:38:26.761 Removing: /var/run/dpdk/spdk_pid2108381 00:38:26.761 Removing: /var/run/dpdk/spdk_pid2108807 00:38:26.761 Removing: /var/run/dpdk/spdk_pid2116518 00:38:26.761 Removing: /var/run/dpdk/spdk_pid2116727 00:38:26.761 Removing: /var/run/dpdk/spdk_pid2123369 00:38:26.761 Removing: /var/run/dpdk/spdk_pid2128675 00:38:26.761 Removing: /var/run/dpdk/spdk_pid2140355 00:38:26.761 Removing: /var/run/dpdk/spdk_pid2141028 00:38:26.761 Removing: /var/run/dpdk/spdk_pid2146150 00:38:26.761 Removing: /var/run/dpdk/spdk_pid2146520 00:38:26.761 Removing: /var/run/dpdk/spdk_pid2151710 00:38:26.761 Removing: /var/run/dpdk/spdk_pid2158712 00:38:26.761 Removing: /var/run/dpdk/spdk_pid2162296 00:38:26.761 Removing: /var/run/dpdk/spdk_pid2174635 00:38:26.761 Removing: /var/run/dpdk/spdk_pid2185644 00:38:26.761 Removing: /var/run/dpdk/spdk_pid2187597 00:38:26.761 Removing: /var/run/dpdk/spdk_pid2188678 00:38:26.761 Removing: /var/run/dpdk/spdk_pid2208633 00:38:26.761 Removing: /var/run/dpdk/spdk_pid2213546 00:38:26.761 Removing: /var/run/dpdk/spdk_pid2217433 00:38:26.761 Removing: /var/run/dpdk/spdk_pid2225195 00:38:27.022 Removing: /var/run/dpdk/spdk_pid2225272 00:38:27.022 Removing: /var/run/dpdk/spdk_pid2231244 00:38:27.022 Removing: /var/run/dpdk/spdk_pid2233640 00:38:27.023 Removing: /var/run/dpdk/spdk_pid2235964 00:38:27.023 Removing: /var/run/dpdk/spdk_pid2237274 00:38:27.023 Removing: /var/run/dpdk/spdk_pid2239672 00:38:27.023 Removing: /var/run/dpdk/spdk_pid2241145 00:38:27.023 Removing: /var/run/dpdk/spdk_pid2251254 00:38:27.023 Removing: /var/run/dpdk/spdk_pid2251916 00:38:27.023 Removing: /var/run/dpdk/spdk_pid2252582 00:38:27.023 Removing: /var/run/dpdk/spdk_pid2255536 00:38:27.023 Removing: /var/run/dpdk/spdk_pid2255990 00:38:27.023 Removing: /var/run/dpdk/spdk_pid2256585 00:38:27.023 Removing: /var/run/dpdk/spdk_pid2261490 00:38:27.023 Removing: /var/run/dpdk/spdk_pid2261529 00:38:27.023 Removing: /var/run/dpdk/spdk_pid2263446 00:38:27.023 Removing: /var/run/dpdk/spdk_pid2263904 00:38:27.023 Removing: /var/run/dpdk/spdk_pid2264443 00:38:27.023 Clean 00:38:27.023 12:15:29 -- common/autotest_common.sh@1451 -- # return 0 00:38:27.023 12:15:29 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:38:27.023 12:15:29 -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:27.023 12:15:29 -- common/autotest_common.sh@10 -- # set +x 00:38:27.023 12:15:29 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:38:27.023 12:15:29 -- common/autotest_common.sh@730 -- # 
xtrace_disable 00:38:27.023 12:15:29 -- common/autotest_common.sh@10 -- # set +x 00:38:27.023 12:15:29 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:38:27.023 12:15:29 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:38:27.023 12:15:29 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:38:27.023 12:15:29 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:38:27.023 12:15:29 -- spdk/autotest.sh@394 -- # hostname 00:38:27.284 12:15:29 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-12 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:38:27.284 geninfo: WARNING: invalid characters removed from testname! 00:38:53.866 12:15:55 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:55.775 12:15:58 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:57.688 12:16:00 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:00.229 12:16:02 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:01.609 12:16:04 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:02.990 12:16:05 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc 
geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:04.901 12:16:07 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:39:04.901 12:16:07 -- common/autotest_common.sh@1690 -- $ [[ y == y ]] 00:39:04.901 12:16:07 -- common/autotest_common.sh@1691 -- $ lcov --version 00:39:04.901 12:16:07 -- common/autotest_common.sh@1691 -- $ awk '{print $NF}' 00:39:04.901 12:16:07 -- common/autotest_common.sh@1691 -- $ lt 1.15 2 00:39:04.901 12:16:07 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:39:04.901 12:16:07 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:39:04.901 12:16:07 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:39:04.901 12:16:07 -- scripts/common.sh@336 -- $ IFS=.-: 00:39:04.901 12:16:07 -- scripts/common.sh@336 -- $ read -ra ver1 00:39:04.901 12:16:07 -- scripts/common.sh@337 -- $ IFS=.-: 00:39:04.901 12:16:07 -- scripts/common.sh@337 -- $ read -ra ver2 00:39:04.901 12:16:07 -- scripts/common.sh@338 -- $ local 'op=<' 00:39:04.901 12:16:07 -- scripts/common.sh@340 -- $ ver1_l=2 00:39:04.901 12:16:07 -- scripts/common.sh@341 -- $ ver2_l=1 00:39:04.901 12:16:07 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:39:04.901 12:16:07 -- scripts/common.sh@344 -- $ case "$op" in 00:39:04.901 12:16:07 -- scripts/common.sh@345 -- $ : 1 00:39:04.901 12:16:07 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:39:04.901 12:16:07 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:04.901 12:16:07 -- scripts/common.sh@365 -- $ decimal 1 00:39:04.901 12:16:07 -- scripts/common.sh@353 -- $ local d=1 00:39:04.901 12:16:07 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:39:04.901 12:16:07 -- scripts/common.sh@355 -- $ echo 1 00:39:04.901 12:16:07 -- scripts/common.sh@365 -- $ ver1[v]=1 00:39:04.901 12:16:07 -- scripts/common.sh@366 -- $ decimal 2 00:39:04.901 12:16:07 -- scripts/common.sh@353 -- $ local d=2 00:39:04.901 12:16:07 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:39:04.901 12:16:07 -- scripts/common.sh@355 -- $ echo 2 00:39:04.901 12:16:07 -- scripts/common.sh@366 -- $ ver2[v]=2 00:39:04.901 12:16:07 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:39:04.901 12:16:07 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:39:04.901 12:16:07 -- scripts/common.sh@368 -- $ return 0 00:39:04.901 12:16:07 -- common/autotest_common.sh@1692 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:04.901 12:16:07 -- common/autotest_common.sh@1704 -- $ export 'LCOV_OPTS= 00:39:04.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:04.901 --rc genhtml_branch_coverage=1 00:39:04.901 --rc genhtml_function_coverage=1 00:39:04.901 --rc genhtml_legend=1 00:39:04.901 --rc geninfo_all_blocks=1 00:39:04.901 --rc geninfo_unexecuted_blocks=1 00:39:04.901 00:39:04.901 ' 00:39:04.901 12:16:07 -- common/autotest_common.sh@1704 -- $ LCOV_OPTS=' 00:39:04.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:04.901 --rc genhtml_branch_coverage=1 00:39:04.901 --rc genhtml_function_coverage=1 00:39:04.901 --rc genhtml_legend=1 00:39:04.901 --rc geninfo_all_blocks=1 00:39:04.901 --rc geninfo_unexecuted_blocks=1 00:39:04.901 00:39:04.901 ' 00:39:04.901 12:16:07 -- common/autotest_common.sh@1705 -- $ export 'LCOV=lcov 00:39:04.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:39:04.901 --rc genhtml_branch_coverage=1 00:39:04.901 --rc genhtml_function_coverage=1 00:39:04.901 --rc genhtml_legend=1 00:39:04.901 --rc geninfo_all_blocks=1 00:39:04.901 --rc geninfo_unexecuted_blocks=1 00:39:04.901 00:39:04.901 ' 00:39:04.901 12:16:07 -- common/autotest_common.sh@1705 -- $ LCOV='lcov 00:39:04.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:04.901 --rc genhtml_branch_coverage=1 00:39:04.901 --rc genhtml_function_coverage=1 00:39:04.901 --rc genhtml_legend=1 00:39:04.901 --rc geninfo_all_blocks=1 00:39:04.901 --rc geninfo_unexecuted_blocks=1 00:39:04.901 00:39:04.901 ' 00:39:04.901 12:16:07 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:04.901 12:16:07 -- scripts/common.sh@15 -- $ shopt -s extglob 00:39:04.901 12:16:07 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:39:04.901 12:16:07 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:04.901 12:16:07 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:04.901 12:16:07 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:04.901 12:16:07 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:04.901 12:16:07 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:04.901 12:16:07 -- paths/export.sh@5 -- $ export PATH 00:39:04.901 12:16:07 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:04.901 12:16:07 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:39:04.901 12:16:07 -- common/autobuild_common.sh@486 -- $ date +%s 00:39:04.901 12:16:07 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728641767.XXXXXX 00:39:04.901 12:16:07 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728641767.q8RnIK 00:39:04.901 12:16:07 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:39:04.901 12:16:07 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:39:04.901 12:16:07 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:39:04.901 12:16:07 
00:39:04.901 12:16:07 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:39:04.901 12:16:07 -- common/autobuild_common.sh@486 -- $ date +%s
00:39:04.901 12:16:07 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728641767.XXXXXX
00:39:04.901 12:16:07 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728641767.q8RnIK
00:39:04.901 12:16:07 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:39:04.901 12:16:07 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:39:04.901 12:16:07 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:39:04.901 12:16:07 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:39:04.901 12:16:07 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:39:04.901 12:16:07 -- common/autobuild_common.sh@502 -- $ get_config_params
00:39:04.901 12:16:07 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:39:04.901 12:16:07 -- common/autotest_common.sh@10 -- $ set +x
00:39:04.901 12:16:07 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:39:04.901 12:16:07 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:39:04.901 12:16:07 -- pm/common@17 -- $ local monitor
00:39:04.901 12:16:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:39:04.901 12:16:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:39:04.901 12:16:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:39:04.901 12:16:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:39:04.901 12:16:07 -- pm/common@21 -- $ date +%s
00:39:04.901 12:16:07 -- pm/common@25 -- $ sleep 1
00:39:04.901 12:16:07 -- pm/common@21 -- $ date +%s
00:39:04.901 12:16:07 -- pm/common@21 -- $ date +%s
00:39:04.901 12:16:07 -- pm/common@21 -- $ date +%s
00:39:04.901 12:16:07 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728641767
00:39:04.901 12:16:07 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728641767
00:39:04.901 12:16:07 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728641767
00:39:04.902 12:16:07 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728641767
00:39:04.902 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728641767_collect-vmstat.pm.log
00:39:04.902 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728641767_collect-cpu-load.pm.log
00:39:04.902 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728641767_collect-cpu-temp.pm.log
00:39:05.162 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728641767_collect-bmc-pm.bmc.pm.log
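The pm/common trace above launches four background resource monitors (CPU load, CPU temperature, vmstat, and BMC power via sudo -E), each keyed by the monitor.autopackage.sh.<epoch> prefix; the Redirecting lines show where each monitor writes its log. A minimal sketch of that launch pattern follows, assuming a hypothetical SPDK_REPO variable and explicit pidfile writes; the real pm scripts take the same -d/-l/-p options but manage their own pidfiles.

# Sketch only: background-monitor launch pattern inferred from the trace above
SPDK_REPO=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk     # assumed variable name
out="$SPDK_REPO/../output/power"
stamp=$(date +%s)
for monitor in collect-cpu-load collect-cpu-temp collect-vmstat; do
    "$SPDK_REPO/scripts/perf/pm/$monitor" -d "$out" -l -p "monitor.autopackage.sh.$stamp" &
    echo $! > "$out/$monitor.pid"    # assumed; the traced scripts record their pids themselves
done
# the BMC collector needs elevated privileges in the trace
sudo -E "$SPDK_REPO/scripts/perf/pm/collect-bmc-pm" -d "$out" -l -p "monitor.autopackage.sh.$stamp" &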
00:39:06.104 12:16:08 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:39:06.104 12:16:08 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:39:06.104 12:16:08 -- spdk/autopackage.sh@14 -- $ timing_finish
00:39:06.104 12:16:08 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:39:06.104 12:16:08 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:39:06.104 12:16:08 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:39:06.104 12:16:08 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:39:06.104 12:16:08 -- pm/common@29 -- $ signal_monitor_resources TERM
00:39:06.104 12:16:08 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:39:06.104 12:16:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:39:06.104 12:16:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:39:06.104 12:16:08 -- pm/common@44 -- $ pid=2277735
00:39:06.104 12:16:08 -- pm/common@50 -- $ kill -TERM 2277735
00:39:06.104 12:16:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:39:06.104 12:16:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:39:06.104 12:16:08 -- pm/common@44 -- $ pid=2277736
00:39:06.104 12:16:08 -- pm/common@50 -- $ kill -TERM 2277736
00:39:06.104 12:16:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:39:06.104 12:16:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:39:06.104 12:16:08 -- pm/common@44 -- $ pid=2277738
00:39:06.104 12:16:08 -- pm/common@50 -- $ kill -TERM 2277738
00:39:06.104 12:16:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:39:06.104 12:16:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:39:06.104 12:16:08 -- pm/common@44 -- $ pid=2277762
00:39:06.104 12:16:08 -- pm/common@50 -- $ sudo -E kill -TERM 2277762
00:39:06.104 + [[ -n 1597308 ]]
00:39:06.104 + sudo kill 1597308
00:39:06.115 [Pipeline] }
00:39:06.129 [Pipeline] // stage
00:39:06.133 [Pipeline] }
00:39:06.147 [Pipeline] // timeout
00:39:06.151 [Pipeline] }
00:39:06.165 [Pipeline] // catchError
00:39:06.169 [Pipeline] }
00:39:06.183 [Pipeline] // wrap
00:39:06.189 [Pipeline] }
00:39:06.201 [Pipeline] // catchError
00:39:06.210 [Pipeline] stage
00:39:06.212 [Pipeline] { (Epilogue)
00:39:06.225 [Pipeline] catchError
00:39:06.226 [Pipeline] {
00:39:06.239 [Pipeline] echo
00:39:06.240 Cleanup processes
00:39:06.246 [Pipeline] sh
00:39:06.538 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:39:06.538 2277880 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:39:06.538 2278432 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:39:06.552 [Pipeline] sh
00:39:06.917 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:39:06.918 ++ grep -v 'sudo pgrep'
00:39:06.918 ++ awk '{print $1}'
00:39:06.918 + sudo kill -9 2277880
00:39:06.965 [Pipeline] sh
00:39:07.255 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:39:19.495 [Pipeline] sh
00:39:19.891 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:39:19.891 Artifacts sizes are good
00:39:19.907 [Pipeline] archiveArtifacts
00:39:19.915 Archiving artifacts
00:39:20.060 [Pipeline] sh
00:39:20.350 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
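The stop_monitor_resources trace above shuts each monitor down by checking for its pidfile under output/power and sending SIGTERM (with sudo -E for the BMC collector), and the epilogue then force-kills whatever pgrep still finds. A minimal sketch of that pidfile-driven shutdown loop follows; the monitor list and helper names are read from the trace rather than from the real pm/common.

# Sketch only: pidfile-based TERM loop matching the trace above
power_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power
for monitor in collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm; do
    pidfile="$power_dir/$monitor.pid"
    [[ -e $pidfile ]] || continue
    pid=$(<"$pidfile")
    if [[ $monitor == collect-bmc-pm ]]; then
        sudo -E kill -TERM "$pid"    # the BMC collector was started with sudo in the trace
    else
        kill -TERM "$pid"
    fi
done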
00:39:20.365 [Pipeline] cleanWs
00:39:20.376 [WS-CLEANUP] Deleting project workspace...
00:39:20.376 [WS-CLEANUP] Deferred wipeout is used...
00:39:20.384 [WS-CLEANUP] done
00:39:20.386 [Pipeline] }
00:39:20.403 [Pipeline] // catchError
00:39:20.415 [Pipeline] sh
00:39:20.703 + logger -p user.info -t JENKINS-CI
00:39:20.714 [Pipeline] }
00:39:20.727 [Pipeline] // stage
00:39:20.732 [Pipeline] }
00:39:20.746 [Pipeline] // node
00:39:20.751 [Pipeline] End of Pipeline
00:39:20.804 Finished: SUCCESS